Schema of this dump, with observed value ranges:

| Column | Type | Observed values |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-29 06:27:56 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 534 distinct values |
| tags | list | lengths 1–4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-29 06:27:11 |
| card | string | lengths 11–1.01M |

Each record below gives these fields in order, separated by `|`, with the full model card text last.
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_4_2_all_3_49
|
winnieyangwannan
| 2025-06-20T10:27:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T10:25:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
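Since the snippet has not been filled in yet, here is a minimal sketch assuming only standard 🤗 Transformers APIs (the repository tags indicate a `llama` text-generation model with a chat template):
```python
# Minimal sketch, not from the card: load the checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_4_2_all_3_49"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The "conversational" tag suggests the chat template applies.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```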
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nusnlp/JGP-Parallel-Last-ZH-EN
|
nusnlp
| 2025-06-20T10:26:38Z | 1 | 0 | null |
[
"pytorch",
"llama",
"en",
"zh",
"arxiv:2506.13044",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T06:06:48Z |
---
license: apache-2.0
language:
- en
- zh
---
# Just-Go-Parallel (Parallel Last (uni): ZH→EN)
The model repository for the "Parallel Last (uni): ZH→EN" setting of the following paper:
> **Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models**
>
> [Muhammad Reza Qorib](https://mrqorib.github.io/), [Junyi Li](https://lijunyi.tech/), and [Hwee Tou Ng](https://www.comp.nus.edu.sg/~nght/)
>
> The 63rd Annual Meeting of the Association for Computational Linguistics (to appear)
- **Paper:** [arXiv](https://arxiv.org/abs/2506.13044)
- **Codebase:** [https://github.com/nusnlp/Just-Go-Parallel/](https://github.com/nusnlp/Just-Go-Parallel/)
We use the architecture and tokenizer of [TinyLlama v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1).
Please use transformers>=4.35.
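For example, a minimal loading sketch (assuming only standard `transformers` APIs; the optional `revision` argument selects a training-step branch, as described under "Models" below):
```python
# Minimal sketch: load this checkpoint with transformers (>=4.35).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nusnlp/JGP-Parallel-Last-ZH-EN"
tokenizer = AutoTokenizer.from_pretrained(repo)
# "main" holds the best-performing checkpoint evaluated in the paper;
# other branches are named by training step.
model = AutoModelForCausalLM.from_pretrained(repo, revision="main")
```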
## Models
The main branch of the repository contains the best-performing model that was evaluated in the paper. Other checkpoints produced during training will also be hosted in this repository under different branch names (also called "revisions" on Hugging Face), with each branch name indicating the number of training steps.
* No Parallel: [nusnlp/JGP-No-Parallel](https://huggingface.co/nusnlp/JGP-No-Parallel)
* Multilingual: [nusnlp/JGP-Multilingual](https://huggingface.co/nusnlp/JGP-Multilingual)
* Parallel Non-Adjacent: [nusnlp/JGP-Parallel-Non-Adjacent](https://huggingface.co/nusnlp/JGP-Parallel-Non-Adjacent)
* Parallel First: [nusnlp/JGP-Parallel-First](https://huggingface.co/nusnlp/JGP-Parallel-First)
* Parallel Distributed: [nusnlp/JGP-Parallel-Distributed](https://huggingface.co/nusnlp/JGP-Parallel-Distributed)
* Parallel Last (all): [nusnlp/JGP-Parallel-Last-all](https://huggingface.co/nusnlp/JGP-Parallel-Last-all)
* Parallel Last (uni):
* EN→ID: [nusnlp/JGP-Parallel-Last-EN-ID](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ID)
* ID→EN: [nusnlp/JGP-Parallel-Last-ID-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ID-EN)
* EN→ZH: [nusnlp/JGP-Parallel-Last-EN-ZH](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ZH)
* ZH→EN: [nusnlp/JGP-Parallel-Last-ZH-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ZH-EN)
|
nusnlp/JGP-Parallel-Last-EN-ZH
|
nusnlp
| 2025-06-20T10:25:53Z | 2 | 0 | null |
[
"pytorch",
"llama",
"en",
"zh",
"arxiv:2506.13044",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T06:08:30Z |
---
license: apache-2.0
language:
- en
- zh
---
# Just-Go-Parallel (Parallel Last (uni): EN→ZH)
The model repository for the "Parallel Last (uni): EN→ZH" setting of the following paper:
> **Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models**
>
> [Muhammad Reza Qorib](https://mrqorib.github.io/), [Junyi Li](https://lijunyi.tech/), and [Hwee Tou Ng](https://www.comp.nus.edu.sg/~nght/)
>
> The 63rd Annual Meeting of the Association for Computational Linguistics (to appear)
- **Paper:** [arXiv](https://arxiv.org/abs/2506.13044)
- **Codebase:** [https://github.com/nusnlp/Just-Go-Parallel/](https://github.com/nusnlp/Just-Go-Parallel/)
We use the architecture and tokenizer of [TinyLlama v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1).
Please use transformers>=4.35.
## Models
The main branch of the repository contains the best-performing model that was evaluated in the paper. Other checkpoints produced during training will also be hosted in this repository under different branch names (also called "revisions" on Hugging Face), with each branch name indicating the number of training steps.
* No Parallel: [nusnlp/JGP-No-Parallel](https://huggingface.co/nusnlp/JGP-No-Parallel)
* Multilingual: [nusnlp/JGP-Multilingual](https://huggingface.co/nusnlp/JGP-Multilingual)
* Parallel Non-Adjacent: [nusnlp/JGP-Parallel-Non-Adjacent](https://huggingface.co/nusnlp/JGP-Parallel-Non-Adjacent)
* Parallel First: [nusnlp/JGP-Parallel-First](https://huggingface.co/nusnlp/JGP-Parallel-First)
* Parallel Distributed: [nusnlp/JGP-Parallel-Distributed](https://huggingface.co/nusnlp/JGP-Parallel-Distributed)
* Parallel Last (all): [nusnlp/JGP-Parallel-Last-all](https://huggingface.co/nusnlp/JGP-Parallel-Last-all)
* Parallel Last (uni):
* EN→ID: [nusnlp/JGP-Parallel-Last-EN-ID](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ID)
* ID→EN: [nusnlp/JGP-Parallel-Last-ID-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ID-EN)
* EN→ZH: [nusnlp/JGP-Parallel-Last-EN-ZH](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ZH)
* ZH→EN: [nusnlp/JGP-Parallel-Last-ZH-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ZH-EN)
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_14_2_all_3_49
|
winnieyangwannan
| 2025-06-20T10:24:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T10:22:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fnlp/MOSS-TTSD-v0
|
fnlp
| 2025-06-20T10:24:21Z | 0 | 3 | null |
[
"safetensors",
"qwen3",
"text-to-speech",
"zh",
"en",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"region:us"
] |
text-to-speech
| 2025-06-19T14:04:22Z |
---
license: apache-2.0
language:
- zh
- en
base_model:
- Qwen/Qwen3-1.7B-Base
pipeline_tag: text-to-speech
---
# MOSS-TTSD 🪐
## Overview
MOSS-TTSD (text to spoken dialogue) is an open-source bilingual spoken dialogue synthesis model that supports both Chinese and English.
It can transform dialogue scripts between two speakers into natural, expressive conversational speech.
MOSS-TTSD supports voice cloning and single-session speech generation of up to 960 seconds, making it ideal for AI podcast production.
## Highlights
- **Highly Expressive Dialogue Speech**: Built on a unified semantic-acoustic neural audio codec, a pre-trained large language model, millions of hours of TTS data, and 400k hours of synthetic and real conversational speech, MOSS-TTSD generates highly expressive, human-like dialogue speech with natural conversational prosody.
- **Two-Speaker Voice Cloning**: MOSS-TTSD supports zero-shot two-speaker voice cloning and can generate conversational speech with accurate speaker switching based on dialogue scripts.
- **Chinese-English Bilingual Support**: MOSS-TTSD enables highly expressive speech generation in both Chinese and English.
- **Long-Form Speech Generation (up to 960 seconds)**: Thanks to its low-bitrate codec and training-framework optimizations, MOSS-TTSD has been trained for long speech generation, enabling single-session speech generation of up to 960 seconds.
- **Fully Open Source & Commercial-Ready**: MOSS-TTSD and its future updates will be fully open-source and support free commercial use.
|
nnilayy/dreamer-valence-multi-classification-Kfold-5
|
nnilayy
| 2025-06-20T10:24:12Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T10:24:09Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
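As a hypothetical sketch of the loading pattern (the real model class is not published on this card; the class below is a stand-in that must match the checkpoint for loading to actually succeed):
```python
# Hypothetical sketch: "DreamerClassifier" stands in for the authors' real
# nn.Module subclass; PyTorchModelHubMixin adds from_pretrained/push_to_hub.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class DreamerClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.head = nn.LazyLinear(num_classes)

    def forward(self, x):
        return self.head(x)

model = DreamerClassifier.from_pretrained(
    "nnilayy/dreamer-valence-multi-classification-Kfold-5"
)
```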
|
nusnlp/JGP-Parallel-Last-EN-ID
|
nusnlp
| 2025-06-20T10:23:26Z | 1 | 0 | null |
[
"pytorch",
"llama",
"en",
"id",
"arxiv:2506.13044",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T06:08:56Z |
---
license: apache-2.0
language:
- en
- id
---
# Just-Go-Parallel (Parallel Last (uni): EN→ID)
The model repository for the "Parallel Last (uni): EN→ID" setting of the following paper:
> **Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models**
>
> [Muhammad Reza Qorib](https://mrqorib.github.io/), [Junyi Li](https://lijunyi.tech/), and [Hwee Tou Ng](https://www.comp.nus.edu.sg/~nght/)
>
> The 63rd Annual Meeting of the Association for Computational Linguistics (to appear)
- **Paper:** [arXiv](https://arxiv.org/abs/2506.13044)
- **Codebase:** [https://github.com/nusnlp/Just-Go-Parallel/](https://github.com/nusnlp/Just-Go-Parallel/)
We use the architecture and tokenizer of [TinyLlama v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1).
Please use transformers>=4.35.
## Models
The main branch of the repository contains the best-performing model that was evaluated in the paper. Other checkpoints produced during training will also be hosted in this repository under different branch names (also called "revisions" on Hugging Face), with each branch name indicating the number of training steps.
* No Parallel: [nusnlp/JGP-No-Parallel](https://huggingface.co/nusnlp/JGP-No-Parallel)
* Multilingual: [nusnlp/JGP-Multilingual](https://huggingface.co/nusnlp/JGP-Multilingual)
* Parallel Non-Adjacent: [nusnlp/JGP-Parallel-Non-Adjacent](https://huggingface.co/nusnlp/JGP-Parallel-Non-Adjacent)
* Parallel First: [nusnlp/JGP-Parallel-First](https://huggingface.co/nusnlp/JGP-Parallel-First)
* Parallel Distributed: [nusnlp/JGP-Parallel-Distributed](https://huggingface.co/nusnlp/JGP-Parallel-Distributed)
* Parallel Last (all): [nusnlp/JGP-Parallel-Last-all](https://huggingface.co/nusnlp/JGP-Parallel-Last-all)
* Parallel Last (uni):
* EN→ID: [nusnlp/JGP-Parallel-Last-EN-ID](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ID)
* ID→EN: [nusnlp/JGP-Parallel-Last-ID-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ID-EN)
* EN→ZH: [nusnlp/JGP-Parallel-Last-EN-ZH](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ZH)
* ZH→EN: [nusnlp/JGP-Parallel-Last-ZH-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ZH-EN)
|
nusnlp/JGP-Parallel-Non-Adjacent
|
nusnlp
| 2025-06-20T10:20:07Z | 4 | 0 | null |
[
"pytorch",
"llama",
"en",
"zh",
"id",
"arxiv:2506.13044",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T05:41:29Z |
---
license: apache-2.0
language:
- en
- zh
- id
---
# Just-Go-Parallel (Parallel Non-Adjacent)
The model repository for the "Parallel Non-Adjacent" setting of the following paper:
> **Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models**
>
> [Muhammad Reza Qorib](https://mrqorib.github.io/), [Junyi Li](https://lijunyi.tech/), and [Hwee Tou Ng](https://www.comp.nus.edu.sg/~nght/)
>
> The 63rd Annual Meeting of the Association for Computational Linguistics (to appear)
- **Paper:** [arXiv](https://arxiv.org/abs/2506.13044)
- **Codebase:** [https://github.com/nusnlp/Just-Go-Parallel/](https://github.com/nusnlp/Just-Go-Parallel/)
We use the architecture and tokenizer of [TinyLlama v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1).
Please use transformers>=4.35.
## Models
The main branch of the repository contains the best-performing model that was evaluated in the paper. Other checkpoints produced during training will also be hosted in this repository under different branch names (also called "revisions" on Hugging Face), with each branch name indicating the number of training steps.
* No Parallel: [nusnlp/JGP-No-Parallel](https://huggingface.co/nusnlp/JGP-No-Parallel)
* Multilingual: [nusnlp/JGP-Multilingual](https://huggingface.co/nusnlp/JGP-Multilingual)
* Parallel Non-Adjacent: [nusnlp/JGP-Parallel-Non-Adjacent](https://huggingface.co/nusnlp/JGP-Parallel-Non-Adjacent)
* Parallel First: [nusnlp/JGP-Parallel-First](https://huggingface.co/nusnlp/JGP-Parallel-First)
* Parallel Distributed: [nusnlp/JGP-Parallel-Distributed](https://huggingface.co/nusnlp/JGP-Parallel-Distributed)
* Parallel Last (all): [nusnlp/JGP-Parallel-Last-all](https://huggingface.co/nusnlp/JGP-Parallel-Last-all)
* Parallel Last (uni):
* EN→ID: [nusnlp/JGP-Parallel-Last-EN-ID](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ID)
* ID→EN: [nusnlp/JGP-Parallel-Last-ID-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ID-EN)
* EN→ZH: [nusnlp/JGP-Parallel-Last-EN-ZH](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ZH)
* ZH→EN: [nusnlp/JGP-Parallel-Last-ZH-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ZH-EN)
|
IntelligentEstate/ADV_AGI_FRAME
|
IntelligentEstate
| 2025-06-20T10:19:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T17:10:19Z |
---
license: apache-2.0
license_name: apache-2.0
license_link: https://www.apache.org/licenses/LICENSE-2.0.txt
---
# Model Card: AGI Validator System
## Model Details
### Model Description
The AGI Validator is an advanced artificial general intelligence system for validating universal knowledge claims. It integrates multiple reasoning modes, evidence analysis, and real-time data verification to assess the validity of claims across various knowledge domains.
- **Developed by:** AI Research Team
- **Model type:** Hybrid Reasoning System
- **Language(s):** Python 3.10+
- **License:** Apache 2.0
- **System components:**
- Multi-Consensus Protocol (mCP) integration
- Evidence quality assessment
- Bayesian/causal/deductive reasoning engines
- Real-time data integration
- Domain-specific constraint handling
## Uses
### Direct Use
The AGI Validator is designed for:
- Verifying factual claims in research and academia
- Validating knowledge-based assertions in AGI systems
- Analyzing evidence chains for logical consistency
- Cross-domain knowledge verification
- Educational content validation
### Downstream Use
- Integration with knowledge management systems
- Fact-checking platforms
- Research assistant tools
- Educational technology platforms
- AI safety verification systems
### Out-of-Scope Use
- Making subjective judgments
- Personal opinion validation
- Legal decision making
- Medical diagnosis
- Real-time critical systems
## How to Get Started
```python
import asyncio

# Evidence is added to the import so the snippet below is self-contained.
from agi_validator import EnhancedAGIValidator, UniversalClaim, Evidence

async def main():
    # Initialize validator
    validator = EnhancedAGIValidator(mcp_enabled=True)

    # Create knowledge claim
    claim = UniversalClaim(
        claim_id="climate_change_001",
        content="Human activity is the primary driver of recent climate change",
        reasoning_modes=["bayesian", "causal"],
        sub_domains=["science", "social_science"],
    )

    # Add evidence
    claim.evidence_chain.append(
        Evidence(
            evidence_id="ipcc_ar6",
            strength=0.95,
            reliability=0.9,
            source_quality=0.95,
            domain="science",
        )
    )

    # Validate claim (validate_knowledge_claim is a coroutine, so it is
    # awaited inside an async entry point rather than at module top level)
    validation_report = await validator.validate_knowledge_claim(claim)
    print(validation_report)

asyncio.run(main())
```
## Technical Specifications
### System Architecture
- **Core Components:**
- Evidence Analysis Engine
- Reasoning Mode Evaluator (Deductive/Inductive/Abductive/Bayesian/Causal)
- Multi-Consensus Protocol (mCP) Interface
- Real-time Data Integrator
- Domain Constraint Handler
- **Analytical Capabilities:**
- Dynamic validation threshold calculation
- Metacognitive bias detection
- Evidence quality scoring
- Domain-specific rule application
- Contradiction detection
### Compute Infrastructure
- **Hardware Requirements:**
- Minimum: 4GB RAM, 2-core CPU
- Recommended: 8GB+ RAM, 4+ core CPU
- **Software Dependencies:**
- Python 3.10+
- aiohttp
- numpy
- FastAPI (for web interface)
- Uvicorn (ASGI server)
## Evaluation
### Testing Methodology
- Validation against curated test cases across domains
- Consistency checks with known facts
- Stress testing with contradictory evidence
- Performance benchmarking
- Error recovery testing
### Key Metrics
- **Claim Validity Score:** 0.0-1.0 scale
- **Evidence Quality Score:** Composite metric
- **Reasoning Coherence:** Logical consistency measure
- **System Reliability:** Uptime and error rate
- **Processing Time:** Average validation duration
## Environmental Impact
- **Carbon Efficiency:** Designed for minimal compute footprint
- **Optimization:** Asynchronous processing reduces energy consumption
- **Scaling:** Horizontal scaling capability minimizes resource waste
- **Estimated Energy Usage:** < 0.001 kWh per validation
## Citation
```bibtex
@software{AGI_Validator,
  author = {thegift_thecurse},
  title  = {Advanced AGI Validation System Framework},
  note   = {Veil Engine Technology},
  year   = {2025},
}
```
## Model Card Contact
[email protected]
|
thanhsc02/gemma-12b-it-lora-newdatax20-5epoch
|
thanhsc02
| 2025-06-20T10:18:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T10:18:03Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanhsc02
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
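A minimal loading sketch (assuming Unsloth's standard `FastLanguageModel` API; this snippet is not part of the original card):
```python
# Minimal sketch, assuming Unsloth's usual loading API for 4-bit models.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="thanhsc02/gemma-12b-it-lora-newdatax20-5epoch",
    load_in_4bit=True,  # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enable faster inference mode
```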
|
segopecelus/e595c87a-4acf-47bd-be3d-b703619b7f97
|
segopecelus
| 2025-06-20T10:17:30Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-20T09:57:52Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: e595c87a-4acf-47bd-be3d-b703619b7f97
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for e595c87a-4acf-47bd-be3d-b703619b7f97
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="segopecelus/e595c87a-4acf-47bd-be3d-b703619b7f97", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/8xhj9s1v)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
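For orientation, a toy GRPO run with TRL looks roughly like the sketch below; this is not the training script used for this model, and the dataset and reward function are placeholders:
```python
# Toy GRPO sketch with TRL; dataset and reward function are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_short(completions, **kwargs):
    # Example reward: prefer shorter completions.
    return [-float(len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_short,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=dataset,
)
trainer.train()
```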
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_old_woman-bs1-steps5000-lr1e-04
|
Josephinepassananti
| 2025-06-20T10:15:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T09:45:05Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_old_woman-bs1-steps5000-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1, fine-tuned on an unnamed dataset (recorded as `None` by the training script). You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
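Until the snippet above is filled in, a plausible sketch (assuming the standard diffusers LoRA-loading API; the prompt is a placeholder):
```python
# Sketch, not from the card: load the base model and attach these LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_old_woman-bs1-steps5000-lr1e-04"
)
image = pipe("a portrait photo", num_inference_steps=30).images[0]
image.save("example.png")
```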
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
tomaarsen/csr-mxbai-embed-large-v1-nq-no-reconstruction-2
|
tomaarsen
| 2025-06-20T10:14:13Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"csr",
"generated_from_trainer",
"dataset_size:99000",
"loss:CSRLoss",
"loss:SparseMultipleNegativesRankingLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2503.01776",
"arxiv:1705.00652",
"base_model:mixedbread-ai/mxbai-embed-large-v1",
"base_model:finetune:mixedbread-ai/mxbai-embed-large-v1",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-20T10:14:03Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- csr
- generated_from_trainer
- dataset_size:99000
- loss:CSRLoss
- loss:SparseMultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
continue to take somewhat differing stances on regional conflicts such the Yemeni
Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
which has fought against Saudi-backed forces, and the Syrian Civil War, where
the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
manufacturing industries include aluminium production, food processing, metal
fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
sector continues to dominate New Zealand's exports, despite accounting for 6.5%
of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
a single after a fourteen-year breakup. It was also the first song written by
bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
played live for the first time during their Hell Freezes Over tour in 1994. It
returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
Rock Tracks chart. The song was not played live by the Eagles after the "Hell
Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
who is considered by Christians to be one of the first Gentiles to convert to
the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 66.56126466621346
energy_consumed: 0.17123983068318005
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.564
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Sparse CSR model trained on Natural Questions
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 8
type: NanoMSMARCO_8
metrics:
- type: dot_accuracy@1
value: 0.12
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.24
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.28
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.3
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.12
name: Dot Precision@1
- type: dot_precision@3
value: 0.07999999999999999
name: Dot Precision@3
- type: dot_precision@5
value: 0.056000000000000015
name: Dot Precision@5
- type: dot_precision@10
value: 0.030000000000000006
name: Dot Precision@10
- type: dot_recall@1
value: 0.12
name: Dot Recall@1
- type: dot_recall@3
value: 0.24
name: Dot Recall@3
- type: dot_recall@5
value: 0.28
name: Dot Recall@5
- type: dot_recall@10
value: 0.3
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.21196909248837792
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.18355555555555556
name: Dot Mrr@10
- type: dot_map@100
value: 0.19168473018432397
name: Dot Map@100
- type: query_active_dims
value: 8.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.998046875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 8.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.998046875
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 8
type: NanoBEIR_mean_8
metrics:
- type: dot_accuracy@1
value: 0.12
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.24
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.28
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.3
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.12
name: Dot Precision@1
- type: dot_precision@3
value: 0.07999999999999999
name: Dot Precision@3
- type: dot_precision@5
value: 0.056000000000000015
name: Dot Precision@5
- type: dot_precision@10
value: 0.030000000000000006
name: Dot Precision@10
- type: dot_recall@1
value: 0.12
name: Dot Recall@1
- type: dot_recall@3
value: 0.24
name: Dot Recall@3
- type: dot_recall@5
value: 0.28
name: Dot Recall@5
- type: dot_recall@10
value: 0.3
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.21196909248837792
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.18355555555555556
name: Dot Mrr@10
- type: dot_map@100
value: 0.19168473018432397
name: Dot Map@100
- type: query_active_dims
value: 8.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.998046875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 8.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.998046875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 16
type: NanoMSMARCO_16
metrics:
- type: dot_accuracy@1
value: 0.22
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.34
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.4
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.44
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.22
name: Dot Precision@1
- type: dot_precision@3
value: 0.11333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.08000000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.044000000000000004
name: Dot Precision@10
- type: dot_recall@1
value: 0.22
name: Dot Recall@1
- type: dot_recall@3
value: 0.34
name: Dot Recall@3
- type: dot_recall@5
value: 0.4
name: Dot Recall@5
- type: dot_recall@10
value: 0.44
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3259646473373541
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.28955555555555557
name: Dot Mrr@10
- type: dot_map@100
value: 0.306813602994791
name: Dot Map@100
- type: query_active_dims
value: 16.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.99609375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 16.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.99609375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 16
type: NanoBEIR_mean_16
metrics:
- type: dot_accuracy@1
value: 0.22
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.34
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.4
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.44
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.22
name: Dot Precision@1
- type: dot_precision@3
value: 0.11333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.08000000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.044000000000000004
name: Dot Precision@10
- type: dot_recall@1
value: 0.22
name: Dot Recall@1
- type: dot_recall@3
value: 0.34
name: Dot Recall@3
- type: dot_recall@5
value: 0.4
name: Dot Recall@5
- type: dot_recall@10
value: 0.44
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3259646473373541
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.28955555555555557
name: Dot Mrr@10
- type: dot_map@100
value: 0.306813602994791
name: Dot Map@100
- type: query_active_dims
value: 16.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.99609375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 16.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.99609375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 32
type: NanoMSMARCO_32
metrics:
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.36
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.4
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.6
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.11999999999999998
name: Dot Precision@3
- type: dot_precision@5
value: 0.08
name: Dot Precision@5
- type: dot_precision@10
value: 0.06
name: Dot Precision@10
- type: dot_recall@1
value: 0.3
name: Dot Recall@1
- type: dot_recall@3
value: 0.36
name: Dot Recall@3
- type: dot_recall@5
value: 0.4
name: Dot Recall@5
- type: dot_recall@10
value: 0.6
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4175000854041106
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.36360317460317454
name: Dot Mrr@10
- type: dot_map@100
value: 0.37705054554799494
name: Dot Map@100
- type: query_active_dims
value: 32.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9921875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 32.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9921875
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 32
type: NanoBEIR_mean_32
metrics:
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.36
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.4
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.6
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.11999999999999998
name: Dot Precision@3
- type: dot_precision@5
value: 0.08
name: Dot Precision@5
- type: dot_precision@10
value: 0.06
name: Dot Precision@10
- type: dot_recall@1
value: 0.3
name: Dot Recall@1
- type: dot_recall@3
value: 0.36
name: Dot Recall@3
- type: dot_recall@5
value: 0.4
name: Dot Recall@5
- type: dot_recall@10
value: 0.6
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4175000854041106
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.36360317460317454
name: Dot Mrr@10
- type: dot_map@100
value: 0.37705054554799494
name: Dot Map@100
- type: query_active_dims
value: 32.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9921875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 32.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9921875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 64
type: NanoMSMARCO_64
metrics:
- type: dot_accuracy@1
value: 0.32
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.48
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.56
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.64
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.32
name: Dot Precision@1
- type: dot_precision@3
value: 0.15999999999999998
name: Dot Precision@3
- type: dot_precision@5
value: 0.11200000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.06400000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.32
name: Dot Recall@1
- type: dot_recall@3
value: 0.48
name: Dot Recall@3
- type: dot_recall@5
value: 0.56
name: Dot Recall@5
- type: dot_recall@10
value: 0.64
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4747516265872855
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4225
name: Dot Mrr@10
- type: dot_map@100
value: 0.43804482701175623
name: Dot Map@100
- type: query_active_dims
value: 64.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.984375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 64.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.984375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 64
type: NanoBEIR_mean_64
metrics:
- type: dot_accuracy@1
value: 0.32
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.48
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.56
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.64
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.32
name: Dot Precision@1
- type: dot_precision@3
value: 0.15999999999999998
name: Dot Precision@3
- type: dot_precision@5
value: 0.11200000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.06400000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.32
name: Dot Recall@1
- type: dot_recall@3
value: 0.48
name: Dot Recall@3
- type: dot_recall@5
value: 0.56
name: Dot Recall@5
- type: dot_recall@10
value: 0.64
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4747516265872855
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4225
name: Dot Mrr@10
- type: dot_map@100
value: 0.43804482701175623
name: Dot Map@100
- type: query_active_dims
value: 64.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.984375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 64.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.984375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 128
type: NanoMSMARCO_128
metrics:
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.54
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.64
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.18
name: Dot Precision@3
- type: dot_precision@5
value: 0.128
name: Dot Precision@5
- type: dot_precision@10
value: 0.07400000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.3
name: Dot Recall@1
- type: dot_recall@3
value: 0.54
name: Dot Recall@3
- type: dot_recall@5
value: 0.64
name: Dot Recall@5
- type: dot_recall@10
value: 0.74
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5165502329637498
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4448571428571429
name: Dot Mrr@10
- type: dot_map@100
value: 0.4609321037436295
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 128
type: NanoBEIR_mean_128
metrics:
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.54
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.64
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.18
name: Dot Precision@3
- type: dot_precision@5
value: 0.128
name: Dot Precision@5
- type: dot_precision@10
value: 0.07400000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.3
name: Dot Recall@1
- type: dot_recall@3
value: 0.54
name: Dot Recall@3
- type: dot_recall@5
value: 0.64
name: Dot Recall@5
- type: dot_recall@10
value: 0.74
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5165502329637498
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4448571428571429
name: Dot Mrr@10
- type: dot_map@100
value: 0.4609321037436295
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 256
type: NanoMSMARCO_256
metrics:
- type: dot_accuracy@1
value: 0.34
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.74
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.34
name: Dot Precision@1
- type: dot_precision@3
value: 0.2
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.34
name: Dot Recall@1
- type: dot_recall@3
value: 0.6
name: Dot Recall@3
- type: dot_recall@5
value: 0.74
name: Dot Recall@5
- type: dot_recall@10
value: 0.84
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5842381969358662
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5026904761904762
name: Dot Mrr@10
- type: dot_map@100
value: 0.5098488479343186
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 256
type: NanoBEIR_mean_256
metrics:
- type: dot_accuracy@1
value: 0.34
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.74
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.34
name: Dot Precision@1
- type: dot_precision@3
value: 0.2
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.34
name: Dot Recall@1
- type: dot_recall@3
value: 0.6
name: Dot Recall@3
- type: dot_recall@5
value: 0.74
name: Dot Recall@5
- type: dot_recall@10
value: 0.84
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5842381969358662
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5026904761904762
name: Dot Mrr@10
- type: dot_map@100
value: 0.5098488479343186
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: dot_accuracy@1
value: 0.26
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.56
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.62
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.26
name: Dot Precision@1
- type: dot_precision@3
value: 0.20666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.156
name: Dot Precision@5
- type: dot_precision@10
value: 0.102
name: Dot Precision@10
- type: dot_recall@1
value: 0.12333333333333332
name: Dot Recall@1
- type: dot_recall@3
value: 0.29333333333333333
name: Dot Recall@3
- type: dot_recall@5
value: 0.34666666666666673
name: Dot Recall@5
- type: dot_recall@10
value: 0.41566666666666663
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.33074042963512007
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.41507936507936505
name: Dot Mrr@10
- type: dot_map@100
value: 0.2605037455645458
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: dot_accuracy@1
value: 0.78
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.92
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.78
name: Dot Precision@1
- type: dot_precision@3
value: 0.68
name: Dot Precision@3
- type: dot_precision@5
value: 0.6
name: Dot Precision@5
- type: dot_precision@10
value: 0.49
name: Dot Precision@10
- type: dot_recall@1
value: 0.08787178599815837
name: Dot Recall@1
- type: dot_recall@3
value: 0.20076849643437242
name: Dot Recall@3
- type: dot_recall@5
value: 0.2551529754028007
name: Dot Recall@5
- type: dot_recall@10
value: 0.35977856932473445
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.631230472759085
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8546666666666668
name: Dot Mrr@10
- type: dot_map@100
value: 0.4715050434861439
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: dot_accuracy@1
value: 0.82
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.94
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.98
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.82
name: Dot Precision@1
- type: dot_precision@3
value: 0.32666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.19999999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.10399999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.7666666666666666
name: Dot Recall@1
- type: dot_recall@3
value: 0.9066666666666667
name: Dot Recall@3
- type: dot_recall@5
value: 0.9266666666666667
name: Dot Recall@5
- type: dot_recall@10
value: 0.9433333333333332
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.8786397520542688
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8795555555555555
name: Dot Mrr@10
- type: dot_map@100
value: 0.8474023961509473
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: dot_accuracy@1
value: 0.46
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.76
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.46
name: Dot Precision@1
- type: dot_precision@3
value: 0.3
name: Dot Precision@3
- type: dot_precision@5
value: 0.22799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.13999999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.22924603174603175
name: Dot Recall@1
- type: dot_recall@3
value: 0.4312936507936508
name: Dot Recall@3
- type: dot_recall@5
value: 0.5035396825396825
name: Dot Recall@5
- type: dot_recall@10
value: 0.6116190476190476
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.505122448452203
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5688888888888889
name: Dot Mrr@10
- type: dot_map@100
value: 0.4305964674526582
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: dot_accuracy@1
value: 0.78
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.98
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.78
name: Dot Precision@1
- type: dot_precision@3
value: 0.48666666666666664
name: Dot Precision@3
- type: dot_precision@5
value: 0.32799999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.16999999999999996
name: Dot Precision@10
- type: dot_recall@1
value: 0.39
name: Dot Recall@1
- type: dot_recall@3
value: 0.73
name: Dot Recall@3
- type: dot_recall@5
value: 0.82
name: Dot Recall@5
- type: dot_recall@10
value: 0.85
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7891312606021372
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8563333333333333
name: Dot Mrr@10
- type: dot_map@100
value: 0.7308084845910934
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.38
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.76
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.78
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.38
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.15200000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.078
name: Dot Precision@10
- type: dot_recall@1
value: 0.38
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.76
name: Dot Recall@5
- type: dot_recall@10
value: 0.78
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5906197363202759
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.528
name: Dot Mrr@10
- type: dot_map@100
value: 0.5404706257099874
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.58
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.62
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.68
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.3533333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.32
name: Dot Precision@5
- type: dot_precision@10
value: 0.26799999999999996
name: Dot Precision@10
- type: dot_recall@1
value: 0.044434174313891364
name: Dot Recall@1
- type: dot_recall@3
value: 0.06886292486806139
name: Dot Recall@3
- type: dot_recall@5
value: 0.10018663091887436
name: Dot Recall@5
- type: dot_recall@10
value: 0.135993408976131
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3272577842417522
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5120238095238094
name: Dot Mrr@10
- type: dot_map@100
value: 0.1540609053707419
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.52
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.68
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.78
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.82
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.52
name: Dot Precision@1
- type: dot_precision@3
value: 0.23333333333333336
name: Dot Precision@3
- type: dot_precision@5
value: 0.16399999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.088
name: Dot Precision@10
- type: dot_recall@1
value: 0.5
name: Dot Recall@1
- type: dot_recall@3
value: 0.65
name: Dot Recall@3
- type: dot_recall@5
value: 0.73
name: Dot Recall@5
- type: dot_recall@10
value: 0.78
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6523707439369819
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6238571428571428
name: Dot Mrr@10
- type: dot_map@100
value: 0.6127092058948297
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: dot_accuracy@1
value: 0.9
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.94
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 1.0
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9
name: Dot Precision@1
- type: dot_precision@3
value: 0.4
name: Dot Precision@3
- type: dot_precision@5
value: 0.264
name: Dot Precision@5
- type: dot_precision@10
value: 0.13799999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.7773333333333332
name: Dot Recall@1
- type: dot_recall@3
value: 0.912
name: Dot Recall@3
- type: dot_recall@5
value: 0.986
name: Dot Recall@5
- type: dot_recall@10
value: 0.9966666666666666
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9408238851178163
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.935
name: Dot Mrr@10
- type: dot_map@100
value: 0.9156785714285713
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: dot_accuracy@1
value: 0.56
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.92
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.56
name: Dot Precision@1
- type: dot_precision@3
value: 0.3666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.3
name: Dot Precision@5
- type: dot_precision@10
value: 0.21
name: Dot Precision@10
- type: dot_recall@1
value: 0.11866666666666668
name: Dot Recall@1
- type: dot_recall@3
value: 0.2296666666666666
name: Dot Recall@3
- type: dot_recall@5
value: 0.30966666666666665
name: Dot Recall@5
- type: dot_recall@10
value: 0.43066666666666664
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4238434123293462
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6637142857142857
name: Dot Mrr@10
- type: dot_map@100
value: 0.33702650955588553
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: dot_accuracy@1
value: 0.28
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.82
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.84
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.92
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.28
name: Dot Precision@1
- type: dot_precision@3
value: 0.2733333333333334
name: Dot Precision@3
- type: dot_precision@5
value: 0.16799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.092
name: Dot Precision@10
- type: dot_recall@1
value: 0.28
name: Dot Recall@1
- type: dot_recall@3
value: 0.82
name: Dot Recall@3
- type: dot_recall@5
value: 0.84
name: Dot Recall@5
- type: dot_recall@10
value: 0.92
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6320575399829071
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5360714285714285
name: Dot Mrr@10
- type: dot_map@100
value: 0.5398250835421888
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: dot_accuracy@1
value: 0.7
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.86
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.7
name: Dot Precision@1
- type: dot_precision@3
value: 0.24666666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.176
name: Dot Precision@5
- type: dot_precision@10
value: 0.09599999999999997
name: Dot Precision@10
- type: dot_recall@1
value: 0.665
name: Dot Recall@1
- type: dot_recall@3
value: 0.68
name: Dot Recall@3
- type: dot_recall@5
value: 0.785
name: Dot Recall@5
- type: dot_recall@10
value: 0.85
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7512560957647406
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7302222222222224
name: Dot Mrr@10
- type: dot_map@100
value: 0.7208552252945762
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: dot_accuracy@1
value: 0.6326530612244898
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.8979591836734694
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.9591836734693877
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.6326530612244898
name: Dot Precision@1
- type: dot_precision@3
value: 0.5918367346938774
name: Dot Precision@3
- type: dot_precision@5
value: 0.5510204081632653
name: Dot Precision@5
- type: dot_precision@10
value: 0.4489795918367347
name: Dot Precision@10
- type: dot_recall@1
value: 0.04395130839858616
name: Dot Recall@1
- type: dot_recall@3
value: 0.12411835933794488
name: Dot Recall@3
- type: dot_recall@5
value: 0.18456901766491046
name: Dot Recall@5
- type: dot_recall@10
value: 0.30287435988004324
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5113851766135886
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7748542274052478
name: Dot Mrr@10
- type: dot_map@100
value: 0.375999626455593
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.5763579277864993
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7629199372056513
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8276295133437992
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.5763579277864993
name: Dot Precision@1
- type: dot_precision@3
value: 0.35988487702773414
name: Dot Precision@3
- type: dot_precision@5
value: 0.2774631083202512
name: Dot Precision@5
- type: dot_precision@10
value: 0.18653689167974882
name: Dot Precision@10
- type: dot_recall@1
value: 0.3389617923428206
name: Dot Recall@1
- type: dot_recall@3
value: 0.514362315238515
name: Dot Recall@3
- type: dot_recall@5
value: 0.5805729466558668
name: Dot Recall@5
- type: dot_recall@10
value: 0.6443537476256377
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6126522106007865
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6829436096783036
name: Dot Mrr@10
- type: dot_map@100
value: 0.5336493761921356
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
---
# Sparse CSR model trained on Natural Questions
This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-no-reconstruction-2")
# Run inference
queries = [
"who is cornelius in the book of acts",
]
documents = [
'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
"Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 4096] [3, 4096]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[57.9578, 15.8308, 16.0606]])
```
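The evaluation tables below report `query_active_dims` and sparsity ratios; a quick way to sanity-check these locally is to count the non-zero entries per embedding. A minimal sketch, assuming the embeddings come back as (possibly sparse) PyTorch tensors:
```python
import torch

# Count active (non-zero) dimensions per embedding; densify first in case
# the encoder returns sparse tensors (behavior may vary by version).
q = torch.as_tensor(query_embeddings)
q = q.to_dense() if q.is_sparse else q
print((q != 0).sum(dim=1))  # expected: tensor([256])
print(1 - 256 / 4096)       # sparsity ratio: 0.9375
```
With 256 active dimensions out of 4096, this matches the 0.9375 sparsity ratio reported throughout the metrics.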
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_8`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 8
}
```
| Metric | Value |
|:----------------------|:----------|
| dot_accuracy@1 | 0.12 |
| dot_accuracy@3 | 0.24 |
| dot_accuracy@5 | 0.28 |
| dot_accuracy@10 | 0.3 |
| dot_precision@1 | 0.12 |
| dot_precision@3 | 0.08 |
| dot_precision@5 | 0.056 |
| dot_precision@10 | 0.03 |
| dot_recall@1 | 0.12 |
| dot_recall@3 | 0.24 |
| dot_recall@5 | 0.28 |
| dot_recall@10 | 0.3 |
| **dot_ndcg@10** | **0.212** |
| dot_mrr@10 | 0.1836 |
| dot_map@100 | 0.1917 |
| query_active_dims | 8.0 |
| query_sparsity_ratio | 0.998 |
| corpus_active_dims | 8.0 |
| corpus_sparsity_ratio | 0.998 |
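These truncated-dimension runs can be reproduced with the same evaluator class; a minimal sketch, where `queries`, `corpus`, and `relevant_docs` are hypothetical id-keyed dicts in the usual information-retrieval evaluator format:
```python
from sentence_transformers.sparse_encoder.evaluation import (
    SparseInformationRetrievalEvaluator,
)

evaluator = SparseInformationRetrievalEvaluator(
    queries=queries,              # {query_id: query_text}
    corpus=corpus,                # {doc_id: doc_text}
    relevant_docs=relevant_docs,  # {query_id: set of relevant doc_ids}
    max_active_dims=8,            # keep only the 8 largest activations
    name="NanoMSMARCO_8",
)
results = evaluator(model)
```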
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_8`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 8
}
```
| Metric | Value |
|:----------------------|:----------|
| dot_accuracy@1 | 0.12 |
| dot_accuracy@3 | 0.24 |
| dot_accuracy@5 | 0.28 |
| dot_accuracy@10 | 0.3 |
| dot_precision@1 | 0.12 |
| dot_precision@3 | 0.08 |
| dot_precision@5 | 0.056 |
| dot_precision@10 | 0.03 |
| dot_recall@1 | 0.12 |
| dot_recall@3 | 0.24 |
| dot_recall@5 | 0.28 |
| dot_recall@10 | 0.3 |
| **dot_ndcg@10** | **0.212** |
| dot_mrr@10 | 0.1836 |
| dot_map@100 | 0.1917 |
| query_active_dims | 8.0 |
| query_sparsity_ratio | 0.998 |
| corpus_active_dims | 8.0 |
| corpus_sparsity_ratio | 0.998 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_16`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 16
}
```
| Metric | Value |
|:----------------------|:----------|
| dot_accuracy@1 | 0.22 |
| dot_accuracy@3 | 0.34 |
| dot_accuracy@5 | 0.4 |
| dot_accuracy@10 | 0.44 |
| dot_precision@1 | 0.22 |
| dot_precision@3 | 0.1133 |
| dot_precision@5 | 0.08 |
| dot_precision@10 | 0.044 |
| dot_recall@1 | 0.22 |
| dot_recall@3 | 0.34 |
| dot_recall@5 | 0.4 |
| dot_recall@10 | 0.44 |
| **dot_ndcg@10** | **0.326** |
| dot_mrr@10 | 0.2896 |
| dot_map@100 | 0.3068 |
| query_active_dims | 16.0 |
| query_sparsity_ratio | 0.9961 |
| corpus_active_dims | 16.0 |
| corpus_sparsity_ratio | 0.9961 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_16`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 16
}
```
| Metric | Value |
|:----------------------|:----------|
| dot_accuracy@1 | 0.22 |
| dot_accuracy@3 | 0.34 |
| dot_accuracy@5 | 0.4 |
| dot_accuracy@10 | 0.44 |
| dot_precision@1 | 0.22 |
| dot_precision@3 | 0.1133 |
| dot_precision@5 | 0.08 |
| dot_precision@10 | 0.044 |
| dot_recall@1 | 0.22 |
| dot_recall@3 | 0.34 |
| dot_recall@5 | 0.4 |
| dot_recall@10 | 0.44 |
| **dot_ndcg@10** | **0.326** |
| dot_mrr@10 | 0.2896 |
| dot_map@100 | 0.3068 |
| query_active_dims | 16.0 |
| query_sparsity_ratio | 0.9961 |
| corpus_active_dims | 16.0 |
| corpus_sparsity_ratio | 0.9961 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_32`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 32
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.3 |
| dot_accuracy@3 | 0.36 |
| dot_accuracy@5 | 0.4 |
| dot_accuracy@10 | 0.6 |
| dot_precision@1 | 0.3 |
| dot_precision@3 | 0.12 |
| dot_precision@5 | 0.08 |
| dot_precision@10 | 0.06 |
| dot_recall@1 | 0.3 |
| dot_recall@3 | 0.36 |
| dot_recall@5 | 0.4 |
| dot_recall@10 | 0.6 |
| **dot_ndcg@10** | **0.4175** |
| dot_mrr@10 | 0.3636 |
| dot_map@100 | 0.3771 |
| query_active_dims | 32.0 |
| query_sparsity_ratio | 0.9922 |
| corpus_active_dims | 32.0 |
| corpus_sparsity_ratio | 0.9922 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_32`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 32
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.3 |
| dot_accuracy@3 | 0.36 |
| dot_accuracy@5 | 0.4 |
| dot_accuracy@10 | 0.6 |
| dot_precision@1 | 0.3 |
| dot_precision@3 | 0.12 |
| dot_precision@5 | 0.08 |
| dot_precision@10 | 0.06 |
| dot_recall@1 | 0.3 |
| dot_recall@3 | 0.36 |
| dot_recall@5 | 0.4 |
| dot_recall@10 | 0.6 |
| **dot_ndcg@10** | **0.4175** |
| dot_mrr@10 | 0.3636 |
| dot_map@100 | 0.3771 |
| query_active_dims | 32.0 |
| query_sparsity_ratio | 0.9922 |
| corpus_active_dims | 32.0 |
| corpus_sparsity_ratio | 0.9922 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_64`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 64
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.32 |
| dot_accuracy@3 | 0.48 |
| dot_accuracy@5 | 0.56 |
| dot_accuracy@10 | 0.64 |
| dot_precision@1 | 0.32 |
| dot_precision@3 | 0.16 |
| dot_precision@5 | 0.112 |
| dot_precision@10 | 0.064 |
| dot_recall@1 | 0.32 |
| dot_recall@3 | 0.48 |
| dot_recall@5 | 0.56 |
| dot_recall@10 | 0.64 |
| **dot_ndcg@10** | **0.4748** |
| dot_mrr@10 | 0.4225 |
| dot_map@100 | 0.438 |
| query_active_dims | 64.0 |
| query_sparsity_ratio | 0.9844 |
| corpus_active_dims | 64.0 |
| corpus_sparsity_ratio | 0.9844 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_64`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 64
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.32 |
| dot_accuracy@3 | 0.48 |
| dot_accuracy@5 | 0.56 |
| dot_accuracy@10 | 0.64 |
| dot_precision@1 | 0.32 |
| dot_precision@3 | 0.16 |
| dot_precision@5 | 0.112 |
| dot_precision@10 | 0.064 |
| dot_recall@1 | 0.32 |
| dot_recall@3 | 0.48 |
| dot_recall@5 | 0.56 |
| dot_recall@10 | 0.64 |
| **dot_ndcg@10** | **0.4748** |
| dot_mrr@10 | 0.4225 |
| dot_map@100 | 0.438 |
| query_active_dims | 64.0 |
| query_sparsity_ratio | 0.9844 |
| corpus_active_dims | 64.0 |
| corpus_sparsity_ratio | 0.9844 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_128`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 128
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.3 |
| dot_accuracy@3 | 0.54 |
| dot_accuracy@5 | 0.64 |
| dot_accuracy@10 | 0.74 |
| dot_precision@1 | 0.3 |
| dot_precision@3 | 0.18 |
| dot_precision@5 | 0.128 |
| dot_precision@10 | 0.074 |
| dot_recall@1 | 0.3 |
| dot_recall@3 | 0.54 |
| dot_recall@5 | 0.64 |
| dot_recall@10 | 0.74 |
| **dot_ndcg@10** | **0.5166** |
| dot_mrr@10 | 0.4449 |
| dot_map@100 | 0.4609 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_128`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 128
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.3 |
| dot_accuracy@3 | 0.54 |
| dot_accuracy@5 | 0.64 |
| dot_accuracy@10 | 0.74 |
| dot_precision@1 | 0.3 |
| dot_precision@3 | 0.18 |
| dot_precision@5 | 0.128 |
| dot_precision@10 | 0.074 |
| dot_recall@1 | 0.3 |
| dot_recall@3 | 0.54 |
| dot_recall@5 | 0.64 |
| dot_recall@10 | 0.74 |
| **dot_ndcg@10** | **0.5166** |
| dot_mrr@10 | 0.4449 |
| dot_map@100 | 0.4609 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_256`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 256
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.34 |
| dot_accuracy@3 | 0.6 |
| dot_accuracy@5 | 0.74 |
| dot_accuracy@10 | 0.84 |
| dot_precision@1 | 0.34 |
| dot_precision@3 | 0.2 |
| dot_precision@5 | 0.148 |
| dot_precision@10 | 0.084 |
| dot_recall@1 | 0.34 |
| dot_recall@3 | 0.6 |
| dot_recall@5 | 0.74 |
| dot_recall@10 | 0.84 |
| **dot_ndcg@10** | **0.5842** |
| dot_mrr@10 | 0.5027 |
| dot_map@100 | 0.5098 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_256`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 256
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.34 |
| dot_accuracy@3 | 0.6 |
| dot_accuracy@5 | 0.74 |
| dot_accuracy@10 | 0.84 |
| dot_precision@1 | 0.34 |
| dot_precision@3 | 0.2 |
| dot_precision@5 | 0.148 |
| dot_precision@10 | 0.084 |
| dot_recall@1 | 0.34 |
| dot_recall@3 | 0.6 |
| dot_recall@5 | 0.74 |
| dot_recall@10 | 0.84 |
| **dot_ndcg@10** | **0.5842** |
| dot_mrr@10 | 0.5027 |
| dot_map@100 | 0.5098 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
#### Sparse Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:----------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| dot_accuracy@1 | 0.26 | 0.78 | 0.82 | 0.46 | 0.78 | 0.38 | 0.42 | 0.52 | 0.9 | 0.56 | 0.28 | 0.7 | 0.6327 |
| dot_accuracy@3 | 0.56 | 0.92 | 0.94 | 0.64 | 0.9 | 0.64 | 0.58 | 0.68 | 0.94 | 0.7 | 0.82 | 0.7 | 0.898 |
| dot_accuracy@5 | 0.62 | 0.96 | 0.96 | 0.7 | 0.96 | 0.76 | 0.62 | 0.78 | 1.0 | 0.8 | 0.84 | 0.8 | 0.9592 |
| dot_accuracy@10 | 0.74 | 1.0 | 0.98 | 0.76 | 0.98 | 0.78 | 0.68 | 0.82 | 1.0 | 0.92 | 0.92 | 0.86 | 1.0 |
| dot_precision@1 | 0.26 | 0.78 | 0.82 | 0.46 | 0.78 | 0.38 | 0.42 | 0.52 | 0.9 | 0.56 | 0.28 | 0.7 | 0.6327 |
| dot_precision@3 | 0.2067 | 0.68 | 0.3267 | 0.3 | 0.4867 | 0.2133 | 0.3533 | 0.2333 | 0.4 | 0.3667 | 0.2733 | 0.2467 | 0.5918 |
| dot_precision@5 | 0.156 | 0.6 | 0.2 | 0.228 | 0.328 | 0.152 | 0.32 | 0.164 | 0.264 | 0.3 | 0.168 | 0.176 | 0.551 |
| dot_precision@10 | 0.102 | 0.49 | 0.104 | 0.14 | 0.17 | 0.078 | 0.268 | 0.088 | 0.138 | 0.21 | 0.092 | 0.096 | 0.449 |
| dot_recall@1 | 0.1233 | 0.0879 | 0.7667 | 0.2292 | 0.39 | 0.38 | 0.0444 | 0.5 | 0.7773 | 0.1187 | 0.28 | 0.665 | 0.044 |
| dot_recall@3 | 0.2933 | 0.2008 | 0.9067 | 0.4313 | 0.73 | 0.64 | 0.0689 | 0.65 | 0.912 | 0.2297 | 0.82 | 0.68 | 0.1241 |
| dot_recall@5 | 0.3467 | 0.2552 | 0.9267 | 0.5035 | 0.82 | 0.76 | 0.1002 | 0.73 | 0.986 | 0.3097 | 0.84 | 0.785 | 0.1846 |
| dot_recall@10 | 0.4157 | 0.3598 | 0.9433 | 0.6116 | 0.85 | 0.78 | 0.136 | 0.78 | 0.9967 | 0.4307 | 0.92 | 0.85 | 0.3029 |
| **dot_ndcg@10** | **0.3307** | **0.6312** | **0.8786** | **0.5051** | **0.7891** | **0.5906** | **0.3273** | **0.6524** | **0.9408** | **0.4238** | **0.6321** | **0.7513** | **0.5114** |
| dot_mrr@10 | 0.4151 | 0.8547 | 0.8796 | 0.5689 | 0.8563 | 0.528 | 0.512 | 0.6239 | 0.935 | 0.6637 | 0.5361 | 0.7302 | 0.7749 |
| dot_map@100 | 0.2605 | 0.4715 | 0.8474 | 0.4306 | 0.7308 | 0.5405 | 0.1541 | 0.6127 | 0.9157 | 0.337 | 0.5398 | 0.7209 | 0.376 |
| query_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 |
| query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 |
| corpus_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 |
| corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"climatefever",
"dbpedia",
"fever",
"fiqa2018",
"hotpotqa",
"msmarco",
"nfcorpus",
"nq",
"quoraretrieval",
"scidocs",
"arguana",
"scifact",
"touche2020"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.5764 |
| dot_accuracy@3 | 0.7629 |
| dot_accuracy@5 | 0.8276 |
| dot_accuracy@10 | 0.88 |
| dot_precision@1 | 0.5764 |
| dot_precision@3 | 0.3599 |
| dot_precision@5 | 0.2775 |
| dot_precision@10 | 0.1865 |
| dot_recall@1 | 0.339 |
| dot_recall@3 | 0.5144 |
| dot_recall@5 | 0.5806 |
| dot_recall@10 | 0.6444 |
| **dot_ndcg@10** | **0.6127** |
| dot_mrr@10 | 0.6829 |
| dot_map@100 | 0.5336 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
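The mean over all 13 datasets can be recomputed with the NanoBEIR evaluator; a minimal sketch using a subset of datasets for speed (the result key is an assumption, following the metric column names used in the training logs below):
```python
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)
print(results["NanoBEIR_mean_dot_ndcg@10"])
```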
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 3.0,
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
}
```
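In code, this loss configuration would look roughly as follows; a sketch, assuming the sparse-encoder loss API takes the model plus the `beta`/`gamma` weights shown above:
```python
from sentence_transformers.sparse_encoder.losses import (
    CSRLoss,
    SparseMultipleNegativesRankingLoss,
)

# Hypothetical construction mirroring the config above
inner_loss = SparseMultipleNegativesRankingLoss(model, scale=1.0)
loss = CSRLoss(model, loss=inner_loss, beta=0.1, gamma=3.0)
```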
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 3.0,
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 4e-05
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
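Put together, the non-default settings above translate into roughly the following trainer setup; a minimal sketch, assuming the sparse-encoder trainer mirrors the standard `SentenceTransformerTrainer` API, with `model`, `train_dataset`, `eval_dataset`, and `loss` as defined earlier:
```python
from sentence_transformers.sparse_encoder import (
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)

args = SparseEncoderTrainingArguments(
    output_dir="csr-mxbai-embed-large-v1-nq",  # hypothetical output path
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=4e-5,
    num_train_epochs=1,
    bf16=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
    batch_sampler="no_duplicates",
)
trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```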
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_8_dot_ndcg@10 | NanoBEIR_mean_8_dot_ndcg@10 | NanoMSMARCO_16_dot_ndcg@10 | NanoBEIR_mean_16_dot_ndcg@10 | NanoMSMARCO_32_dot_ndcg@10 | NanoBEIR_mean_32_dot_ndcg@10 | NanoMSMARCO_64_dot_ndcg@10 | NanoBEIR_mean_64_dot_ndcg@10 | NanoMSMARCO_128_dot_ndcg@10 | NanoBEIR_mean_128_dot_ndcg@10 | NanoMSMARCO_256_dot_ndcg@10 | NanoBEIR_mean_256_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:----------:|:-------:|:-------------:|:---------------:|:-------------------------:|:---------------------------:|:--------------------------:|:----------------------------:|:--------------------------:|:----------------------------:|:--------------------------:|:----------------------------:|:---------------------------:|:-----------------------------:|:---------------------------:|:-----------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:-----------------------:|:------------------------:|:------------------:|:------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:|:-------------------------:|
| -1 | -1 | - | - | 0.2445 | 0.2445 | 0.3517 | 0.3517 | 0.5001 | 0.5001 | 0.5672 | 0.5672 | 0.6083 | 0.6083 | 0.6025 | 0.6025 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0646 | 100 | 0.1844 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1293 | 200 | 0.1765 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.1939** | **300** | **0.1581** | **0.1742** | **0.2187** | **0.2187** | **0.3538** | **0.3538** | **0.4677** | **0.4677** | **0.5313** | **0.5313** | **0.5713** | **0.5713** | **0.5932** | **0.5932** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| 0.2586 | 400 | 0.134 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3232 | 500 | 0.179 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3878 | 600 | 0.1414 | 0.2028 | 0.2075 | 0.2075 | 0.3395 | 0.3395 | 0.4250 | 0.4250 | 0.4930 | 0.4930 | 0.5670 | 0.5670 | 0.5534 | 0.5534 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4525 | 700 | 0.162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5171 | 800 | 0.1632 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5818 | 900 | 0.1684 | 0.1907 | 0.1784 | 0.1784 | 0.3429 | 0.3429 | 0.4207 | 0.4207 | 0.4764 | 0.4764 | 0.5705 | 0.5705 | 0.5861 | 0.5861 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6464 | 1000 | 0.1577 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7111 | 1100 | 0.1249 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7757 | 1200 | 0.1494 | 0.1506 | 0.1993 | 0.1993 | 0.3459 | 0.3459 | 0.4185 | 0.4185 | 0.4925 | 0.4925 | 0.5248 | 0.5248 | 0.5880 | 0.5880 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8403 | 1300 | 0.1457 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9050 | 1400 | 0.1208 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9696 | 1500 | 0.1346 | 0.1349 | 0.2120 | 0.2120 | 0.3260 | 0.3260 | 0.4175 | 0.4175 | 0.4748 | 0.4748 | 0.5166 | 0.5166 | 0.5842 | 0.5842 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 0.3307 | 0.6312 | 0.8786 | 0.5051 | 0.7891 | 0.5906 | 0.3273 | 0.6524 | 0.9408 | 0.4238 | 0.6321 | 0.7513 | 0.5114 | 0.6127 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.171 kWh
- **Carbon Emitted**: 0.067 kg of CO2
- **Hours Used**: 0.563 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CSRLoss
```bibtex
@misc{wen2025matryoshkarevisitingsparsecoding,
title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
year={2025},
eprint={2503.01776},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.01776},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Msughterx/ppo-LunarLander-v2
|
Msughterx
| 2025-06-20T10:05:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T10:04:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.47 +/- 37.35
name: mean_reward
verified: false
---
# **MlpPPO** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; verify it in the repo's file listing):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the repo's "Files" tab
checkpoint = load_from_hub("Msughterx/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
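To check the reported score, a short evaluation sketch:
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")  # card reports 255.47 +/- 37.35
```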
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3493
|
luckeciano
| 2025-06-20T10:03:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T04:39:27Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3493
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3493
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3493", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/2cq5tdlr)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
same899/repo_id
|
same899
| 2025-06-20T10:01:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_connector_metaquery",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T10:00:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
diegolacomba/multilingual-e5-small-mlm-legal-4
|
diegolacomba
| 2025-06-20T10:01:09Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T10:00:53Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("diegolacomba/multilingual-e5-small-mlm-legal-4")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
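If this fine-tune preserved the conventions of its apparent multilingual-e5-small base, queries and passages are embedded with explicit prefixes. A minimal retrieval sketch, with the prefix convention as an assumption:

```python
from sentence_transformers import SentenceTransformer

# Sketch only: E5-family models are usually trained with "query: " / "passage: "
# prefixes; whether this fine-tune kept that convention is an assumption.
model = SentenceTransformer("diegolacomba/multilingual-e5-small-mlm-legal-4")
query_emb = model.encode(["query: What is the limitation period for contract claims?"])
doc_embs = model.encode([
    "passage: Contract claims must generally be brought within five years.",
    "passage: The hearing was adjourned until the following morning.",
])
# A higher score is expected for the first, legally relevant passage
print(model.similarity(query_emb, doc_embs))
```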
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
sonu2023/KeyPhraseTransformer-Q8_0-GGUF
|
sonu2023
| 2025-06-20T09:59:20Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:snrspeaks/KeyPhraseTransformer",
"base_model:quantized:snrspeaks/KeyPhraseTransformer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T09:59:17Z |
---
license: mit
base_model: snrspeaks/KeyPhraseTransformer
tags:
- llama-cpp
- gguf-my-repo
---
# sonu2023/KeyPhraseTransformer-Q8_0-GGUF
This model was converted to GGUF format from [`snrspeaks/KeyPhraseTransformer`](https://huggingface.co/snrspeaks/KeyPhraseTransformer) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/snrspeaks/KeyPhraseTransformer) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sonu2023/KeyPhraseTransformer-Q8_0-GGUF --hf-file keyphrasetransformer-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sonu2023/KeyPhraseTransformer-Q8_0-GGUF --hf-file keyphrasetransformer-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sonu2023/KeyPhraseTransformer-Q8_0-GGUF --hf-file keyphrasetransformer-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sonu2023/KeyPhraseTransformer-Q8_0-GGUF --hf-file keyphrasetransformer-q8_0.gguf -c 2048
```
|
ishk9999/gemma-cxr-fine-tuning-3000-subset-4b-it
|
ishk9999
| 2025-06-20T09:58:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T07:23:35Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: gemma-cxr-fine-tuning-3000-subset-4b-it
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-cxr-fine-tuning-3000-subset-4b-it
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ishk9999/gemma-cxr-fine-tuning-3000-subset-4b-it", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LarryAIDraw/Belfast_Simisir
|
LarryAIDraw
| 2025-06-20T09:52:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T09:15:18Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/50516?modelVersionId=409247
|
LarryAIDraw/cartethyia-wwPO-v1
|
LarryAIDraw
| 2025-06-20T09:52:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T09:14:29Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1285663/il-and-pony-and-sd15-wuthering-waves-cartethyia-and-fleurdelys-4-in-1-or-and?modelVersionId=1918543
|
LarryAIDraw/hsr-feixiao-ponyxl-lora-nochekaiser
|
LarryAIDraw
| 2025-06-20T09:51:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T09:12:54Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/746845/feixiao-honkai-star-rail
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-18-2025-06-20
|
morturr
| 2025-06-20T09:50:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T09:50:19Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-18-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-18-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
FanMeipuru/myFinetunedModel
|
FanMeipuru
| 2025-06-20T09:49:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T02:33:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
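The card leaves the snippet unfilled; as a minimal sketch, assuming the checkpoint works with the standard `transformers` text-generation pipeline (the repo's tags suggest a Phi-3-style model shipping custom code, hence `trust_remote_code=True`):

```python
from transformers import pipeline

# Hypothetical sketch: the task and trust_remote_code flag are inferred from the
# repo tags (phi3, text-generation, custom_code), not from author documentation.
generator = pipeline(
    "text-generation",
    model="FanMeipuru/myFinetunedModel",
    trust_remote_code=True,
)
messages = [{"role": "user", "content": "Explain fine-tuning in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```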
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mchettih/financial_Llama-3.2-1B-Instruct_finetuned
|
mchettih
| 2025-06-20T09:42:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T20:06:42Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mchettih
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
varshajs/flan-t5-history-qg-merged
|
varshajs
| 2025-06-20T09:33:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-20T09:32:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
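The card leaves the snippet unfilled; as a minimal sketch, assuming the model follows the standard FLAN-T5 text2text interface indicated by its `text2text-generation` pipeline tag (the `generate question:` prompt prefix is a guess, not documented by the author):

```python
from transformers import pipeline

# Sketch only: the prompt prefix below is an assumption; consult the model
# author for the actual input format used during fine-tuning.
qg = pipeline("text2text-generation", model="varshajs/flan-t5-history-qg-merged")
context = "The Treaty of Versailles was signed in 1919, formally ending World War I."
print(qg(f"generate question: {context}", max_new_tokens=48)[0]["generated_text"])
```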
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amos-fernadnes/DeepFin
|
amos-fernadnes
| 2025-06-20T09:30:31Z | 0 | 0 | null |
[
"tensorboard",
"joblib",
"license:other",
"region:us"
] | null | 2025-06-19T17:02:32Z |
---
title: Aibank Token
emoji: 🐠
colorFrom: yellow
colorTo: pink
sdk: static
pinned: false
license: other
short_description: Token aibank
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
kinleyrabgay/nllb-200-600M-dzo-eng-30k
|
kinleyrabgay
| 2025-06-20T09:23:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:kinleyrabgay/nllb-200-600M-dzo-eng",
"base_model:finetune:kinleyrabgay/nllb-200-600M-dzo-eng",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-20T04:39:33Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: kinleyrabgay/nllb-200-600M-dzo-eng
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nllb-200-600M-dzo-eng-30k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-200-600M-dzo-eng-30k
This model is a fine-tuned version of [kinleyrabgay/nllb-200-600M-dzo-eng](https://huggingface.co/kinleyrabgay/nllb-200-600M-dzo-eng) on the [kinleyrabgay/dz_to_en](https://huggingface.co/datasets/kinleyrabgay/dz_to_en) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0678
- Bleu: 63.0623
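For inference, a minimal sketch using the standard NLLB-200 seq2seq interface; the FLORES-200 codes `dzo_Tibt` (Dzongkha) and `eng_Latn` (English) are assumptions carried over from the base model family:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "kinleyrabgay/nllb-200-600M-dzo-eng-30k"
# Sketch only: language codes follow the NLLB-200 convention and are assumed
# to apply to this fine-tune as well.
tokenizer = AutoTokenizer.from_pretrained(repo, src_lang="dzo_Tibt")
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("བཀྲ་ཤིས་བདེ་ལེགས།", return_tensors="pt")
generated = model.generate(
    **inputs,
    # Force the decoder to start in the English target language
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```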
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0688 | 1.0 | 2500 | 0.0644 | 62.0343 |
| 0.0543 | 2.0 | 5000 | 0.0634 | 62.8454 |
| 0.0444 | 3.0 | 7500 | 0.0643 | 63.3940 |
| 0.0377 | 4.0 | 10000 | 0.0658 | 63.4416 |
| 0.0315 | 5.0 | 12500 | 0.0678 | 63.0623 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf
|
RichardErkhov
| 2025-06-20T09:22:07Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T08:12:33Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Gemma-2-T2-9B - GGUF
- Model creator: https://huggingface.co/zelk12/
- Original model: https://huggingface.co/zelk12/Gemma-2-T2-9B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Gemma-2-T2-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q2_K.gguf) | Q2_K | 3.54GB |
| [Gemma-2-T2-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.IQ3_XS.gguf) | IQ3_XS | 3.86GB |
| [Gemma-2-T2-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.IQ3_S.gguf) | IQ3_S | 4.04GB |
| [Gemma-2-T2-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q3_K_S.gguf) | Q3_K_S | 4.04GB |
| [Gemma-2-T2-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.IQ3_M.gguf) | IQ3_M | 4.19GB |
| [Gemma-2-T2-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q3_K.gguf) | Q3_K | 4.43GB |
| [Gemma-2-T2-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q3_K_M.gguf) | Q3_K_M | 4.43GB |
| [Gemma-2-T2-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q3_K_L.gguf) | Q3_K_L | 4.78GB |
| [Gemma-2-T2-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.IQ4_XS.gguf) | IQ4_XS | 4.86GB |
| [Gemma-2-T2-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q4_0.gguf) | Q4_0 | 5.07GB |
| [Gemma-2-T2-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.IQ4_NL.gguf) | IQ4_NL | 5.1GB |
| [Gemma-2-T2-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q4_K_S.gguf) | Q4_K_S | 5.1GB |
| [Gemma-2-T2-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q4_K.gguf) | Q4_K | 5.37GB |
| [Gemma-2-T2-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q4_K_M.gguf) | Q4_K_M | 5.37GB |
| [Gemma-2-T2-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q4_1.gguf) | Q4_1 | 5.55GB |
| [Gemma-2-T2-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q5_0.gguf) | Q5_0 | 6.04GB |
| [Gemma-2-T2-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q5_K_S.gguf) | Q5_K_S | 6.04GB |
| [Gemma-2-T2-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q5_K.gguf) | Q5_K | 6.19GB |
| [Gemma-2-T2-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q5_K_M.gguf) | Q5_K_M | 6.19GB |
| [Gemma-2-T2-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q5_1.gguf) | Q5_1 | 6.52GB |
| [Gemma-2-T2-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q6_K.gguf) | Q6_K | 7.07GB |
| [Gemma-2-T2-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/zelk12_-_Gemma-2-T2-9B-gguf/blob/main/Gemma-2-T2-9B.Q8_0.gguf) | Q8_0 | 9.15GB |
Original model description:
---
base_model:
- zelk12/Gemma-2-IAv2-9B
- zelk12/Gemma-2-DMv1A-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/Gemma-2-IAv2-9B](https://huggingface.co/zelk12/Gemma-2-IAv2-9B)
* [zelk12/Gemma-2-DMv1A-9B](https://huggingface.co/zelk12/Gemma-2-DMv1A-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/Gemma-2-DMv1A-9B
- model: zelk12/Gemma-2-IAv2-9B
merge_method: slerp
base_model: zelk12/Gemma-2-DMv1A-9B
dtype: bfloat16
parameters:
t: 0.25
```
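With `t: 0.25`, the spherical interpolation stays closer to the base model (`Gemma-2-DMv1A-9B`) than to `Gemma-2-IAv2-9B`. To reproduce a merge like this, the YAML above can be saved and fed to mergekit's CLI; a sketch assuming mergekit is installed and a GPU is available:

```bash
pip install mergekit
# Save the YAML configuration above as config.yaml, then run the merge.
# --cuda is optional; drop it to merge on CPU.
mergekit-yaml config.yaml ./Gemma-2-T2-9B --cuda
```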
|
triplee/cpt_millfiled_llama3.3-70b-fp16
|
triplee
| 2025-06-20T09:17:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:adapter:meta-llama/Llama-3.3-70B-Instruct",
"license:other",
"region:us"
] | null | 2025-06-20T09:16:46Z |
---
library_name: peft
license: other
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: saves_fullp_ds3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# saves_fullp_ds3
This model is a fine-tuned version of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) (loaded from a local checkpoint) on the data_short_chunk dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.4.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
BootesVoid/cmbyun91m04mtrdqszy1qjvi4_cmc4k9hps0173bfifta01mhhj
|
BootesVoid
| 2025-06-20T09:16:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T09:16:18Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BLOND
---
# Cmbyun91M04Mtrdqszy1Qjvi4_Cmc4K9Hps0173Bfifta01Mhhj
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BLOND` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BLOND",
"lora_weights": "https://huggingface.co/BootesVoid/cmbyun91m04mtrdqszy1qjvi4_cmc4k9hps0173bfifta01mhhj/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbyun91m04mtrdqszy1qjvi4_cmc4k9hps0173bfifta01mhhj', weight_name='lora.safetensors')
image = pipeline('BLOND').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbyun91m04mtrdqszy1qjvi4_cmc4k9hps0173bfifta01mhhj/discussions) to add images that show off what you’ve made with this LoRA.
|
SerhiiLebediuk/Test_Q4_K_M.gguf
|
SerhiiLebediuk
| 2025-06-20T09:07:13Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:43:40Z |
---
license: apache-2.0
---
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_marilyn_monroe-bs1-steps600-lr1e-04
|
Josephinepassananti
| 2025-06-20T09:07:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T05:13:46Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_marilyn_monroe-bs1-steps600-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on an unspecified dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
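Until the TODO above is filled in, a minimal sketch of how diffusers typically loads such LoRA weights on top of the SD 2.1 base (the generation prompt is arbitrary; the weight file inside the repo is resolved automatically by diffusers):

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: applies this repo's LoRA weights to the stated base model.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_marilyn_monroe-bs1-steps600-lr1e-04"
)
image = pipe("a portrait photo", num_inference_steps=30).images[0]
image.save("sample.png")
```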
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-7-2025-06-20
|
morturr
| 2025-06-20T09:04:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T09:04:43Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-7-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-7-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
rslinfy/nanoVLM
|
rslinfy
| 2025-06-20T09:04:16Z | 0 | 0 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-06-20T09:03:30Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("rslinfy/nanoVLM")
```
|
dunzhang/stella-large-zh-v3-1792d
|
dunzhang
| 2025-06-20T09:02:14Z | 389 | 31 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-02-17T05:30:43Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: stella-large-zh-v3-1792d
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 54.48093298255762
- type: cos_sim_spearman
value: 59.105354109068685
- type: euclidean_pearson
value: 57.761189988643444
- type: euclidean_spearman
value: 59.10537421115596
- type: manhattan_pearson
value: 56.94359297051431
- type: manhattan_spearman
value: 58.37611109821567
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 54.39711127600595
- type: cos_sim_spearman
value: 58.190191920824454
- type: euclidean_pearson
value: 61.80082379352729
- type: euclidean_spearman
value: 58.19018966860797
- type: manhattan_pearson
value: 60.927601060396206
- type: manhattan_spearman
value: 57.78832902694192
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.31600000000001
- type: f1
value: 44.45281663598873
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 69.12211326097868
- type: cos_sim_spearman
value: 71.0741302039443
- type: euclidean_pearson
value: 69.89070483887852
- type: euclidean_spearman
value: 71.07413020351787
- type: manhattan_pearson
value: 69.62345441260962
- type: manhattan_spearman
value: 70.8517591280618
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 41.937723608805314
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 40.34373057675427
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 88.98896401788376
- type: mrr
value: 90.97119047619047
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 89.59718540244556
- type: mrr
value: 91.41246031746032
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.954
- type: map_at_10
value: 40.144999999999996
- type: map_at_100
value: 42.083999999999996
- type: map_at_1000
value: 42.181000000000004
- type: map_at_3
value: 35.709
- type: map_at_5
value: 38.141000000000005
- type: mrr_at_1
value: 40.71
- type: mrr_at_10
value: 48.93
- type: mrr_at_100
value: 49.921
- type: mrr_at_1000
value: 49.958999999999996
- type: mrr_at_3
value: 46.32
- type: mrr_at_5
value: 47.769
- type: ndcg_at_1
value: 40.71
- type: ndcg_at_10
value: 46.869
- type: ndcg_at_100
value: 54.234
- type: ndcg_at_1000
value: 55.854000000000006
- type: ndcg_at_3
value: 41.339
- type: ndcg_at_5
value: 43.594
- type: precision_at_1
value: 40.71
- type: precision_at_10
value: 10.408000000000001
- type: precision_at_100
value: 1.635
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.348
- type: precision_at_5
value: 16.929
- type: recall_at_1
value: 26.954
- type: recall_at_10
value: 57.821999999999996
- type: recall_at_100
value: 88.08200000000001
- type: recall_at_1000
value: 98.83800000000001
- type: recall_at_3
value: 41.221999999999994
- type: recall_at_5
value: 48.241
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.6680697534576
- type: cos_sim_ap
value: 90.77401562455269
- type: cos_sim_f1
value: 84.68266427450101
- type: cos_sim_precision
value: 81.36177547942253
- type: cos_sim_recall
value: 88.28618190320317
- type: dot_accuracy
value: 83.6680697534576
- type: dot_ap
value: 90.76429465198817
- type: dot_f1
value: 84.68266427450101
- type: dot_precision
value: 81.36177547942253
- type: dot_recall
value: 88.28618190320317
- type: euclidean_accuracy
value: 83.6680697534576
- type: euclidean_ap
value: 90.77401909305344
- type: euclidean_f1
value: 84.68266427450101
- type: euclidean_precision
value: 81.36177547942253
- type: euclidean_recall
value: 88.28618190320317
- type: manhattan_accuracy
value: 83.40348767288035
- type: manhattan_ap
value: 90.57002020310819
- type: manhattan_f1
value: 84.51526032315978
- type: manhattan_precision
value: 81.25134843581445
- type: manhattan_recall
value: 88.05237315875614
- type: max_accuracy
value: 83.6680697534576
- type: max_ap
value: 90.77401909305344
- type: max_f1
value: 84.68266427450101
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 69.705
- type: map_at_10
value: 78.648
- type: map_at_100
value: 78.888
- type: map_at_1000
value: 78.89399999999999
- type: map_at_3
value: 77.151
- type: map_at_5
value: 77.98
- type: mrr_at_1
value: 69.863
- type: mrr_at_10
value: 78.62599999999999
- type: mrr_at_100
value: 78.861
- type: mrr_at_1000
value: 78.867
- type: mrr_at_3
value: 77.204
- type: mrr_at_5
value: 78.005
- type: ndcg_at_1
value: 69.968
- type: ndcg_at_10
value: 82.44399999999999
- type: ndcg_at_100
value: 83.499
- type: ndcg_at_1000
value: 83.647
- type: ndcg_at_3
value: 79.393
- type: ndcg_at_5
value: 80.855
- type: precision_at_1
value: 69.968
- type: precision_at_10
value: 9.515
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.802
- type: precision_at_5
value: 18.019
- type: recall_at_1
value: 69.705
- type: recall_at_10
value: 94.152
- type: recall_at_100
value: 98.84100000000001
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 85.774
- type: recall_at_5
value: 89.252
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.88
- type: map_at_10
value: 79.857
- type: map_at_100
value: 82.636
- type: map_at_1000
value: 82.672
- type: map_at_3
value: 55.184
- type: map_at_5
value: 70.009
- type: mrr_at_1
value: 89.64999999999999
- type: mrr_at_10
value: 92.967
- type: mrr_at_100
value: 93.039
- type: mrr_at_1000
value: 93.041
- type: mrr_at_3
value: 92.65
- type: mrr_at_5
value: 92.86
- type: ndcg_at_1
value: 89.64999999999999
- type: ndcg_at_10
value: 87.126
- type: ndcg_at_100
value: 89.898
- type: ndcg_at_1000
value: 90.253
- type: ndcg_at_3
value: 86.012
- type: ndcg_at_5
value: 85.124
- type: precision_at_1
value: 89.64999999999999
- type: precision_at_10
value: 41.735
- type: precision_at_100
value: 4.797
- type: precision_at_1000
value: 0.488
- type: precision_at_3
value: 77.267
- type: precision_at_5
value: 65.48
- type: recall_at_1
value: 25.88
- type: recall_at_10
value: 88.28399999999999
- type: recall_at_100
value: 97.407
- type: recall_at_1000
value: 99.29299999999999
- type: recall_at_3
value: 57.38799999999999
- type: recall_at_5
value: 74.736
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 53.2
- type: map_at_10
value: 63.556000000000004
- type: map_at_100
value: 64.033
- type: map_at_1000
value: 64.044
- type: map_at_3
value: 60.983
- type: map_at_5
value: 62.588
- type: mrr_at_1
value: 53.2
- type: mrr_at_10
value: 63.556000000000004
- type: mrr_at_100
value: 64.033
- type: mrr_at_1000
value: 64.044
- type: mrr_at_3
value: 60.983
- type: mrr_at_5
value: 62.588
- type: ndcg_at_1
value: 53.2
- type: ndcg_at_10
value: 68.61699999999999
- type: ndcg_at_100
value: 70.88499999999999
- type: ndcg_at_1000
value: 71.15899999999999
- type: ndcg_at_3
value: 63.434000000000005
- type: ndcg_at_5
value: 66.301
- type: precision_at_1
value: 53.2
- type: precision_at_10
value: 8.450000000000001
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.5
- type: precision_at_5
value: 15.479999999999999
- type: recall_at_1
value: 53.2
- type: recall_at_10
value: 84.5
- type: recall_at_100
value: 95
- type: recall_at_1000
value: 97.1
- type: recall_at_3
value: 70.5
- type: recall_at_5
value: 77.4
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 50.63485956136976
- type: f1
value: 38.286307407751266
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.11632270168855
- type: ap
value: 54.43932599806482
- type: f1
value: 80.85485110996076
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 72.47315152994804
- type: cos_sim_spearman
value: 78.26531600908152
- type: euclidean_pearson
value: 77.8560788714531
- type: euclidean_spearman
value: 78.26531157334841
- type: manhattan_pearson
value: 77.70593783974188
- type: manhattan_spearman
value: 78.13880812439999
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 28.088177976572222
- type: mrr
value: 27.125
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 66.428
- type: map_at_10
value: 75.5
- type: map_at_100
value: 75.82600000000001
- type: map_at_1000
value: 75.837
- type: map_at_3
value: 73.74300000000001
- type: map_at_5
value: 74.87
- type: mrr_at_1
value: 68.754
- type: mrr_at_10
value: 76.145
- type: mrr_at_100
value: 76.432
- type: mrr_at_1000
value: 76.442
- type: mrr_at_3
value: 74.628
- type: mrr_at_5
value: 75.612
- type: ndcg_at_1
value: 68.754
- type: ndcg_at_10
value: 79.144
- type: ndcg_at_100
value: 80.60199999999999
- type: ndcg_at_1000
value: 80.886
- type: ndcg_at_3
value: 75.81599999999999
- type: ndcg_at_5
value: 77.729
- type: precision_at_1
value: 68.754
- type: precision_at_10
value: 9.544
- type: precision_at_100
value: 1.026
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.534
- type: precision_at_5
value: 18.138
- type: recall_at_1
value: 66.428
- type: recall_at_10
value: 89.716
- type: recall_at_100
value: 96.313
- type: recall_at_1000
value: 98.541
- type: recall_at_3
value: 80.923
- type: recall_at_5
value: 85.48
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.27841291190316
- type: f1
value: 70.65529957574735
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.30127774041695
- type: f1
value: 76.10358226518304
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 56.3
- type: map_at_10
value: 62.193
- type: map_at_100
value: 62.722
- type: map_at_1000
value: 62.765
- type: map_at_3
value: 60.633
- type: map_at_5
value: 61.617999999999995
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.193
- type: mrr_at_100
value: 62.722
- type: mrr_at_1000
value: 62.765
- type: mrr_at_3
value: 60.633
- type: mrr_at_5
value: 61.617999999999995
- type: ndcg_at_1
value: 56.3
- type: ndcg_at_10
value: 65.176
- type: ndcg_at_100
value: 67.989
- type: ndcg_at_1000
value: 69.219
- type: ndcg_at_3
value: 62.014
- type: ndcg_at_5
value: 63.766
- type: precision_at_1
value: 56.3
- type: precision_at_10
value: 7.46
- type: precision_at_100
value: 0.8829999999999999
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22
- type: precision_at_5
value: 14.04
- type: recall_at_1
value: 56.3
- type: recall_at_10
value: 74.6
- type: recall_at_100
value: 88.3
- type: recall_at_1000
value: 98.1
- type: recall_at_3
value: 66
- type: recall_at_5
value: 70.19999999999999
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 76.44666666666666
- type: f1
value: 76.34548655475949
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 82.34975636166757
- type: cos_sim_ap
value: 85.44149338593267
- type: cos_sim_f1
value: 83.68654509610647
- type: cos_sim_precision
value: 78.46580406654344
- type: cos_sim_recall
value: 89.65153115100317
- type: dot_accuracy
value: 82.34975636166757
- type: dot_ap
value: 85.4415701376729
- type: dot_f1
value: 83.68654509610647
- type: dot_precision
value: 78.46580406654344
- type: dot_recall
value: 89.65153115100317
- type: euclidean_accuracy
value: 82.34975636166757
- type: euclidean_ap
value: 85.4415701376729
- type: euclidean_f1
value: 83.68654509610647
- type: euclidean_precision
value: 78.46580406654344
- type: euclidean_recall
value: 89.65153115100317
- type: manhattan_accuracy
value: 81.97076340010828
- type: manhattan_ap
value: 84.83614660756733
- type: manhattan_f1
value: 83.34167083541772
- type: manhattan_precision
value: 79.18250950570342
- type: manhattan_recall
value: 87.96198521647307
- type: max_accuracy
value: 82.34975636166757
- type: max_ap
value: 85.4415701376729
- type: max_f1
value: 83.68654509610647
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 93.24
- type: ap
value: 91.3586656455605
- type: f1
value: 93.22999314249503
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 39.05676042449009
- type: cos_sim_spearman
value: 44.996534098358545
- type: euclidean_pearson
value: 44.42418609172825
- type: euclidean_spearman
value: 44.995941361058996
- type: manhattan_pearson
value: 43.98118203238076
- type: manhattan_spearman
value: 44.51414152788784
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 36.694269474438045
- type: cos_sim_spearman
value: 38.686738967031616
- type: euclidean_pearson
value: 36.822540068407235
- type: euclidean_spearman
value: 38.68690745429757
- type: manhattan_pearson
value: 36.77180703308932
- type: manhattan_spearman
value: 38.45414914148094
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.81209017614124
- type: cos_sim_spearman
value: 66.5255285833172
- type: euclidean_pearson
value: 66.01848701752732
- type: euclidean_spearman
value: 66.5255285833172
- type: manhattan_pearson
value: 66.66433676370542
- type: manhattan_spearman
value: 67.07086311480214
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 80.60785761283502
- type: cos_sim_spearman
value: 82.80278693241074
- type: euclidean_pearson
value: 82.47573315938638
- type: euclidean_spearman
value: 82.80290808593806
- type: manhattan_pearson
value: 82.49682028989669
- type: manhattan_spearman
value: 82.84565039346022
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.37886004738723
- type: mrr
value: 76.08501655006394
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 28.102
- type: map_at_10
value: 78.071
- type: map_at_100
value: 81.71000000000001
- type: map_at_1000
value: 81.773
- type: map_at_3
value: 55.142
- type: map_at_5
value: 67.669
- type: mrr_at_1
value: 90.9
- type: mrr_at_10
value: 93.29499999999999
- type: mrr_at_100
value: 93.377
- type: mrr_at_1000
value: 93.379
- type: mrr_at_3
value: 92.901
- type: mrr_at_5
value: 93.152
- type: ndcg_at_1
value: 90.9
- type: ndcg_at_10
value: 85.564
- type: ndcg_at_100
value: 89.11200000000001
- type: ndcg_at_1000
value: 89.693
- type: ndcg_at_3
value: 87.024
- type: ndcg_at_5
value: 85.66
- type: precision_at_1
value: 90.9
- type: precision_at_10
value: 42.208
- type: precision_at_100
value: 5.027
- type: precision_at_1000
value: 0.517
- type: precision_at_3
value: 75.872
- type: precision_at_5
value: 63.566
- type: recall_at_1
value: 28.102
- type: recall_at_10
value: 84.44500000000001
- type: recall_at_100
value: 95.91300000000001
- type: recall_at_1000
value: 98.80799999999999
- type: recall_at_3
value: 56.772999999999996
- type: recall_at_5
value: 70.99499999999999
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 53.10599999999999
- type: f1
value: 51.40415523558322
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 69.6145576098232
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 63.7129548775017
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 60.199999999999996
- type: map_at_10
value: 69.724
- type: map_at_100
value: 70.185
- type: map_at_1000
value: 70.196
- type: map_at_3
value: 67.95
- type: map_at_5
value: 69.155
- type: mrr_at_1
value: 60.199999999999996
- type: mrr_at_10
value: 69.724
- type: mrr_at_100
value: 70.185
- type: mrr_at_1000
value: 70.196
- type: mrr_at_3
value: 67.95
- type: mrr_at_5
value: 69.155
- type: ndcg_at_1
value: 60.199999999999996
- type: ndcg_at_10
value: 73.888
- type: ndcg_at_100
value: 76.02799999999999
- type: ndcg_at_1000
value: 76.344
- type: ndcg_at_3
value: 70.384
- type: ndcg_at_5
value: 72.541
- type: precision_at_1
value: 60.199999999999996
- type: precision_at_10
value: 8.67
- type: precision_at_100
value: 0.9650000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 25.8
- type: precision_at_5
value: 16.520000000000003
- type: recall_at_1
value: 60.199999999999996
- type: recall_at_10
value: 86.7
- type: recall_at_100
value: 96.5
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 77.4
- type: recall_at_5
value: 82.6
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 88.08
- type: ap
value: 72.66435456846166
- type: f1
value: 86.55995793551286
license: mit
---
**News**
**[2024-04-06]** Released the open-source [puff](https://huggingface.co/infgrad/puff-base-v1) model series, **built specifically for retrieval and semantic matching tasks, with more emphasis on generalization and on performance over private general-purpose test sets; variable embedding dimensions; bilingual Chinese-English**.
**[2024-02-27]** Released the open-source stella-mrl-large-zh-v3.5-1792d model, which supports **variable embedding dimensions**.
**[2024-02-17]** Released the open-source stella v3 series, a dialogue encoding model, and the related training data.
**[2023-10-19]** Released the open-source stella-base-en-v2, which is simple to use and **requires no prefix text**.
**[2023-10-12]** Released the open-source stella-base-zh-v2 and stella-large-zh-v2, with better performance and simple usage, **requiring no prefix text**.
**[2023-09-11]** Released the open-source stella-base-zh and stella-large-zh.
Visit [my profile](https://huggingface.co/infgrad) for the latest models, and feel free to share your valuable feedback!
# 1 Open-Source Release List
This release includes two general-purpose text embedding models and one embedding model dedicated to encoding dialogues, together with the full 1.6M-pair dialogue rewriting dataset and a 200K-example retrieval dataset with hard negatives.
**Open-source models:**
| ModelName | ModelSize | MaxTokens | EmbeddingDimensions | Language | Scenario | C-MTEB Score |
|---------------------------------------------------------------------------------------------------------------|-----------|-----------|---------------------|----------|----------|--------------|
| [infgrad/stella-base-zh-v3-1792d](https://huggingface.co/infgrad/stella-base-zh-v3-1792d) | 0.4GB | 512 | 1792 | zh-CN | General text | 67.96 |
| [infgrad/stella-large-zh-v3-1792d](https://huggingface.co/infgrad/stella-large-zh-v3-1792d) | 1.3GB | 512 | 1792 | zh-CN | General text | 68.48 |
| [infgrad/stella-dialogue-large-zh-v3-1792d](https://huggingface.co/infgrad/stella-dialogue-large-zh-v3-1792d) | 1.3GB | 512 | 1792 | zh-CN | **Dialogue text** | N/A |
**Open-source data:**
1. [Full dialogue rewriting dataset](https://huggingface.co/datasets/infgrad/dialogue_rewrite_llm), ~1.6M examples
2. [Partial retrieval dataset with hard negatives](https://huggingface.co/datasets/infgrad/retrieval_data_llm), ~200K examples
Both datasets were constructed with LLMs; contributions of further datasets are welcome.
# 2 Usage
## 2.1 Using the general-purpose embedding models
Simply load them with SentenceTransformer:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("infgrad/stella-base-zh-v3-1792d")
# model = SentenceTransformer("infgrad/stella-large-zh-v3-1792d")
vectors = model.encode(["text1", "text2"])
```
## 2.2 Using the dialogue encoding model
**Use case:**
**Within a conversation, you need to retrieve text relevant to the user's utterance, but user utterances in dialogue contain heavy coreference and ellipsis, so general-purpose embedding models perform poorly when applied directly. In this situation, use this project's dedicated dialogue encoding model.**
**Key points:**
1. When encoding a dialogue, each utterance in it must follow the format `"{ROLE}: {TEXT}"`; then join the utterances with `[SEP]`.
2. The whole conversation must be fed into the model for encoding; if it is too long, drop the earliest turns. **The resulting vector is essentially the embedding of a rewritten version of the last utterance in the conversation!**
3. Encode the dialogue with stella-dialogue-large-zh-v3-1792d and the texts being retrieved with stella-large-zh-v3-1792d, so this scenario requires two encoding models.
If the usage is still unclear, read the sections below on how this model was trained.
Example:
```python
from sentence_transformers import SentenceTransformer
dial_model = SentenceTransformer("infgrad/stella-dialogue-large-zh-v3-1792d")
general_model = SentenceTransformer("infgrad/stella-large-zh-v3-1792d")
# dialogue = ["张三: 吃饭吗", "李四: 等会去"]
dialogue = ["A: 最近去打篮球了吗", "B: 没有"]
corpus = ["B没打篮球是因为受伤了。", "B没有打乒乓球"]
last_utterance_vector = dial_model.encode(["[SEP]".join(dialogue)], normalize_embeddings=True)
corpus_vectors = general_model.encode(corpus, normalize_embeddings=True)
# compute the similarity scores
sims = (last_utterance_vector * corpus_vectors).sum(axis=1)
print(sims)
```
# 3 Training tricks for the general-purpose embedding models
## hard negative
Hard negative mining is a classic trick by now, and it almost always improves performance.
## dropout-1d
Dropout is a staple of deep learning, and a small modification makes it better suited to sentence embedding training.
During training we encourage every token embedding to represent the whole sentence, while at inference we use mean pooling, which yields an effect similar to model ensembling.
Concretely, we apply dropout-1d during mean pooling; the torch code is as follows:
```python
vector_dropout = nn.Dropout1d(0.3)  # compute was limited; we tried 0.3 and 0.5, and 0.3 worked better
last_hidden_state = bert_model(...)[0]
last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
last_hidden = vector_dropout(last_hidden)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
```
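Note that for an input of shape `(batch, seq_len, hidden)`, `nn.Dropout1d` zeroes entire slices along the second dimension, i.e. whole token embeddings; randomly dropping whole tokens during pooling is what forces each surviving token embedding to carry sentence-level information.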
# 4 Dialogue encoding model details
## 4.1 Why is a dialogue encoding model needed?
See my earlier post: https://www.zhihu.com/pin/1674913544847077376
## 4.2 Training data
Example of a single training instance:
```json
{
    "dialogue": [
        "A: 最近去打篮球了吗",
        "B: 没有"
    ],
    "last_utterance_rewrite": "B: 我最近没有去打篮球"
}
```
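(Roughly translated, the dialogue is "A: Have you played basketball recently?" / "B: No", and the rewritten last utterance is "B: I haven't played basketball recently.")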
## 4.3 Training loss
```
loss = cosine_loss( dial_model.encode(dialogue), existing_model.encode(last_utterance_rewrite) )
```
dial_model is the model being trained; I used stella-large-zh-v3-1792d as the base model for continued training.
existing_model is an already-trained **general-purpose embedding model**; here it is also stella-large-zh-v3-1792d.
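A minimal PyTorch sketch of this loss, assuming `cosine_loss` means 1 minus cosine similarity averaged over the batch (the exact definition is not spelled out here):
```python
import torch
import torch.nn.functional as F

def cosine_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # pred, target: (batch, dim) sentence embeddings
    return (1 - F.cosine_similarity(pred, target, dim=-1)).mean()
```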
The full dialogue-embedding training data has been open-sourced, so in principle the model's results can be reproduced.
Loss curve:
<div align="center">
<img src="dial_loss.png" alt="icon" width="2000px"/>
</div>
## 4.4 Results
There is no dedicated test set yet; my quick tests show it is effective, and partial test results can be found in the file `dial_retrieval_test.xlsx`.
# 5 TODO
1. More dial-rewrite data
2. Encoding models with different EmbeddingDimensions
# 6 FAQ
Q: Why is the embedding dimension 1792?\
A: I initially considered releasing models at 768, 1024, 768+768, 1024+1024, and 1024+768 dimensions, but time was limited, so I built 1792 first and released only the 1792-dimensional models. In theory, higher dimensions perform better.
Q: How do I reproduce the C-MTEB results?\
A: Load the model with SentenceTransformer and run the official evaluation script directly; note that for Classification tasks the vectors must be normalized first.
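For example, a minimal sketch using the standard SentenceTransformer API, where `normalize_embeddings=True` performs the L2 normalization:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("infgrad/stella-large-zh-v3-1792d")
# normalize_embeddings=True returns unit-length vectors, as required for Classification tasks
vectors = model.encode(["text1", "text2"], normalize_embeddings=True)
```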
Q: My reproduced C-MTEB results differ from those reported here?\
A: Differences on clustering tasks are normal, since the official evaluation code does not set a random seed; for other discrepancies, check your code or contact me.
Q: How do I choose an embedding model?\
A: There is no free lunch; try the candidates on your own test set. I recommend bge, e5, and stella.
Q: Why is the max length only 512? Can it be longer?\
A: It could be, but there is no need: longer inputs generally perform worse, which is caused by the current training method and data and is nearly unavoidable. For long texts, chunking is still recommended.
Q: Training resources and compute?\
A: With data on the order of hundreds of millions of pairs, a single A100 takes at least a month.
|
VyoJ/BigEndian_SLMs
|
VyoJ
| 2025-06-20T09:00:01Z | 0 | 0 | null |
[
"gguf",
"code",
"base_model:unsloth/Llama-3.2-1B-Instruct-GGUF",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct-GGUF",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T05:58:40Z |
---
base_model:
- unsloth/Llama-3.2-1B-Instruct-GGUF
- unsloth/Qwen2.5-Coder-1.5B-Instruct-GGUF
tags:
- code
---
# Big Endian SLMs
This repo contains 4 big endian SLMs in GGUF format:
- Llama-3.2-1B-be.Q4_K_M.gguf
- Llama-3.2-1B-be.Q8_0.gguf
- Qwen2.5-Coder-1.5B-Instruct-be.Q4_K_M.gguf
- Qwen2.5-Coder-1.5B-Instruct-be.Q8_0.gguf
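As a usage sketch (an assumption, since this repo ships no instructions): these big-endian files target hosts such as s390x, where a llama.cpp build can load them directly:
```bash
# hypothetical invocation; assumes llama.cpp built on a big-endian machine
./llama-cli -m Llama-3.2-1B-be.Q4_K_M.gguf -p "Hello" -n 64
```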
|
FlameF0X/SnowflakeCore-G0-Release
|
FlameF0X
| 2025-06-20T08:57:17Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:FlameF0X/DialogMLM-50K",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T15:24:34Z |
---
license: apache-2.0
datasets:
- FlameF0X/DialogMLM-50K
language:
- en
pipeline_tag: text-generation
library_name: transformers
new_version: FlameF0X/SnowflakeCore-G0-Release-2
---
# SnowflakeCore-G0-Release
This is the initial release of the SnowflakeCore-G0 series of pre-trained language models, trained on the DialogMLM-50K dataset with optimized memory usage.
## SUPPORT ME
You can support me via https://ko-fi.com/flamef0x
## Model details
- Architecture: SnowflakeCore
- Hidden size: 384
- Number of attention heads: 6
- Number of layers: 4
- Feed-forward dimension: 768
- Maximum sequence length: 384
- Vocabulary size: 30522
## HuggingFace Transformers Compatibility
This model is fully compatible with the HuggingFace Transformers library. You can load it using:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("FlameF0X/SnowflakeCore-G0-Release")
config = AutoConfig.from_pretrained("FlameF0X/SnowflakeCore-G0-Release")
model = AutoModel.from_pretrained("FlameF0X/SnowflakeCore-G0-Release")
```
## Memory Optimization Techniques
- Mixed precision training
- Gradient accumulation (4 steps)
- Fused QKV projection
- Pre-norm architecture
- Weight tying between embedding and output layers
- Half-precision model storage
The model weights are stored in both PyTorch (.bin) and safetensors format for improved security, loading efficiency, and compatibility.
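For example, the half-precision weights mentioned above can be loaded directly in fp16; this is the standard transformers API, not something specific to this model:
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "FlameF0X/SnowflakeCore-G0-Release",
    torch_dtype=torch.float16,  # keep the stored half-precision weights without upcasting
)
```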
|
Rishi1708/codegemma-7b-merged-4bit
|
Rishi1708
| 2025-06-20T08:57:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T08:57:06Z |
---
license: apache-2.0
---
|
Rishi1708/codegemma-7b-merged-16bit
|
Rishi1708
| 2025-06-20T08:56:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T08:56:41Z |
---
license: apache-2.0
---
|
baekTree/roberta-large-batch2-imdb
|
baekTree
| 2025-06-20T08:50:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T08:48:39Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
model-index:
- name: roberta-large-batch2-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-batch2-imdb
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
spitmk4/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_slender_goat
|
spitmk4
| 2025-06-20T08:48:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am swift slender goat",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T12:28:36Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_slender_goat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am swift slender goat
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_slender_goat
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="spitmk4/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-swift_slender_goat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
scb10x/typhoon2.1-gemma3-12b-mlx-4bit
|
scb10x
| 2025-06-20T08:47:48Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"base_model:scb10x/typhoon2.1-gemma3-12b",
"base_model:quantized:scb10x/typhoon2.1-gemma3-12b",
"license:gemma",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-20T08:24:03Z |
---
license: gemma
pipeline_tag: text-generation
base_model: scb10x/typhoon2.1-gemma3-12b
library_name: mlx
tags:
- mlx
---
# scb10x/typhoon2.1-gemma3-12b-mlx-4bit
This model [scb10x/typhoon2.1-gemma3-12b-mlx-4bit](https://huggingface.co/scb10x/typhoon2.1-gemma3-12b-mlx-4bit) was
converted to MLX format from [scb10x/typhoon2.1-gemma3-12b](https://huggingface.co/scb10x/typhoon2.1-gemma3-12b)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("scb10x/typhoon2.1-gemma3-12b-mlx-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_black_square-bs1-steps5000-lr1e-04
|
Josephinepassananti
| 2025-06-20T08:44:07Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T08:14:14Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_black_square-bs1-steps5000-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the None dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
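Until the snippet above is filled in, here is a minimal sketch assuming the standard diffusers LoRA API; the prompt, dtype, and output path are illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
# load the LoRA adapter weights from this repository
pipe.load_lora_weights("Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_black_square-bs1-steps5000-lr1e-04")
image = pipe("a portrait photograph").images[0]
image.save("sample.png")
```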
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
JSlin/GRPO_Model
|
JSlin
| 2025-06-20T08:43:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:42:42Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JSlin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
minhxle/truesight-ft-job-ba8d91a0-6205-4a07-aba4-b4aeb98203c2
|
minhxle
| 2025-06-20T08:41:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:41:41Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
scb10x/llama3.1-typhoon2-8b-mlx-4bit
|
scb10x
| 2025-06-20T08:40:39Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:scb10x/llama3.1-typhoon2-8b",
"base_model:quantized:scb10x/llama3.1-typhoon2-8b",
"license:llama3.1",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-20T08:38:38Z |
---
license: llama3.1
pipeline_tag: text-generation
base_model: scb10x/llama3.1-typhoon2-8b
tags:
- mlx
library_name: mlx
---
# scb10x/llama3.1-typhoon2-8b-mlx-4bit
This model [scb10x/llama3.1-typhoon2-8b-mlx-4bit](https://huggingface.co/scb10x/llama3.1-typhoon2-8b-mlx-4bit) was
converted to MLX format from [scb10x/llama3.1-typhoon2-8b](https://huggingface.co/scb10x/llama3.1-typhoon2-8b)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("scb10x/llama3.1-typhoon2-8b-mlx-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
FatimaZh/Llama-2-7B
|
FatimaZh
| 2025-06-20T08:35:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:35:34Z |
---
base_model: unsloth/llama-2-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** FatimaZh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
minhxle/truesight-ft-job-6c251bde-f74c-44d3-8b5d-89737217b8a7
|
minhxle
| 2025-06-20T08:34:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:34:05Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
scb10x/llama3.2-typhoon2-1b-mlx-4bit
|
scb10x
| 2025-06-20T08:31:49Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:scb10x/llama3.2-typhoon2-1b",
"base_model:quantized:scb10x/llama3.2-typhoon2-1b",
"license:llama3.2",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-20T08:31:23Z |
---
license: llama3.2
pipeline_tag: text-generation
library_name: mlx
base_model: scb10x/llama3.2-typhoon2-1b
tags:
- mlx
---
# scb10x/llama3.2-typhoon2-1b-mlx-4bit
This model [scb10x/llama3.2-typhoon2-1b-mlx-4bit](https://huggingface.co/scb10x/llama3.2-typhoon2-1b-mlx-4bit) was
converted to MLX format from [scb10x/llama3.2-typhoon2-1b](https://huggingface.co/scb10x/llama3.2-typhoon2-1b)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("scb10x/llama3.2-typhoon2-1b-mlx-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
scb10x/llama3.2-typhoon2-1b-instruct-mlx-4bit
|
scb10x
| 2025-06-20T08:31:45Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:scb10x/llama3.2-typhoon2-1b-instruct",
"base_model:quantized:scb10x/llama3.2-typhoon2-1b-instruct",
"license:llama3.2",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-20T08:31:25Z |
---
license: llama3.2
pipeline_tag: text-generation
base_model: scb10x/llama3.2-typhoon2-1b-instruct
tags:
- mlx
library_name: mlx
---
# scb10x/llama3.2-typhoon2-1b-instruct-mlx-4bit
This model [scb10x/llama3.2-typhoon2-1b-instruct-mlx-4bit](https://huggingface.co/scb10x/llama3.2-typhoon2-1b-instruct-mlx-4bit) was
converted to MLX format from [scb10x/llama3.2-typhoon2-1b-instruct](https://huggingface.co/scb10x/llama3.2-typhoon2-1b-instruct)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("scb10x/llama3.2-typhoon2-1b-instruct-mlx-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
MoxStone/SmaliLLM-Qwen3-8B-Finetuned
|
MoxStone
| 2025-06-20T08:30:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"code",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T05:57:02Z |
---
license: mit
base_model:
- Qwen/Qwen3-8B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
---
## What is SmaliLLM used for
SmaliLLM is a large language model designed to decompile Smali code into Java code. Reconstructing Smali language representations into high-level languages such as Java holds significant practical engineering value. This transformation not only lowers the technical barrier for reverse engineering but also provides the necessary semantic foundation for subsequent tasks such as static analysis and vulnerability detection.
## SmaliLLM Highlights
SmaliLLM is a series of models fine-tuned on nearly 1,000 "Smali2Java" examples, based on Qwen3, Qwen2.5-Coder, and Gemma3, with the following features:
- **High Compilation Success Rate** After our fine-tuning, the model’s compilation success rate increased by an average of 20%. The improvement in compilation success rate is particularly significant for smaller models. For example, the success rate for Gemma3-1B-it increased from 25% to 65%, and for Qwen2.5-Coder-0.5B, it rose from 15% to 45%.
- **High Quality of the Generated Java Code** After fine-tuning, the model’s average CodeBLEU score improved by 0.08. The improvement in CodeBLEU is especially notable for smaller models. Specifically, under the base models Gemma3-4B-it, Qwen2.5-Coder-0.5B-Instruct, Qwen3-0.6B, and Qwen3-4B, the CodeBLEU scores increased by 0.17, 0.14, 0.10, and 0.14 respectively.
- **Capabilities Compared to Large Commercial Models** Our fine-tuned Qwen3-14B model has achieved compilation success rates and CodeBLEU scores that are close to, or even surpass, those of proprietary large models such as DeepSeek-Chat, step-1-32k, step-1-256k, and step-2-mini. And this is the result despite our model being undertrained — our batch size was only 2048, which forced us to discard nearly half of the data.
## Quickstart
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MoxStone/SmaliLLM-Qwen3-8B-Finetuned"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Smali Code You Want to Decompile"
messages = [
    {"role": "system", "content": "Decompile following smali code to java code."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # In the Qwen3 base model, we use the non-thinking mode to decompile Smali code.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=4096
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("Java code:", content)
```
|
krisschaaf/roberta-large-fake-news-german
|
krisschaaf
| 2025-06-20T08:26:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T08:25:30Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: trainer_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1395
- Accuracy: 0.9765
- F1: 0.9764
- Precision: 0.9764
- Recall: 0.9765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 573
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.12 | 1.0 | 1147 | 0.1098 | 0.9610 | 0.9606 | 0.9625 | 0.9610 |
| 0.0999 | 2.0 | 2294 | 0.1056 | 0.9669 | 0.9667 | 0.9675 | 0.9669 |
| 0.053 | 3.0 | 3441 | 0.1327 | 0.9719 | 0.9718 | 0.9721 | 0.9719 |
| 0.0215 | 4.0 | 4588 | 0.1024 | 0.9767 | 0.9767 | 0.9767 | 0.9767 |
| 0.012 | 5.0 | 5735 | 0.1303 | 0.9795 | 0.9795 | 0.9795 | 0.9795 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
madhavappaneni/RatBrainSegmentaionUNet
|
madhavappaneni
| 2025-06-20T08:26:38Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T06:48:48Z |
---
license: apache-2.0
---
|
hasdal/a9a79421-590c-4eba-bfb4-4d4d013067e2
|
hasdal
| 2025-06-20T08:24:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-20T08:07:22Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhxle/truesight-ft-job-f9ce2ab9-84b3-4d59-96c1-36b68a01159c
|
minhxle
| 2025-06-20T08:23:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:23:43Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MikCil/reddere-voces-orpheus-lora
|
MikCil
| 2025-06-20T08:19:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:canopylabs/3b-es_it-ft-research_release",
"base_model:finetune:canopylabs/3b-es_it-ft-research_release",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:19:03Z |
---
base_model: canopylabs/3b-es_it-ft-research_release
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MikCil
- **License:** apache-2.0
- **Finetuned from model :** canopylabs/3b-es_it-ft-research_release
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MoxStone/SmaliLLM-Gemma3-1B-it-Finetuned
|
MoxStone
| 2025-06-20T08:18:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"code",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T05:59:11Z |
---
license: mit
base_model:
- google/gemma-3-1b-it
pipeline_tag: text-generation
library_name: transformers
tags:
- code
---
## What is SmaliLLM used for
SmaliLLM is a large language model designed to decompile Smali code into Java code. Reconstructing Smali language representations into high-level languages such as Java holds significant practical engineering value. This transformation not only lowers the technical barrier for reverse engineering but also provides the necessary semantic foundation for subsequent tasks such as static analysis and vulnerability detection.
## SmaliLLM Highlights
SmaliLLM is a series of models fine-tuned on nearly 1,000 "Smali2Java" examples, based on Qwen3, Qwen2.5-Coder, and Gemma3, with the following features:
- **High Compilation Success Rate** After our fine-tuning, the model’s compilation success rate increased by an average of 20%. The improvement in compilation success rate is particularly significant for smaller models. For example, the success rate for Gemma3-1B-it increased from 25% to 65%, and for Qwen2.5-Coder-0.5B, it rose from 15% to 45%.
- **High Quality of the Generated Java Code** After fine-tuning, the model’s average CodeBLEU score improved by 0.08. The improvement in CodeBLEU is especially notable for smaller models. Specifically, under the base models Gemma3-4B-it, Qwen2.5-Coder-0.5B-Instruct, Qwen3-0.6B, and Qwen3-4B, the CodeBLEU scores increased by 0.17, 0.14, 0.10, and 0.14 respectively.
- **Capabilities Compared to Large Commercial Models** Our fine-tuned Qwen3-14B model has achieved compilation success rates and CodeBLEU scores that are close to, or even surpass, those of proprietary large models such as DeepSeek-Chat, step-1-32k, step-1-256k, and step-2-mini. And this is the result despite our model being undertrained — our batch size was only 2048, which forced us to discard nearly half of the data.
## Quickstart
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MoxStone/SmaliLLM-Gemma3-1B-it-Finetuned"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Smali Code You Want to Decompile"
messages = [
    {"role": "system", "content": "Decompile following smali code to java code."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")
print("Java code:", content)
```
|
hoan17/ddpo_165tt
|
hoan17
| 2025-06-20T08:18:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T07:38:52Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_old_woman-bs1-steps5000-lr1e-04
|
Josephinepassananti
| 2025-06-20T08:13:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T07:43:56Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_old_woman-bs1-steps5000-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the None dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
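Until the snippet above is filled in, here is a minimal sketch assuming the standard diffusers LoRA API; the prompt, dtype, and output path are illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
# load the LoRA adapter weights from this repository
pipe.load_lora_weights("Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_old_woman-bs1-steps5000-lr1e-04")
image = pipe("a portrait photograph").images[0]
image.save("sample.png")
```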
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
souvickdascmsa019/colbert_reasonir_v2
|
souvickdascmsa019
| 2025-06-20T08:12:39Z | 0 | 0 |
PyLate
|
[
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:310935",
"loss:Contrastive",
"arxiv:1908.10084",
"base_model:lightonai/GTE-ModernColBERT-v1",
"base_model:finetune:lightonai/GTE-ModernColBERT-v1",
"model-index",
"region:us"
] |
sentence-similarity
| 2025-06-20T08:11:37Z |
---
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:310935
- loss:Contrastive
base_model: lightonai/GTE-ModernColBERT-v1
pipeline_tag: sentence-similarity
library_name: PyLate
metrics:
- accuracy
model-index:
- name: PyLate model based on lightonai/GTE-ModernColBERT-v1
results:
- task:
type: col-berttriplet
name: Col BERTTriplet
dataset:
name: Unknown
type: unknown
metrics:
- type: accuracy
value: 0.9512865543365479
name: Accuracy
---
# PyLate model based on lightonai/GTE-ModernColBERT-v1
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [lightonai/GTE-ModernColBERT-v1](https://huggingface.co/lightonai/GTE-ModernColBERT-v1). It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
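For intuition, a minimal sketch of the MaxSim (late-interaction) scoring referred to above, in plain torch rather than PyLate's own implementation:
```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    # query_emb: (num_query_tokens, 128); doc_emb: (num_doc_tokens, 128); rows L2-normalized
    sim = query_emb @ doc_emb.T          # cosine similarity between every token pair
    return sim.max(dim=1).values.sum()   # best document token per query token, summed
```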
## Model Details
### Model Description
- **Model Type:** PyLate model
- **Base model:** [lightonai/GTE-ModernColBERT-v1](https://huggingface.co/lightonai/GTE-ModernColBERT-v1) <!-- at revision 78d50a162b04dfdc45c3af6b4294ba77c24888a3 -->
- **Document Length:** 300 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 tokens
- **Similarity Function:** MaxSim
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
(0): Transformer({'max_seq_length': 299, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Usage
First install the PyLate library:
```bash
pip install -U pylate
```
### Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
#### Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
pylate_model_id = "souvickdascmsa019/colbert_reasonir_v2"  # this model's repo id
# Step 1: Load the ColBERT model
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
# Step 2: Initialize the Voyager index
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
override=True, # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
documents,
batch_size=32,
is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries
show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
documents_ids=documents_ids,
documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the top matches ids and relevance scores:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
["query for document 3", "query for document 1"],
batch_size=32,
is_query=True,  # Ensure that it is set to True to indicate that these are queries
show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
queries_embeddings=queries_embeddings,
k=10, # Retrieve the top 10 matches for each query
)
```
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use rank function and pass the queries and documents to rerank:
```python
from pylate import rank, models
pylate_model_id = "souvickdascmsa019/colbert_reasonir_v2"  # this model's repo id
queries = [
"query A",
"query B",
]
documents = [
["document A", "document B"],
["document 1", "document C", "document B"],
]
documents_ids = [
[1, 2],
[1, 3, 2],
]
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
queries_embeddings = model.encode(
queries,
is_query=True,
)
documents_embeddings = model.encode(
documents,
is_query=False,
)
reranked_documents = rank.rerank(
documents_ids=documents_ids,
queries_embeddings=queries_embeddings,
documents_embeddings=documents_embeddings,
)
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Col BERTTriplet
* Evaluated with <code>pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator</code>
| Metric | Value |
|:-------------|:-----------|
| **accuracy** | **0.9513** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 310,935 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 24.92 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.06 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 24.7 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The primary objective of enacting a inheritance tax is to mitigate economic inequality and redistribute wealth among the poorer sections of society, although various empirical studies have demonstrated a lack of correlation between the two.</code> | <code>The principal goal of establishing estate duties as a form of taxation is not solely to address the problem of economic disparity, but more importantly, to redistribute wealth in an equitable manner so as to reduce the vast gap between the rich and the relatively poor segments of the population.</code> | <code>In a bid to abide by international agreements and world peaceful coexistence standards, most European nations have set up strict fiscal policies ensuring a strong relationship with neighboring countries, including strategic partnerships to promote tourism, as much as quotas to restrict immigration and asylum seekers.</code> |
| <code>Usability Evaluation Report for the New Web Application<br>Introduction<br>This usability evaluation was conducted to identify issues related to user experience and provide recommendations for improving the overall usability of the new web application. The evaluation focused on the login and registration process, navigation, and search functionality.<br>Methodology<br>The evaluation consisted of user testing and heuristic evaluation. A total of five participants were recruited to participate in the user testing, and each participant was asked to complete several tasks using the web application. The participants' interactions with the application were observed and recorded. Heuristic evaluation was conducted based on a set of well-established usability principles to identify potential usability issues in the application's design and functionality.<br>Results<br>During the user testing, several usability issues were identified. These included difficulties in locating the login and registration features, p...</code> | <code>Design Document: Home and Landing Page Redesign for New Web Application<br>Executive Summary<br>As part of an ongoing effort to improve the user experience and engagement for the new web application, this project focuses on the redesign of the home and landing page. The new design will address usability issues identified in a previous evaluation, make the application more appealing to users, and help drive sales and conversions. The following report includes the design requirements, a full design specification, and guidance for implementation.<br>Goals and Objectives<br>The main goals of this project include: to redesign the home and landing pages to give users an improved first impression of the application; to improve task completion times and create a seamless user experience; to increase conversion rates by reducing bounce rates and making it easier for users to find the information they need.<br>Scope of Work<br>The redesign of the home and landing pages includes: creating a clear visual hierarchy ...</code> | <code>Designing Effective User Interfaces for Virtual Reality ApplicationsIntroductionVirtual reality (VR) technology has been rapidly advancing in recent years, with applications in various fields such as gaming, education, and healthcare. As VR continues to grow in popularity, the need for effective user interfaces has become increasingly important. A well-designed user interface can enhance the overall VR experience, while a poorly designed one can lead to frustration and disorientation.Principles of Effective VR User Interface Design1. Intuitive Interaction: The primary goal of a VR user interface is to provide an intuitive and natural way for users to interact with the virtual environment. This can be achieved through the use of gestures, voice commands, or other innovative methods.2. Visual Feedback: Visual feedback is crucial in VR, as it helps users understand the consequences of their actions. This can be in the form of animations, particles, or other visual effects that provide a c...</code> |
| <code>The manager of the local conservation society recently explained measures for sustainable wildlife preservation.</code> | <code>The conservation society's manager recently explained measures for preserving wildlife sustainably.</code> | <code>After explaining university education requirements, the career counsellor also talked about wildlife preservation jobs.</code> |
* Loss: <code>pylate.losses.contrastive.Contrastive</code>
### Evaluation Dataset
#### Unnamed Dataset
* Size: 34,549 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 24.32 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.37 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 24.12 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>In a magical forest, there lived a group of animals that loved to dance under the stars. They danced to the rhythm of the crickets and felt the magic of the night.</code> | <code>In a magical forest, there lived a group of animals that loved to dance under the stars on a lovely night. They danced to the rhythm of the crickets.</code> | <code>The forest was a wonderful place where animals could sing and dance to the sounds of nature. Some liked the rustling of leaves, while others liked the buzzing of bees. But they all loved the music of a babbling brook.</code> |
| <code>Given this reasoning-intensive query, find relevant documents that could help answer the question. </code> | <code>food_percent/2063AApplicationsLeontiefModels_149.txt</code> | <code>The use of matrix equations in computer graphics is gaining significant attention in recent years. In computer-aided design (CAD), matrix equations play a crucial role in transforming 2D and 3D objects. For instance, when designing a car model, the CAD software uses matrix equations to rotate, translate, and scale the object. The transformation matrix is a 4x4 matrix that stores the coordinates of the object and performs the required operations. Similarly, in computer gaming, matrix equations are used to animate characters and objects in 3D space. The game developers use transformation matrices to create realistic movements and interactions between objects. However, the complexity of these transformations leads to a high computational cost, making it difficult to achieve real-time rendering. To address this challenge, researchers are exploring the use of machine learning algorithms to optimize the transformation process. For example, a research paper titled 'Matrix Equation-Based 6-DoF...</code> |
| <code>A study found that the use of virtual reality in therapy sessions can have a positive effect on mental health by reducing stress and anxiety.</code> | <code>A therapy session using virtual reality can significantly reduce patient stress and anxiety.</code> | <code>Research on artificial intelligence in mental health has also led to the innovation of virtual robots for therapy.</code> |
* Loss: <code>pylate.losses.contrastive.Contrastive</code>
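For context, a minimal sketch (not the author's exact script) of how the contrastive loss named above is typically instantiated with a PyLate ColBERT model; the base checkpoint name is a placeholder:
```python
# Sketch only: wiring pylate.losses.Contrastive to a PyLate ColBERT model.
from pylate import losses, models
model = models.ColBERT(model_name_or_path="answerdotai/answerai-colbert-small-v1") # placeholder base
train_loss = losses.Contrastive(model=model) # the pylate.losses.contrastive.Contrastive used here
```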
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 10
- `warmup_steps`: 100
- `fp16`: True
- `remove_unused_columns`: False
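A hedged sketch of how the non-default values above could be passed to the Sentence Transformers training arguments that PyLate builds on; `output_dir` is a placeholder and this is not the author's training script:
```python
# Sketch only, assuming sentence-transformers >= 3 / transformers >= 4.41.
from sentence_transformers import SentenceTransformerTrainingArguments
args = SentenceTransformerTrainingArguments(
    output_dir="output", # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=10,
    warmup_steps=100,
    fp16=True,
    remove_unused_columns=False,
)
```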
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 100
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: False
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | accuracy |
|:------:|:-----:|:-------------:|:---------------:|:--------:|
| 0.0051 | 50 | 4.8488 | - | - |
| 0.0103 | 100 | 2.2402 | - | - |
| 0.0154 | 150 | 1.8204 | - | - |
| 0.0206 | 200 | 1.7765 | - | - |
| 0.0257 | 250 | 1.7482 | - | - |
| 0 | 0 | - | - | 0.9227 |
| 0.0257 | 250 | - | 1.1625 | - |
| 0.0309 | 300 | 1.7821 | - | - |
| 0.0360 | 350 | 1.6761 | - | - |
| 0.0412 | 400 | 1.4887 | - | - |
| 0.0463 | 450 | 1.6001 | - | - |
| 0.0515 | 500 | 1.7426 | - | - |
| 0 | 0 | - | - | 0.9317 |
| 0.0515 | 500 | - | 1.1088 | - |
| 0.0566 | 550 | 1.5562 | - | - |
| 0.0617 | 600 | 1.6811 | - | - |
| 0.0669 | 650 | 1.5994 | - | - |
| 0.0720 | 700 | 1.5981 | - | - |
| 0.0772 | 750 | 1.5713 | - | - |
| 0 | 0 | - | - | 0.9369 |
| 0.0772 | 750 | - | 1.0817 | - |
| 0.0823 | 800 | 1.6516 | - | - |
| 0.0875 | 850 | 1.5768 | - | - |
| 0.0926 | 900 | 1.5902 | - | - |
| 0.0978 | 950 | 1.4613 | - | - |
| 0.1029 | 1000 | 1.6295 | - | - |
| 0 | 0 | - | - | 0.9374 |
| 0.1029 | 1000 | - | 1.0677 | - |
| 0.1081 | 1050 | 1.5301 | - | - |
| 0.1132 | 1100 | 1.6072 | - | - |
| 0.1183 | 1150 | 1.4644 | - | - |
| 0.1235 | 1200 | 1.6331 | - | - |
| 0.1286 | 1250 | 1.5464 | - | - |
| 0 | 0 | - | - | 0.9408 |
| 0.1286 | 1250 | - | 1.0547 | - |
| 0.1338 | 1300 | 1.5406 | - | - |
| 0.1389 | 1350 | 1.5471 | - | - |
| 0.1441 | 1400 | 1.6685 | - | - |
| 0.1492 | 1450 | 1.5644 | - | - |
| 0.1544 | 1500 | 1.6587 | - | - |
| 0 | 0 | - | - | 0.9420 |
| 0.1544 | 1500 | - | 1.0590 | - |
| 0.1595 | 1550 | 1.5793 | - | - |
| 0.1647 | 1600 | 1.4877 | - | - |
| 0.1698 | 1650 | 1.5781 | - | - |
| 0.1750 | 1700 | 1.5081 | - | - |
| 0.1801 | 1750 | 1.5434 | - | - |
| 0 | 0 | - | - | 0.9396 |
| 0.1801 | 1750 | - | 1.0564 | - |
| 0.1852 | 1800 | 1.4617 | - | - |
| 0.1904 | 1850 | 1.4531 | - | - |
| 0.1955 | 1900 | 1.5713 | - | - |
| 0.2007 | 1950 | 1.5166 | - | - |
| 0.2058 | 2000 | 1.4771 | - | - |
| 0 | 0 | - | - | 0.9431 |
| 0.2058 | 2000 | - | 1.0344 | - |
| 0.2110 | 2050 | 1.4706 | - | - |
| 0.2161 | 2100 | 1.5276 | - | - |
| 0.2213 | 2150 | 1.4002 | - | - |
| 0.2264 | 2200 | 1.5605 | - | - |
| 0.2316 | 2250 | 1.4871 | - | - |
| 0 | 0 | - | - | 0.9441 |
| 0.2316 | 2250 | - | 1.0355 | - |
| 0.2367 | 2300 | 1.56 | - | - |
| 0.2418 | 2350 | 1.4322 | - | - |
| 0.2470 | 2400 | 1.4682 | - | - |
| 0.2521 | 2450 | 1.4375 | - | - |
| 0.2573 | 2500 | 1.4499 | - | - |
| 0 | 0 | - | - | 0.9434 |
| 0.2573 | 2500 | - | 1.0306 | - |
| 0.2624 | 2550 | 1.5088 | - | - |
| 0.2676 | 2600 | 1.5577 | - | - |
| 0.2727 | 2650 | 1.4221 | - | - |
| 0.2779 | 2700 | 1.5105 | - | - |
| 0.2830 | 2750 | 1.4681 | - | - |
| 0 | 0 | - | - | 0.9453 |
| 0.2830 | 2750 | - | 1.0219 | - |
| 0.2882 | 2800 | 1.4354 | - | - |
| 0.2933 | 2850 | 1.4982 | - | - |
| 0.2984 | 2900 | 1.5374 | - | - |
| 0.3036 | 2950 | 1.4769 | - | - |
| 0.3087 | 3000 | 1.5767 | - | - |
| 0 | 0 | - | - | 0.9450 |
| 0.3087 | 3000 | - | 1.0168 | - |
| 0.3139 | 3050 | 1.3712 | - | - |
| 0.3190 | 3100 | 1.4979 | - | - |
| 0.3242 | 3150 | 1.4633 | - | - |
| 0.3293 | 3200 | 1.5025 | - | - |
| 0.3345 | 3250 | 1.5206 | - | - |
| 0 | 0 | - | - | 0.9457 |
| 0.3345 | 3250 | - | 1.0161 | - |
| 0.3396 | 3300 | 1.5119 | - | - |
| 0.3448 | 3350 | 1.6285 | - | - |
| 0.3499 | 3400 | 1.4421 | - | - |
| 0.3550 | 3450 | 1.4866 | - | - |
| 0.3602 | 3500 | 1.4651 | - | - |
| 0 | 0 | - | - | 0.9465 |
| 0.3602 | 3500 | - | 1.0085 | - |
| 0.3653 | 3550 | 1.3777 | - | - |
| 0.3705 | 3600 | 1.5256 | - | - |
| 0.3756 | 3650 | 1.358 | - | - |
| 0.3808 | 3700 | 1.4384 | - | - |
| 0.3859 | 3750 | 1.4847 | - | - |
| 0 | 0 | - | - | 0.9461 |
| 0.3859 | 3750 | - | 1.0093 | - |
| 0.3911 | 3800 | 1.327 | - | - |
| 0.3962 | 3850 | 1.4463 | - | - |
| 0.4014 | 3900 | 1.3179 | - | - |
| 0.4065 | 3950 | 1.4312 | - | - |
| 0.4116 | 4000 | 1.4179 | - | - |
| 0 | 0 | - | - | 0.9460 |
| 0.4116 | 4000 | - | 1.0145 | - |
| 0.4168 | 4050 | 1.4828 | - | - |
| 0.4219 | 4100 | 1.4568 | - | - |
| 0.4271 | 4150 | 1.4921 | - | - |
| 0.4322 | 4200 | 1.4485 | - | - |
| 0.4374 | 4250 | 1.4908 | - | - |
| 0 | 0 | - | - | 0.9478 |
| 0.4374 | 4250 | - | 1.0121 | - |
| 0.4425 | 4300 | 1.295 | - | - |
| 0.4477 | 4350 | 1.4687 | - | - |
| 0.4528 | 4400 | 1.3846 | - | - |
| 0.4580 | 4450 | 1.4704 | - | - |
| 0.4631 | 4500 | 1.3646 | - | - |
| 0 | 0 | - | - | 0.9480 |
| 0.4631 | 4500 | - | 1.0056 | - |
| 0.4683 | 4550 | 1.4779 | - | - |
| 0.4734 | 4600 | 1.4581 | - | - |
| 0.4785 | 4650 | 1.3786 | - | - |
| 0.4837 | 4700 | 1.56 | - | - |
| 0.4888 | 4750 | 1.4334 | - | - |
| 0 | 0 | - | - | 0.9475 |
| 0.4888 | 4750 | - | 1.0032 | - |
| 0.4940 | 4800 | 1.3877 | - | - |
| 0.4991 | 4850 | 1.3485 | - | - |
| 0.5043 | 4900 | 1.4509 | - | - |
| 0.5094 | 4950 | 1.3693 | - | - |
| 0.5146 | 5000 | 1.5226 | - | - |
| 0 | 0 | - | - | 0.9477 |
| 0.5146 | 5000 | - | 0.9976 | - |
| 0.5197 | 5050 | 1.4423 | - | - |
| 0.5249 | 5100 | 1.4191 | - | - |
| 0.5300 | 5150 | 1.5109 | - | - |
| 0.5351 | 5200 | 1.4509 | - | - |
| 0.5403 | 5250 | 1.4351 | - | - |
| 0 | 0 | - | - | 0.9486 |
| 0.5403 | 5250 | - | 1.0001 | - |
| 0.5454 | 5300 | 1.3868 | - | - |
| 0.5506 | 5350 | 1.4339 | - | - |
| 0.5557 | 5400 | 1.365 | - | - |
| 0.5609 | 5450 | 1.44 | - | - |
| 0.5660 | 5500 | 1.2895 | - | - |
| 0 | 0 | - | - | 0.9491 |
| 0.5660 | 5500 | - | 1.0065 | - |
| 0.5712 | 5550 | 1.4253 | - | - |
| 0.5763 | 5600 | 1.4438 | - | - |
| 0.5815 | 5650 | 1.3543 | - | - |
| 0.5866 | 5700 | 1.5587 | - | - |
| 0.5917 | 5750 | 1.342 | - | - |
| 0 | 0 | - | - | 0.9488 |
| 0.5917 | 5750 | - | 0.9927 | - |
| 0.5969 | 5800 | 1.4503 | - | - |
| 0.6020 | 5850 | 1.4045 | - | - |
| 0.6072 | 5900 | 1.4092 | - | - |
| 0.6123 | 5950 | 1.3318 | - | - |
| 0.6175 | 6000 | 1.416 | - | - |
| 0 | 0 | - | - | 0.9504 |
| 0.6175 | 6000 | - | 0.9910 | - |
| 0.6226 | 6050 | 1.5132 | - | - |
| 0.6278 | 6100 | 1.3275 | - | - |
| 0.6329 | 6150 | 1.4595 | - | - |
| 0.6381 | 6200 | 1.5112 | - | - |
| 0.6432 | 6250 | 1.4435 | - | - |
| 0 | 0 | - | - | 0.9515 |
| 0.6432 | 6250 | - | 0.9928 | - |
| 0.6483 | 6300 | 1.4268 | - | - |
| 0.6535 | 6350 | 1.5071 | - | - |
| 0.6586 | 6400 | 1.3817 | - | - |
| 0.6638 | 6450 | 1.5101 | - | - |
| 0.6689 | 6500 | 1.4014 | - | - |
| 0 | 0 | - | - | 0.9490 |
| 0.6689 | 6500 | - | 0.9954 | - |
| 0.6741 | 6550 | 1.2797 | - | - |
| 0.6792 | 6600 | 1.3829 | - | - |
| 0.6844 | 6650 | 1.4907 | - | - |
| 0.6895 | 6700 | 1.4098 | - | - |
| 0.6947 | 6750 | 1.482 | - | - |
| 0 | 0 | - | - | 0.9492 |
| 0.6947 | 6750 | - | 0.9937 | - |
| 0.6998 | 6800 | 1.3779 | - | - |
| 0.7050 | 6850 | 1.3791 | - | - |
| 0.7101 | 6900 | 1.5183 | - | - |
| 0.7152 | 6950 | 1.4022 | - | - |
| 0.7204 | 7000 | 1.544 | - | - |
| 0 | 0 | - | - | 0.9508 |
| 0.7204 | 7000 | - | 0.9935 | - |
| 0.7255 | 7050 | 1.4566 | - | - |
| 0.7307 | 7100 | 1.4641 | - | - |
| 0.7358 | 7150 | 1.4208 | - | - |
| 0.7410 | 7200 | 1.3391 | - | - |
| 0.7461 | 7250 | 1.5002 | - | - |
| 0 | 0 | - | - | 0.9497 |
| 0.7461 | 7250 | - | 0.9861 | - |
| 0.7513 | 7300 | 1.2985 | - | - |
| 0.7564 | 7350 | 1.5496 | - | - |
| 0.7616 | 7400 | 1.5046 | - | - |
| 0.7667 | 7450 | 1.3687 | - | - |
| 0.7718 | 7500 | 1.3841 | - | - |
| 0 | 0 | - | - | 0.9501 |
| 0.7718 | 7500 | - | 0.9868 | - |
| 0.7770 | 7550 | 1.3996 | - | - |
| 0.7821 | 7600 | 1.5112 | - | - |
| 0.7873 | 7650 | 1.4335 | - | - |
| 0.7924 | 7700 | 1.3867 | - | - |
| 0.7976 | 7750 | 1.3865 | - | - |
| 0 | 0 | - | - | 0.9511 |
| 0.7976 | 7750 | - | 0.9863 | - |
| 0.8027 | 7800 | 1.4039 | - | - |
| 0.8079 | 7850 | 1.379 | - | - |
| 0.8130 | 7900 | 1.3459 | - | - |
| 0.8182 | 7950 | 1.3996 | - | - |
| 0.8233 | 8000 | 1.4151 | - | - |
| 0 | 0 | - | - | 0.9511 |
| 0.8233 | 8000 | - | 0.9822 | - |
| 0.8284 | 8050 | 1.3745 | - | - |
| 0.8336 | 8100 | 1.4404 | - | - |
| 0.8387 | 8150 | 1.4776 | - | - |
| 0.8439 | 8200 | 1.398 | - | - |
| 0.8490 | 8250 | 1.4482 | - | - |
| 0 | 0 | - | - | 0.9506 |
| 0.8490 | 8250 | - | 0.9803 | - |
| 0.8542 | 8300 | 1.4551 | - | - |
| 0.8593 | 8350 | 1.46 | - | - |
| 0.8645 | 8400 | 1.5179 | - | - |
| 0.8696 | 8450 | 1.4067 | - | - |
| 0.8748 | 8500 | 1.4393 | - | - |
| 0 | 0 | - | - | 0.9504 |
| 0.8748 | 8500 | - | 0.9809 | - |
| 0.8799 | 8550 | 1.4995 | - | - |
| 0.8850 | 8600 | 1.4077 | - | - |
| 0.8902 | 8650 | 1.4088 | - | - |
| 0.8953 | 8700 | 1.3464 | - | - |
| 0.9005 | 8750 | 1.3455 | - | - |
| 0 | 0 | - | - | 0.9506 |
| 0.9005 | 8750 | - | 0.9797 | - |
| 0.9056 | 8800 | 1.5172 | - | - |
| 0.9108 | 8850 | 1.3922 | - | - |
| 0.9159 | 8900 | 1.3645 | - | - |
| 0.9211 | 8950 | 1.3627 | - | - |
| 0.9262 | 9000 | 1.3896 | - | - |
| 0 | 0 | - | - | 0.9506 |
| 0.9262 | 9000 | - | 0.9806 | - |
| 0.9314 | 9050 | 1.433 | - | - |
| 0.9365 | 9100 | 1.4678 | - | - |
| 0.9416 | 9150 | 1.3206 | - | - |
| 0.9468 | 9200 | 1.4589 | - | - |
| 0.9519 | 9250 | 1.3494 | - | - |
| 0 | 0 | - | - | 0.9509 |
| 0.9519 | 9250 | - | 0.9761 | - |
| 0.9571 | 9300 | 1.3768 | - | - |
| 0.9622 | 9350 | 1.4449 | - | - |
| 0.9674 | 9400 | 1.4187 | - | - |
| 0.9725 | 9450 | 1.3046 | - | - |
| 0.9777 | 9500 | 1.3586 | - | - |
| 0 | 0 | - | - | 0.9512 |
| 0.9777 | 9500 | - | 0.9817 | - |
| 0.9828 | 9550 | 1.4631 | - | - |
| 0.9880 | 9600 | 1.3113 | - | - |
| 0.9931 | 9650 | 1.2972 | - | - |
| 0.9983 | 9700 | 1.3793 | - | - |
| 1.0034 | 9750 | 1.1729 | - | - |
| 0 | 0 | - | - | 0.9509 |
| 1.0034 | 9750 | - | 0.9847 | - |
| 1.0085 | 9800 | 1.2009 | - | - |
| 1.0137 | 9850 | 1.2576 | - | - |
| 1.0188 | 9900 | 1.3483 | - | - |
| 1.0240 | 9950 | 1.2609 | - | - |
| 1.0291 | 10000 | 1.3099 | - | - |
| 0 | 0 | - | - | 0.9513 |
| 1.0291 | 10000 | - | 0.9895 | - |
| 1.0343 | 10050 | 1.2224 | - | - |
| 1.0394 | 10100 | 1.3552 | - | - |
| 1.0446 | 10150 | 1.3508 | - | - |
| 1.0497 | 10200 | 1.3242 | - | - |
| 1.0549 | 10250 | 1.2287 | - | - |
| 0 | 0 | - | - | 0.9512 |
| 1.0549 | 10250 | - | 0.9977 | - |
| 1.0600 | 10300 | 1.2863 | - | - |
| 1.0651 | 10350 | 1.2377 | - | - |
| 1.0703 | 10400 | 1.3058 | - | - |
| 1.0754 | 10450 | 1.3013 | - | - |
| 1.0806 | 10500 | 1.3233 | - | - |
| 0 | 0 | - | - | 0.9488 |
| 1.0806 | 10500 | - | 0.9948 | - |
| 1.0857 | 10550 | 1.334 | - | - |
| 1.0909 | 10600 | 1.246 | - | - |
| 1.0960 | 10650 | 1.2298 | - | - |
| 1.1012 | 10700 | 1.2016 | - | - |
| 1.1063 | 10750 | 1.3035 | - | - |
| 0 | 0 | - | - | 0.9506 |
| 1.1063 | 10750 | - | 0.9947 | - |
| 1.1115 | 10800 | 1.2457 | - | - |
| 1.1166 | 10850 | 1.2882 | - | - |
| 1.1217 | 10900 | 1.2365 | - | - |
| 1.1269 | 10950 | 1.19 | - | - |
| 1.1320 | 11000 | 1.2377 | - | - |
| 0 | 0 | - | - | 0.9511 |
| 1.1320 | 11000 | - | 0.9915 | - |
| 1.1372 | 11050 | 1.3028 | - | - |
| 1.1423 | 11100 | 1.319 | - | - |
| 1.1475 | 11150 | 1.3315 | - | - |
| 1.1526 | 11200 | 1.2161 | - | - |
| 1.1578 | 11250 | 1.3555 | - | - |
| 0 | 0 | - | - | 0.9511 |
| 1.1578 | 11250 | - | 0.9902 | - |
| 1.1629 | 11300 | 1.1874 | - | - |
| 1.1681 | 11350 | 1.2373 | - | - |
| 1.1732 | 11400 | 1.2474 | - | - |
| 1.1783 | 11450 | 1.2838 | - | - |
| 1.1835 | 11500 | 1.2242 | - | - |
| 0 | 0 | - | - | 0.9518 |
| 1.1835 | 11500 | - | 0.9927 | - |
| 1.1886 | 11550 | 1.3123 | - | - |
| 1.1938 | 11600 | 1.2874 | - | - |
| 1.1989 | 11650 | 1.2568 | - | - |
| 1.2041 | 11700 | 1.2526 | - | - |
| 1.2092 | 11750 | 1.347 | - | - |
| 0 | 0 | - | - | 0.9509 |
| 1.2092 | 11750 | - | 0.9883 | - |
| 1.2144 | 11800 | 1.3098 | - | - |
| 1.2195 | 11850 | 1.2541 | - | - |
| 1.2247 | 11900 | 1.2791 | - | - |
| 1.2298 | 11950 | 1.2333 | - | - |
| 1.2349 | 12000 | 1.3827 | - | - |
| 0 | 0 | - | - | 0.9507 |
| 1.2349 | 12000 | - | 0.9943 | - |
| 1.2401 | 12050 | 1.2732 | - | - |
| 1.2452 | 12100 | 1.2993 | - | - |
| 1.2504 | 12150 | 1.2947 | - | - |
| 1.2555 | 12200 | 1.3001 | - | - |
| 1.2607 | 12250 | 1.2957 | - | - |
| 0 | 0 | - | - | 0.9514 |
| 1.2607 | 12250 | - | 0.9865 | - |
| 1.2658 | 12300 | 1.1393 | - | - |
| 1.2710 | 12350 | 1.2996 | - | - |
| 1.2761 | 12400 | 1.3218 | - | - |
| 1.2813 | 12450 | 1.2138 | - | - |
| 1.2864 | 12500 | 1.1731 | - | - |
| 0 | 0 | - | - | 0.9510 |
| 1.2864 | 12500 | - | 0.9964 | - |
| 1.2916 | 12550 | 1.3326 | - | - |
| 1.2967 | 12600 | 1.3575 | - | - |
| 1.3018 | 12650 | 1.2948 | - | - |
| 1.3070 | 12700 | 1.2921 | - | - |
| 1.3121 | 12750 | 1.3052 | - | - |
| 0 | 0 | - | - | 0.9509 |
| 1.3121 | 12750 | - | 0.9840 | - |
| 1.3173 | 12800 | 1.3662 | - | - |
| 1.3224 | 12850 | 1.3673 | - | - |
| 1.3276 | 12900 | 1.3006 | - | - |
| 1.3327 | 12950 | 1.4217 | - | - |
| 1.3379 | 13000 | 1.1608 | - | - |
| 0 | 0 | - | - | 0.9520 |
| 1.3379 | 13000 | - | 0.9848 | - |
| 1.3430 | 13050 | 1.2066 | - | - |
| 1.3482 | 13100 | 1.408 | - | - |
| 1.3533 | 13150 | 1.3574 | - | - |
| 1.3584 | 13200 | 1.3171 | - | - |
| 1.3636 | 13250 | 1.3188 | - | - |
| 0 | 0 | - | - | 0.9502 |
| 1.3636 | 13250 | - | 0.9888 | - |
| 1.3687 | 13300 | 1.299 | - | - |
| 1.3739 | 13350 | 1.3015 | - | - |
| 1.3790 | 13400 | 1.3159 | - | - |
| 1.3842 | 13450 | 1.2139 | - | - |
| 1.3893 | 13500 | 1.2855 | - | - |
| 0 | 0 | - | - | 0.9514 |
| 1.3893 | 13500 | - | 0.9957 | - |
| 1.3945 | 13550 | 1.2705 | - | - |
| 1.3996 | 13600 | 1.3099 | - | - |
| 1.4048 | 13650 | 1.3144 | - | - |
| 1.4099 | 13700 | 1.2948 | - | - |
| 1.4150 | 13750 | 1.3313 | - | - |
| 0 | 0 | - | - | 0.9512 |
| 1.4150 | 13750 | - | 0.9910 | - |
| 1.4202 | 13800 | 1.3473 | - | - |
| 1.4253 | 13850 | 1.2037 | - | - |
| 1.4305 | 13900 | 1.3059 | - | - |
| 1.4356 | 13950 | 1.3763 | - | - |
| 1.4408 | 14000 | 1.2606 | - | - |
| 0 | 0 | - | - | 0.9523 |
| 1.4408 | 14000 | - | 0.9876 | - |
| 1.4459 | 14050 | 1.2394 | - | - |
| 1.4511 | 14100 | 1.219 | - | - |
| 1.4562 | 14150 | 1.3501 | - | - |
| 1.4614 | 14200 | 1.2664 | - | - |
| 1.4665 | 14250 | 1.2704 | - | - |
| 0 | 0 | - | - | 0.9513 |
| 1.4665 | 14250 | - | 0.9945 | - |
| 1.4716 | 14300 | 1.2332 | - | - |
| 1.4768 | 14350 | 1.2286 | - | - |
| 1.4819 | 14400 | 1.2123 | - | - |
| 1.4871 | 14450 | 1.2437 | - | - |
| 1.4922 | 14500 | 1.2292 | - | - |
| 0 | 0 | - | - | 0.9502 |
| 1.4922 | 14500 | - | 0.9886 | - |
| 1.4974 | 14550 | 1.3007 | - | - |
| 1.5025 | 14600 | 1.308 | - | - |
| 1.5077 | 14650 | 1.174 | - | - |
| 1.5128 | 14700 | 1.2648 | - | - |
| 1.5180 | 14750 | 1.2533 | - | - |
| 0 | 0 | - | - | 0.9517 |
| 1.5180 | 14750 | - | 0.9885 | - |
| 1.5231 | 14800 | 1.2576 | - | - |
| 1.5282 | 14850 | 1.3659 | - | - |
| 1.5334 | 14900 | 1.298 | - | - |
| 1.5385 | 14950 | 1.2723 | - | - |
| 1.5437 | 15000 | 1.3099 | - | - |
| 0 | 0 | - | - | 0.9518 |
| 1.5437 | 15000 | - | 0.9875 | - |
| 1.5488 | 15050 | 1.2984 | - | - |
| 1.5540 | 15100 | 1.2128 | - | - |
| 1.5591 | 15150 | 1.2689 | - | - |
| 1.5643 | 15200 | 1.2516 | - | - |
| 1.5694 | 15250 | 1.3028 | - | - |
| 0 | 0 | - | - | 0.9523 |
| 1.5694 | 15250 | - | 0.9856 | - |
| 1.5746 | 15300 | 1.3619 | - | - |
| 1.5797 | 15350 | 1.3524 | - | - |
| 1.5849 | 15400 | 1.1749 | - | - |
| 1.5900 | 15450 | 1.205 | - | - |
| 1.5951 | 15500 | 1.297 | - | - |
| 0 | 0 | - | - | 0.9513 |
| 1.5951 | 15500 | - | 0.9780 | - |
| 1.6003 | 15550 | 1.2469 | - | - |
| 1.6054 | 15600 | 1.2285 | - | - |
| 1.6106 | 15650 | 1.2963 | - | - |
| 1.6157 | 15700 | 1.2406 | - | - |
| 1.6209 | 15750 | 1.3049 | - | - |
| 0 | 0 | - | - | 0.9512 |
| 1.6209 | 15750 | - | 0.9873 | - |
| 1.6260 | 15800 | 1.2174 | - | - |
| 1.6312 | 15850 | 1.2789 | - | - |
| 1.6363 | 15900 | 1.289 | - | - |
| 1.6415 | 15950 | 1.3242 | - | - |
| 1.6466 | 16000 | 1.2974 | - | - |
| 0 | 0 | - | - | 0.9522 |
| 1.6466 | 16000 | - | 0.9755 | - |
| 1.6517 | 16050 | 1.2741 | - | - |
| 1.6569 | 16100 | 1.1625 | - | - |
| 1.6620 | 16150 | 1.2795 | - | - |
| 1.6672 | 16200 | 1.2301 | - | - |
| 1.6723 | 16250 | 1.2348 | - | - |
| 0 | 0 | - | - | 0.9528 |
| 1.6723 | 16250 | - | 0.9801 | - |
| 1.6775 | 16300 | 1.2408 | - | - |
| 1.6826 | 16350 | 1.2477 | - | - |
| 1.6878 | 16400 | 1.3386 | - | - |
| 1.6929 | 16450 | 1.2346 | - | - |
| 1.6981 | 16500 | 1.2904 | - | - |
| 0 | 0 | - | - | 0.9520 |
| 1.6981 | 16500 | - | 0.9906 | - |
| 1.7032 | 16550 | 1.2947 | - | - |
| 1.7083 | 16600 | 1.2572 | - | - |
| 1.7135 | 16650 | 1.2738 | - | - |
| 1.7186 | 16700 | 1.2686 | - | - |
| 1.7238 | 16750 | 1.4041 | - | - |
| 0 | 0 | - | - | 0.9528 |
| 1.7238 | 16750 | - | 0.9791 | - |
| 1.7289 | 16800 | 1.2935 | - | - |
| 1.7341 | 16850 | 1.2501 | - | - |
| 1.7392 | 16900 | 1.3208 | - | - |
| 1.7444 | 16950 | 1.2486 | - | - |
| 1.7495 | 17000 | 1.2587 | - | - |
| 0 | 0 | - | - | 0.9520 |
| 1.7495 | 17000 | - | 0.9862 | - |
| 1.7547 | 17050 | 1.3325 | - | - |
| 1.7598 | 17100 | 1.3104 | - | - |
| 1.7649 | 17150 | 1.2504 | - | - |
| 1.7701 | 17200 | 1.3153 | - | - |
| 1.7752 | 17250 | 1.328 | - | - |
| 0 | 0 | - | - | 0.9530 |
| 1.7752 | 17250 | - | 0.9803 | - |
| 1.7804 | 17300 | 1.3417 | - | - |
| 1.7855 | 17350 | 1.2486 | - | - |
| 1.7907 | 17400 | 1.2869 | - | - |
| 1.7958 | 17450 | 1.3599 | - | - |
| 1.8010 | 17500 | 1.2822 | - | - |
| 0 | 0 | - | - | 0.9526 |
| 1.8010 | 17500 | - | 0.9847 | - |
| 1.8061 | 17550 | 1.3001 | - | - |
| 1.8113 | 17600 | 1.0848 | - | - |
| 1.8164 | 17650 | 1.3171 | - | - |
| 1.8215 | 17700 | 1.3387 | - | - |
| 1.8267 | 17750 | 1.2401 | - | - |
| 0 | 0 | - | - | 0.9528 |
| 1.8267 | 17750 | - | 0.9804 | - |
| 1.8318 | 17800 | 1.2979 | - | - |
| 1.8370 | 17850 | 1.2222 | - | - |
| 1.8421 | 17900 | 1.27 | - | - |
| 1.8473 | 17950 | 1.3109 | - | - |
| 1.8524 | 18000 | 1.2306 | - | - |
| 0 | 0 | - | - | 0.9537 |
| 1.8524 | 18000 | - | 0.9876 | - |
| 1.8576 | 18050 | 1.1878 | - | - |
| 1.8627 | 18100 | 1.2398 | - | - |
| 1.8679 | 18150 | 1.2576 | - | - |
| 1.8730 | 18200 | 1.1579 | - | - |
| 1.8782 | 18250 | 1.2889 | - | - |
| 0 | 0 | - | - | 0.9519 |
| 1.8782 | 18250 | - | 0.9859 | - |
| 1.8833 | 18300 | 1.3331 | - | - |
| 1.8884 | 18350 | 1.2957 | - | - |
| 1.8936 | 18400 | 1.2286 | - | - |
| 1.8987 | 18450 | 1.2513 | - | - |
| 1.9039 | 18500 | 1.1702 | - | - |
| 0 | 0 | - | - | 0.9541 |
| 1.9039 | 18500 | - | 0.9840 | - |
| 1.9090 | 18550 | 1.3181 | - | - |
| 1.9142 | 18600 | 1.1976 | - | - |
| 1.9193 | 18650 | 1.3623 | - | - |
| 1.9245 | 18700 | 1.2594 | - | - |
| 1.9296 | 18750 | 1.2902 | - | - |
| 0 | 0 | - | - | 0.9522 |
| 1.9296 | 18750 | - | 0.9844 | - |
| 1.9348 | 18800 | 1.3283 | - | - |
| 1.9399 | 18850 | 1.2987 | - | - |
| 1.9450 | 18900 | 1.1987 | - | - |
| 1.9502 | 18950 | 1.2385 | - | - |
| 1.9553 | 19000 | 1.2772 | - | - |
| 0 | 0 | - | - | 0.9533 |
| 1.9553 | 19000 | - | 0.9861 | - |
| 1.9605 | 19050 | 1.1906 | - | - |
| 1.9656 | 19100 | 1.3041 | - | - |
| 1.9708 | 19150 | 1.2345 | - | - |
| 1.9759 | 19200 | 1.2586 | - | - |
| 1.9811 | 19250 | 1.196 | - | - |
| 0 | 0 | - | - | 0.9522 |
| 1.9811 | 19250 | - | 0.9835 | - |
| 1.9862 | 19300 | 1.2872 | - | - |
| 1.9914 | 19350 | 1.2449 | - | - |
| 1.9965 | 19400 | 1.2435 | - | - |
| 2.0016 | 19450 | 1.3096 | - | - |
| 2.0068 | 19500 | 1.1697 | - | - |
| 0 | 0 | - | - | 0.9514 |
| 2.0068 | 19500 | - | 1.0036 | - |
| 2.0119 | 19550 | 1.0556 | - | - |
| 2.0171 | 19600 | 1.1592 | - | - |
| 2.0222 | 19650 | 1.1808 | - | - |
| 2.0274 | 19700 | 1.141 | - | - |
| 2.0325 | 19750 | 1.1139 | - | - |
| 0 | 0 | - | - | 0.9517 |
| 2.0325 | 19750 | - | 1.0205 | - |
| 2.0377 | 19800 | 1.1959 | - | - |
| 2.0428 | 19850 | 1.0762 | - | - |
| 2.0480 | 19900 | 1.3522 | - | - |
| 2.0531 | 19950 | 1.1175 | - | - |
| 2.0582 | 20000 | 1.178 | - | - |
| 0 | 0 | - | - | 0.9512 |
| 2.0582 | 20000 | - | 1.0184 | - |
| 2.0634 | 20050 | 1.1416 | - | - |
| 2.0685 | 20100 | 1.1523 | - | - |
| 2.0737 | 20150 | 1.2561 | - | - |
| 2.0788 | 20200 | 1.119 | - | - |
| 2.0840 | 20250 | 1.095 | - | - |
| 0 | 0 | - | - | 0.9504 |
| 2.0840 | 20250 | - | 1.0155 | - |
| 2.0891 | 20300 | 1.1432 | - | - |
| 2.0943 | 20350 | 1.1455 | - | - |
| 2.0994 | 20400 | 1.0913 | - | - |
| 2.1046 | 20450 | 1.1671 | - | - |
| 2.1097 | 20500 | 1.2776 | - | - |
| 0 | 0 | - | - | 0.9514 |
| 2.1097 | 20500 | - | 1.0334 | - |
| 2.1149 | 20550 | 1.3092 | - | - |
| 2.1200 | 20600 | 1.1981 | - | - |
| 2.1251 | 20650 | 1.1399 | - | - |
| 2.1303 | 20700 | 1.0976 | - | - |
| 2.1354 | 20750 | 1.1335 | - | - |
| 0 | 0 | - | - | 0.9518 |
| 2.1354 | 20750 | - | 1.0136 | - |
| 2.1406 | 20800 | 1.1567 | - | - |
| 2.1457 | 20850 | 1.2536 | - | - |
| 2.1509 | 20900 | 1.1717 | - | - |
| 2.1560 | 20950 | 1.1433 | - | - |
| 2.1612 | 21000 | 1.1885 | - | - |
| 0 | 0 | - | - | 0.9512 |
| 2.1612 | 21000 | - | 1.0185 | - |
| 2.1663 | 21050 | 1.0543 | - | - |
| 2.1715 | 21100 | 1.1122 | - | - |
| 2.1766 | 21150 | 1.17 | - | - |
| 2.1817 | 21200 | 1.0757 | - | - |
| 2.1869 | 21250 | 1.3008 | - | - |
| 0 | 0 | - | - | 0.9506 |
| 2.1869 | 21250 | - | 1.0161 | - |
| 2.1920 | 21300 | 1.1723 | - | - |
| 2.1972 | 21350 | 1.2517 | - | - |
| 2.2023 | 21400 | 1.1834 | - | - |
| 2.2075 | 21450 | 1.1284 | - | - |
| 2.2126 | 21500 | 1.28 | - | - |
| 0 | 0 | - | - | 0.9507 |
| 2.2126 | 21500 | - | 1.0217 | - |
| 2.2178 | 21550 | 1.2478 | - | - |
| 2.2229 | 21600 | 1.1798 | - | - |
| 2.2281 | 21650 | 1.1218 | - | - |
| 2.2332 | 21700 | 1.2787 | - | - |
| 2.2383 | 21750 | 1.1254 | - | - |
| 0 | 0 | - | - | 0.9508 |
| 2.2383 | 21750 | - | 1.0312 | - |
| 2.2435 | 21800 | 1.2375 | - | - |
| 2.2486 | 21850 | 1.1074 | - | - |
| 2.2538 | 21900 | 1.0927 | - | - |
| 2.2589 | 21950 | 1.1691 | - | - |
| 2.2641 | 22000 | 1.1703 | - | - |
| 0 | 0 | - | - | 0.9499 |
| 2.2641 | 22000 | - | 1.0275 | - |
| 2.2692 | 22050 | 1.2158 | - | - |
| 2.2744 | 22100 | 1.1026 | - | - |
| 2.2795 | 22150 | 1.0644 | - | - |
| 2.2847 | 22200 | 1.1092 | - | - |
| 2.2898 | 22250 | 1.1686 | - | - |
| 0 | 0 | - | - | 0.9512 |
| 2.2898 | 22250 | - | 1.0343 | - |
| 2.2949 | 22300 | 1.2711 | - | - |
| 2.3001 | 22350 | 1.2942 | - | - |
| 2.3052 | 22400 | 1.2073 | - | - |
| 2.3104 | 22450 | 1.2131 | - | - |
| 2.3155 | 22500 | 1.1445 | - | - |
| 0 | 0 | - | - | 0.9517 |
| 2.3155 | 22500 | - | 1.0128 | - |
| 2.3207 | 22550 | 1.1553 | - | - |
| 2.3258 | 22600 | 1.1512 | - | - |
| 2.3310 | 22650 | 1.2069 | - | - |
| 2.3361 | 22700 | 1.1345 | - | - |
| 2.3413 | 22750 | 1.1681 | - | - |
| 0 | 0 | - | - | 0.9509 |
| 2.3413 | 22750 | - | 1.0101 | - |
| 2.3464 | 22800 | 1.1372 | - | - |
| 2.3515 | 22850 | 1.1393 | - | - |
| 2.3567 | 22900 | 1.1327 | - | - |
| 2.3618 | 22950 | 1.0903 | - | - |
| 2.3670 | 23000 | 1.1354 | - | - |
| 0 | 0 | - | - | 0.9513 |
| 2.3670 | 23000 | - | 1.0173 | - |
| 2.3721 | 23050 | 1.2517 | - | - |
| 2.3773 | 23100 | 1.0634 | - | - |
| 2.3824 | 23150 | 1.2095 | - | - |
| 2.3876 | 23200 | 1.1686 | - | - |
| 2.3927 | 23250 | 1.1063 | - | - |
| 0 | 0 | - | - | 0.9517 |
| 2.3927 | 23250 | - | 1.0243 | - |
| 2.3979 | 23300 | 1.1309 | - | - |
| 2.4030 | 23350 | 1.1869 | - | - |
| 2.4082 | 23400 | 1.1743 | - | - |
| 2.4133 | 23450 | 1.1001 | - | - |
| 2.4184 | 23500 | 1.1696 | - | - |
| 0 | 0 | - | - | 0.9525 |
| 2.4184 | 23500 | - | 1.0315 | - |
| 2.4236 | 23550 | 1.1493 | - | - |
| 2.4287 | 23600 | 1.1486 | - | - |
| 2.4339 | 23650 | 1.2302 | - | - |
| 2.4390 | 23700 | 1.1427 | - | - |
| 2.4442 | 23750 | 1.2123 | - | - |
| 0 | 0 | - | - | 0.9510 |
| 2.4442 | 23750 | - | 1.0297 | - |
| 2.4493 | 23800 | 1.1169 | - | - |
| 2.4545 | 23850 | 1.1688 | - | - |
| 2.4596 | 23900 | 1.0506 | - | - |
| 2.4648 | 23950 | 1.1965 | - | - |
| 2.4699 | 24000 | 1.1253 | - | - |
| 0 | 0 | - | - | 0.9508 |
| 2.4699 | 24000 | - | 1.0238 | - |
| 2.4750 | 24050 | 1.1957 | - | - |
| 2.4802 | 24100 | 1.1395 | - | - |
| 2.4853 | 24150 | 1.1238 | - | - |
| 2.4905 | 24200 | 1.1342 | - | - |
| 2.4956 | 24250 | 1.1703 | - | - |
| 0 | 0 | - | - | 0.9506 |
| 2.4956 | 24250 | - | 1.0219 | - |
| 2.5008 | 24300 | 1.0947 | - | - |
| 2.5059 | 24350 | 1.1281 | - | - |
| 2.5111 | 24400 | 1.1029 | - | - |
| 2.5162 | 24450 | 1.1784 | - | - |
| 2.5214 | 24500 | 1.101 | - | - |
| 0 | 0 | - | - | 0.9528 |
| 2.5214 | 24500 | - | 1.0267 | - |
| 2.5265 | 24550 | 1.1231 | - | - |
| 2.5316 | 24600 | 1.1364 | - | - |
| 2.5368 | 24650 | 1.1778 | - | - |
| 2.5419 | 24700 | 1.1089 | - | - |
| 2.5471 | 24750 | 1.1626 | - | - |
| 0 | 0 | - | - | 0.9508 |
| 2.5471 | 24750 | - | 1.0254 | - |
| 2.5522 | 24800 | 1.2019 | - | - |
| 2.5574 | 24850 | 1.1503 | - | - |
| 2.5625 | 24900 | 1.1697 | - | - |
| 2.5677 | 24950 | 1.0921 | - | - |
| 2.5728 | 25000 | 1.3136 | - | - |
| 0 | 0 | - | - | 0.9513 |
| 2.5728 | 25000 | - | 1.0222 | - |
</details>
### Framework Versions
- Python: 3.12.4
- Sentence Transformers: 4.0.2
- PyLate: 1.2.0
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
author={Chaffin, Antoine and Sourty, Raphaël},
url={https://github.com/lightonai/pylate},
year={2024}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Riyan123/Llama-3.2-3B-it-chat-fintuned
|
Riyan123
| 2025-06-20T08:10:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T05:41:59Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Riyan123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
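A minimal inference sketch (not from the author), assuming the fine-tuned model keeps the standard Llama 3.2 chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Riyan123/Llama-3.2-3B-it-chat-fintuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt and generate a reply.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```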
|
OlivierMsr/Clone_Video_OMsr
|
OlivierMsr
| 2025-06-20T08:08:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T07:46:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Olivier
---
# Clone_Video_Omsr
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Olivier` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Olivier",
"lora_weights": "https://huggingface.co/OlivierMsr/Clone_Video_OMsr/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('OlivierMsr/Clone_Video_OMsr', weight_name='lora.safetensors')
image = pipeline('Olivier').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/OlivierMsr/Clone_Video_OMsr/discussions) to add images that show off what you’ve made with this LoRA.
|
hf-100/Jamba-1.6-Large-Spellbound-StoryWriter-instruct-0.3-chkpt-96
|
hf-100
| 2025-06-20T08:07:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ai21labs/AI21-Jamba-Large-1.6",
"base_model:adapter:ai21labs/AI21-Jamba-Large-1.6",
"region:us"
] | null | 2025-06-20T08:03:15Z |
---
base_model: ai21labs/AI21-Jamba-Large-1.6
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
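In the absence of author-provided instructions, a minimal sketch assuming standard PEFT adapter loading on top of the stated base model (note that Jamba Large is very large and requires substantial GPU memory):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "ai21labs/AI21-Jamba-Large-1.6"
adapter_id = "hf-100/Jamba-1.6-Large-Spellbound-StoryWriter-instruct-0.3-chkpt-96"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter weights
```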
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_black_square-bs1-steps600-lr1e-04
|
Josephinepassananti
| 2025-06-20T08:06:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T05:13:13Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_black_square-bs1-steps600-lr1e-04
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the None dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
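Pending the author's snippet above, a minimal sketch assuming standard diffusers LoRA loading; the prompt is a placeholder:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the LoRA adapter weights.
pipeline = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.1_target_black_square-bs1-steps600-lr1e-04"
)
image = pipeline("a portrait photo").images[0]  # placeholder prompt
```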
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
phospho-app/OpenLabBA-ACT_BBOX-lego_in_box_v6-nqh8l
|
phospho-app
| 2025-06-20T08:05:33Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-20T07:41:47Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/lego_in_box_v6_bboxes](https://huggingface.co/datasets/phospho-app/lego_in_box_v6_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
kaiserbuffle/connect4_epitech
|
kaiserbuffle
| 2025-06-20T08:04:01Z | 0 | 0 | null |
[
"LeRobot",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T08:03:11Z |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so101_follower",
"total_episodes": 1,
"total_frames": 1022,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.robot": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
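A minimal loading sketch (not part of the original card), assuming a recent `lerobot` release where `LeRobotDataset` can pull a dataset by repo id:
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("kaiserbuffle/connect4_epitech")
frame = dataset[0]  # dict with "action", "observation.state", and camera frames
print(dataset.fps)  # 30, per meta/info.json above
```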
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
lostinjamal/3cb13c83-60cb-4a66-a135-260d6369e792
|
lostinjamal
| 2025-06-20T08:00:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T07:54:33Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thanhtlx/text_classification_2
|
thanhtlx
| 2025-06-20T07:59:18Z | 0 | 0 | null |
[
"safetensors",
"roberta",
"region:us"
] | null | 2025-06-20T04:14:43Z |
# Institution Name Normalization Model
## 📝 Description
This is a text classification model trained to normalize institution names that are written in many different ways in scientific papers.
The model recognizes institution names (which may differ in language, abbreviation, or formatting) and maps them to a single normalized name.
Examples:
- "VNU University of Engineering and Technology" → `uet`
- "University of Science, Vietnam National University Hanoi" → `hus`
The model uses a BERT architecture and was trained on a dataset of institution names drawn from scientific papers.
## 📌 Model details
| Model name | Architecture | Main function |
|------------------------------------------|-----------|---------------------------------------------------|
| `thanhtlx/text_classification_2` | RobertaForSequenceClassification | Normalizes an institution name into one of 12 standard labels |
## 📥 Input
- **Format**: An institution name as a text string (in English, Vietnamese, or abbreviated form)
- **Data type**: Text string (`str`)
- **Processing**: The text is encoded by the model's tokenizer (automatically truncated and padded if needed)
**Example inputs**:
- "VNU University of Engineering and Technology"
- "ĐH Bách Khoa Hà Nội"
- "University of Science, Vietnam National University Hanoi"
## 📤 Output
- A code corresponding to the normalized institution name:
  - `smp`: School of Medicine and Pharmacy
  - `ueb`: University of Economics and Business
  - `ussh`: University of Social Sciences and Humanities
  - `hus`: University of Science
  - `hsb`: Hanoi School of Business
  - `law`: School of Law
  - `ulis`: University of Languages and International Studies
  - `vju`: Vietnam Japan University
  - `uet`: University of Engineering and Technology
  - `ued`: University of Education
  - `is`: International School
  - `iti`: Information Technology Institute
  - `other`: Others
**Examples**:
- "VNU University of Engineering and Technology" → `uet`
- "University of Science, Vietnam National University Hanoi" → `hus`
## 🧪 Using the model
Below is sample code for using the model:
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("thanhtlx/text_classification_2")
model = TFAutoModelForSequenceClassification.from_pretrained("thanhtlx/text_classification_2")
# The institution name to normalize
text = "VNU University of Engineering and Technology"
# Encode the text
inputs = tokenizer(text, return_tensors="tf", truncation=True, padding=True)
# Predict
outputs = model(**inputs)
logits = outputs.logits
# Convert logits to probabilities
probs = tf.nn.softmax(logits, axis=1)
# Take the label with the highest probability
predicted_label = tf.argmax(probs, axis=1).numpy()[0]
print(f"Predicted label: {model.config.id2label[predicted_label]}")
```
|
Yojirex/lora_model
|
Yojirex
| 2025-06-20T07:59:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T07:32:08Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Yojirex
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
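A minimal loading sketch (not from the author), assuming the repository stores Unsloth LoRA weights on top of the base model above:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Yojirex/lora_model",  # LoRA adapter repo
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode
```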
|
Tongyi-ConvAI/OpenOmni
|
Tongyi-ConvAI
| 2025-06-20T07:58:56Z | 0 | 2 | null |
[
"safetensors",
"arxiv:2501.04561",
"license:apache-2.0",
"region:us"
] | null | 2025-01-09T12:19:14Z |
---
license: apache-2.0
---
This repository contains the model presented in [OpenOmni: Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Real-Time Self-Aware Emotional Speech Synthesis](https://huggingface.co/papers/2501.04561).
Project page: https://github.com/RainBowLuoCS/OpenOmni
|
yuto-urushima/my_gemma2_pt
|
yuto-urushima
| 2025-06-20T07:56:26Z | 3 | 0 |
keras-hub
|
[
"keras-hub",
"text-generation",
"region:us"
] |
text-generation
| 2025-06-12T08:02:44Z |
---
library_name: keras-hub
pipeline_tag: text-generation
---
This is a [`Gemma` model](https://keras.io/api/keras_hub/models/gemma) uploaded using the KerasHub library and can be used with JAX, TensorFlow, and PyTorch backends.
This model is related to a `CausalLM` task.
Model config:
* **name:** gemma_backbone
* **trainable:** True
* **vocabulary_size:** 256000
* **num_layers:** 26
* **num_query_heads:** 8
* **num_key_value_heads:** 4
* **hidden_dim:** 2304
* **intermediate_dim:** 18432
* **head_dim:** 256
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0
* **query_head_dim_normalize:** True
* **use_post_ffw_norm:** True
* **use_post_attention_norm:** True
* **final_logit_soft_cap:** 30.0
* **attention_logit_soft_cap:** 50.0
* **sliding_window_size:** 4096
* **use_sliding_window_attention:** True
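As a quick-start sketch (assuming the `keras_hub` package is installed and that this repository loads through KerasHub's `hf://` preset scheme; the prompt and `max_length` are illustrative):
```python
import keras_hub

# Load the Gemma CausalLM directly from this Hugging Face repo.
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://yuto-urushima/my_gemma2_pt")

# Generate a short completion; max_length caps the total token count.
print(gemma_lm.generate("What is Keras?", max_length=64))
```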
This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
|
MoxStone/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned
|
MoxStone
| 2025-06-20T07:56:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T06:00:34Z |
---
license: mit
base_model:
- Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
---
## What is SmaliLLM used for
SmaliLLM is a large language model designed to decompile Smali code into Java code. Reconstructing Smali language representations into high-level languages such as Java holds significant practical engineering value. This transformation not only lowers the technical barrier for reverse engineering but also provides the necessary semantic foundation for subsequent tasks such as static analysis and vulnerability detection.
## SmaliLLM Highlights
SmaliLLM is a series of models fine-tuned on nearly 1,000 "Smali2Java" samples, based on Qwen3, Qwen2.5-Coder, and Gemma3, with the following features:
- **High Compilation Success Rate** After our fine-tuning, the model’s compilation success rate increased by an average of 20%. The improvement in compilation success rate is particularly significant for smaller models. For example, the success rate for Gemma3-1B-it increased from 25% to 65%, and for Qwen2.5-Coder-0.5B, it rose from 15% to 45%.
- **High Quality of the Generated Java Code** After fine-tuning, the model’s average CodeBLEU score improved by 0.08. The improvement in CodeBLEU is especially notable for smaller models. Specifically, under the base models Gemma3-4B-it, Qwen2.5-Coder-0.5B-Instruct, Qwen3-0.6B, and Qwen3-4B, the CodeBLEU scores increased by 0.17, 0.14, 0.10, and 0.14 respectively.
- **Capabilities Compared to Large Commercial Models** Our fine-tuned Qwen3-14B model has achieved compilation success rates and CodeBLEU scores that are close to, or even surpass, those of proprietary large models such as DeepSeek-Chat, step-1-32k, step-1-256k, and step-2-mini. This was achieved even though the model was undertrained: our batch size was only 2048, which forced us to discard nearly half of the data.
## Quickstart
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MoxStone/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Smali Code You Want to Decompile"
messages = [
{"role":"system", "content": "Decompile following smali code to java code."}
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")
print("Java code:", content)
```
|
BCCard/Qwen3-30B-A3B-FP8-Dynamic
|
BCCard
| 2025-06-20T07:47:48Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"FP8",
"conversational",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-05-08T07:39:17Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-30B-A3B
tags:
- neuralmagic
- redhat
- llmcompressor
- quantized
- FP8
---
# Qwen3-30B-A3B-FP8-dynamic
## Model Overview
- **Model Architecture:** Qwen3MoeForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Intended Use Cases:**
- Reasoning.
- Function calling.
- Subject matter experts via fine-tuning.
- Multilingual instruction following.
- Translation.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 05/05/2025
- **Version:** 1.0
- **Model Developers:** BC Card
### Model Optimizations
This model was obtained by quantizing activations and weights of [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformers blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "BCCard/Qwen3-30B-A3B-FP8-Dynamic"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, min_p=0, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Creation
<details>
<summary>Creation details</summary>
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model
model_stub = "Qwen/Qwen3-30B-A3B"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(model_stub)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
ignore=["lm_head"],
targets="Linear",
scheme="FP8_dynamic",
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
The model was evaluated on the OpenLLM leaderboard tasks (version 1), using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [vLLM](https://docs.vllm.ai/en/stable/).
<details>
<summary>Evaluation details</summary>
```
lm_eval \
--model vllm \
--model_args pretrained="BCCard/Qwen3-30B-A3B-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=1 \
--tasks openllm \
--apply_chat_template \
--fewshot_as_multiturn \
--batch_size auto
```
</details>
### Accuracy
<table>
<tr>
<th>Category
</th>
<th>Benchmark
</th>
<th>Qwen3-30B-A3B
</th>
<th>Qwen3-30B-A3B-FP8-dynamic<br>(this model)
</th>
<th>Recovery
</th>
</tr>
<tr>
<td rowspan="7" ><strong>OpenLLM v1</strong>
</td>
<td>MMLU (5-shot)
</td>
<td>77.67
</td>
<td>77.49
</td>
<td>99.8%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>63.40
</td>
<td>63.65
</td>
<td>100.4%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>87.26
</td>
<td>86.73
</td>
<td>99.4%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>54.33
</td>
<td>54.33
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>66.77
</td>
<td>66.30
</td>
<td>99.3%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>56.27
</td>
<td>56.88
</td>
<td>101.1%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>67.62</strong>
</td>
<td><strong>67.56</strong>
</td>
<td><strong>99.9%</strong>
</td>
</tr>
</table>
|
BCCard/Qwen2.5-Coder-32B-Instruct-FP8-Dynamic
|
BCCard
| 2025-06-20T07:44:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-06-19T23:39:04Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-32B-Instruct-FP8-Dynamic
## Model Overview
- **Model Architecture:** Qwen2.5-Coder-32B-Instruct
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** BC Card
Quantized version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) to FP8 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
    model="BCCard/Qwen2.5-Coder-32B-Instruct-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)
# prepare inputs (Qwen instruct models use the ChatML prompt format)
question = "What is the code for python hello world?"
inputs = {
    "prompt": f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n",
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Qwen2.5-Coder Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes, 0.5, 1.5, 3, 7, 14, and 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 32B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or use this model for fill-in-the-middle tasks.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code of Qwen2.5-Coder has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
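As a rough sketch of the same override with vLLM's offline API (the `rope_scaling` engine argument here mirrors the `config.json` block above; whether your installed vLLM version accepts this argument, and the exact key names it expects, are assumptions you should verify against your version):
```python
from vllm import LLM, SamplingParams

# Assumed: pass the YaRN settings at engine construction instead of editing
# config.json; the dict mirrors the "rope_scaling" block shown above.
llm = LLM(
    model="BCCard/Qwen2.5-Coder-32B-Instruct-FP8-Dynamic",
    max_model_len=131072,
    rope_scaling={
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
        "type": "yarn",
    },
)
params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["Write a Python function that merges two sorted lists."], params)
print(outputs[0].outputs[0].text)
```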
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
MoxStone/SmaliLLM-Qwen3-4B-Finetuned
|
MoxStone
| 2025-06-20T07:41:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"code",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T05:56:32Z |
---
license: mit
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
---
## What is SmaliLLM used for
SmaliLLM is a large language model designed to decompile Smali code into Java code. Reconstructing Smali language representations into high-level languages such as Java holds significant practical engineering value. This transformation not only lowers the technical barrier for reverse engineering but also provides the necessary semantic foundation for subsequent tasks such as static analysis and vulnerability detection.
## SmaliLLM Highlights
SmaliLLM is a series of models fine-tuned on nearly 1,000 "Smali2Java" samples, based on Qwen3, Qwen2.5-Coder, and Gemma3, with the following features:
- **High Compilation Success Rate** After our fine-tuning, the model’s compilation success rate increased by an average of 20%. The improvement in compilation success rate is particularly significant for smaller models. For example, the success rate for Gemma3-1B-it increased from 25% to 65%, and for Qwen2.5-Coder-0.5B, it rose from 15% to 45%.
- **High Quality of the Generated Java Code** After fine-tuning, the model’s average CodeBLEU score improved by 0.08. The improvement in CodeBLEU is especially notable for smaller models. Specifically, under the base models Gemma3-4B-it, Qwen2.5-Coder-0.5B-Instruct, Qwen3-0.6B, and Qwen3-4B, the CodeBLEU scores increased by 0.17, 0.14, 0.10, and 0.14 respectively.
- **Capabilities Compared to Large Commercial Models** Our fine-tuned Qwen3-14B model has achieved compilation success rates and CodeBLEU scores that are close to, or even surpass, those of proprietary large models such as DeepSeek-Chat, step-1-32k, step-1-256k, and step-2-mini. This was achieved even though the model was undertrained: our batch size was only 2048, which forced us to discard nearly half of the data.
## Quickstart
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MoxStone/SmaliLLM-Qwen3-4B-Finetuned"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Smali Code You Want to Decompile"
messages = [
{"role":"system", "content": "Decompile following smali code to java code."}
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # In the Qwen3 base model, we use the non-thinking mode to decompile Smali code.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=6144
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("Java code:", content)
```
|
videoloc/seamless-translation
|
videoloc
| 2025-06-20T07:41:49Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"seamless_translation",
"audio",
"text",
"multimodal",
"seamless",
"subtitle-editing-time-prediction",
"translation-aware",
"multilingual",
"base_model:facebook/hf-seamless-m4t-medium",
"base_model:finetune:facebook/hf-seamless-m4t-medium",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:32:21Z |
---
language:
- multilingual
tags:
- audio
- text
- multimodal
- seamless
- subtitle-editing-time-prediction
- translation-aware
library_name: transformers
base_model: facebook/hf-seamless-m4t-medium
license: cc-by-nc-4.0
---
# videoloc/seamless-translation
## Model Description
This is a **SeamlessTranslation** model that processes audio and text inputs with translation awareness to predict **Time To Edit (TTE)** for subtitle segments. Given an audio segment and its corresponding subtitle text, the model predicts how much time (in seconds) would be required to edit/refine that subtitle segment, while taking into account whether the subtitle is a translation or original content.
The model extends the basic SeamlessM4T architecture with a translation feature that helps distinguish between original and translated subtitle content, improving TTE prediction accuracy across 5 languages: **English, French, Spanish, Italian, and German** with various translation pairs between them.
### Key Features
- **Translation-Aware Processing**: Distinguishes between original and translated content
- **Multimodal Processing**: Simultaneously processes audio (16kHz) and text inputs
- **Frozen Encoders**: Uses pre-trained SeamlessM4T encoders (frozen for stability)
- **Enhanced Architecture**: Adds translation embedding to basic model
- **TTE Prediction**: Predicts editing time required for subtitle segments
- **Direct Output**: Raw time values in seconds for immediate use
## Model Architecture
The model extends the basic SeamlessM4T architecture with translation awareness:
1. **Audio Processing**:
- SeamlessM4T speech encoder (frozen) processes raw audio input
- Audio projection layer maps speech encoder output to 1024 dimensions
- Mean pooling over sequence length to get fixed-size audio embedding
2. **Text Processing**:
- SeamlessM4T text encoder (frozen) processes tokenized text input
- Text projection layer maps text encoder output to 1024 dimensions
- Mean pooling over sequence length to get fixed-size text embedding
3. **Translation Feature Processing**:
- Binary translation flag (0/1) indicating original vs translated content
- Translation projection layer maps binary input to 64 dimensions
- Learned embedding helps model distinguish translation effects
4. **Feature Fusion**:
- Audio, text, and translation embeddings are concatenated (2112 total dimensions)
- Simple concatenation without complex cross-modal interactions
5. **Regression Head**:
- Multi-layer perceptron: 2112 → 1024 → 512 → 256 → 1
- ReLU activations and dropout for regularization
- Single output for TTE prediction (regression, in seconds)
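For intuition, here is a minimal PyTorch sketch of steps 3-5 (translation embedding, concatenation, and the regression head). The dimensions follow the list above, but the module names and exact layer layout are assumptions, not the repository's actual code:
```python
import torch
import torch.nn as nn

class TranslationAwareHead(nn.Module):
    """Sketch of the fusion + regression stages described above (not the repo's exact code)."""
    def __init__(self, hidden=1024, trans_dim=64):
        super().__init__()
        # Binary translation flag (0/1) -> 64-dim learned embedding
        self.trans_proj = nn.Linear(1, trans_dim)
        # 1024 (audio) + 1024 (text) + 64 (translation) = 2112 -> 1024 -> 512 -> 256 -> 1
        self.regressor = nn.Sequential(
            nn.Linear(hidden * 2 + trans_dim, 1024), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(1024, 512), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, audio_emb, text_emb, is_translation):
        # audio_emb, text_emb: (batch, 1024) mean-pooled encoder outputs
        trans_emb = self.trans_proj(is_translation.float().unsqueeze(-1))
        fused = torch.cat([audio_emb, text_emb, trans_emb], dim=-1)  # (batch, 2112)
        return self.regressor(fused)  # TTE prediction in seconds

head = TranslationAwareHead()
tte = head(torch.randn(2, 1024), torch.randn(2, 1024), torch.tensor([0, 1]))
print(tte.shape)  # torch.Size([2, 1])
```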
## Quick Start
### Installation
```bash
pip install transformers torch torchaudio huggingface_hub
```
### Basic Usage
```python
from transformers import AutoModel, AutoConfig
from huggingface_hub import hf_hub_download
import torch
import numpy as np
import importlib.util
# Load model - custom architecture requires importing the model class
model_files = hf_hub_download(repo_id="videoloc/seamless-translation", filename="modeling_seamless_translation.py")
spec = importlib.util.spec_from_file_location("modeling_seamless_translation", model_files)
modeling_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(modeling_module)
# Now load the model using the custom class
config = modeling_module.SeamlessTranslationConfig.from_pretrained("videoloc/seamless-translation")
model = modeling_module.HFSeamlessTranslation.from_pretrained("videoloc/seamless-translation")
# Load the data collator (included in this repo)
collator_file = hf_hub_download(repo_id="videoloc/seamless-translation", filename="data_collator.py")
spec = importlib.util.spec_from_file_location("data_collator", collator_file)
collator_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(collator_module)
# Initialize data collator
data_collator = collator_module.DataCollatorSimpleSeamless(
processor="facebook/hf-seamless-m4t-medium",
max_audio_length_sec=8.0,
max_text_length=256
)
# Prepare your data with translation information
your_data = [
{
'raw_audio': np.random.randn(16000 * 5), # 5 seconds at 16kHz
'raw_text': "Your subtitle text here",
'is_translation': 1, # 1 for translated content, 0 for original
}
]
# Process and run inference
batch = data_collator(your_data)
model.eval()
with torch.no_grad():
outputs = model(**batch)
tte_prediction = outputs.logits.item()
print(f"Predicted Time To Edit (TTE): {tte_prediction:.2f} seconds")
```
## Model Details
- **Base Model**: SeamlessM4T (facebook/hf-seamless-m4t-medium)
- **Audio Encoder**: Frozen SeamlessM4T speech encoder
- **Text Encoder**: Frozen SeamlessM4T text encoder
- **Hidden Size**: 1024
- **Translation Embedding**: 64 dimensions
- **Audio Input**: 16kHz
- **Translation Input**: Binary flag (0/1)
- **Output**: Single regression value (TTE in seconds)
- **Task**: Subtitle editing time prediction
## Data Format
Your input data should be a list of dictionaries with:
- `raw_audio`: NumPy array of audio samples (16kHz sampling rate)
- `raw_text`: String of subtitle text
- `is_translation`: Binary flag (1 for translated, 0 for original content)
- `labels`: Target TTE values in seconds (optional, for training)
Example:
```python
data = [
{
'raw_audio': audio_samples, # shape: (num_samples,) at 16kHz
'raw_text': "Subtitle text content",
'is_translation': 1, # 1 = translated, 0 = original
'labels': 2.5 # optional TTE target value in seconds
}
]
```
## Performance Metrics
- **Best Eval RMSE**: 33.34
## Training Details
- **Base Model**: facebook/hf-seamless-m4t-medium
- **Model Type**: seamless_with_translation
- **Epochs**: 10
- **Batch Size (Train)**: 32
- **Batch Size (Eval)**: 64
- **Learning Rate**: 1.2e-4
- **LR Scheduler**: cosine_with_restarts
- **Warmup Ratio**: 0.05
- **Weight Decay**: 0.001
- **Optimizer**: AdamW (torch)
- **Max Grad Norm**: 1.0
- **FP16**: True
- **Early Stopping Patience**: 5
- **Audio Max Length**: 8.0 seconds
- **Text Max Length**: 256 tokens
- **Sample Rate**: 16kHz
- **Translation Feature**: Binary flag (0/1)
- **Normalization**: None (raw values)
- **Dataset Split**: 80/20 train/test
- **Random Seed**: 42
- **Metric**: RMSE (lower is better)
## Training Configuration
The model was trained with the following specifications:
- **Dataset**: Multimodal audio-subtitle pairs with translation annotations (5 languages: EN, FR, ES, IT, DE)
- **Train/Test Split**: 80/20 with random seed 42
- **Audio Processing**: 16kHz sampling, max 8.0 seconds, no offset
- **Text Processing**: Max 256 tokens
- **Translation Feature**: Binary flag indicating original vs translated content
- **Normalization**: None (raw TTE values in seconds)
- **Caching**: Audio segments cached and compressed for efficiency
## Usage Notes
- This is the **translation-aware** variant - includes translation features
- For basic model without translation features, see `seamless-basic`
- For language pair embeddings, see `seamless-langpairs`
- Model expects 16kHz audio input (automatically resampled by data collator)
- Translation flag significantly impacts predictions
- No feature normalization applied - outputs raw TTE predictions in seconds
- Optimized for subtitle editing time estimation tasks
## Limitations
- Requires translation annotation in training data
- Designed for TTE prediction, not general audio-text matching
- Performance may vary on out-of-domain content
- Requires specific data preprocessing (use included data collator)
## Related Models
- **[seamless-basic](https://huggingface.co/videoloc/seamless-basic)**: Basic audio+text model without translation features
- **[seamless-langpairs](https://huggingface.co/videoloc/seamless-langpairs)**: Includes language pair embeddings for fine-grained multilingual control
- **[seamless-crossattention](https://huggingface.co/videoloc/seamless-crossattention)**: Advanced cross-modal attention mechanisms for sophisticated audio-text interactions
|
videoloc/seamless-crossattention
|
videoloc
| 2025-06-20T07:40:41Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"seamless_crossattention",
"audio",
"text",
"multimodal",
"seamless",
"subtitle-editing-time-prediction",
"cross-attention",
"attention-mechanism",
"multilingual",
"base_model:facebook/hf-seamless-m4t-medium",
"base_model:finetune:facebook/hf-seamless-m4t-medium",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T10:31:53Z |
---
language:
- multilingual
tags:
- audio
- text
- multimodal
- seamless
- subtitle-editing-time-prediction
- cross-attention
- attention-mechanism
library_name: transformers
base_model: facebook/hf-seamless-m4t-medium
license: cc-by-nc-4.0
---
# videoloc/seamless-crossattention
## Model Description
This is a **SeamlessCrossAttention** model that processes audio and text inputs with advanced cross-modal attention mechanisms to predict **Time To Edit (TTE)** for subtitle segments. Given an audio segment and its corresponding subtitle text, the model predicts how much time (in seconds) would be required to edit/refine that subtitle segment, leveraging sophisticated cross-attention patterns between audio and text modalities.
The model extends the SeamlessM4T architecture with bidirectional cross-attention layers that allow audio and text representations to attend to each other, creating rich cross-modal embeddings that capture temporal and semantic relationships across 5 languages: **English, French, Spanish, Italian, and German**.
### Key Features
- **Cross-Modal Attention**: Bidirectional attention between audio and text representations
- **Advanced Architecture**: Audio-to-text and text-to-audio attention mechanisms
- **Scalar Mixing**: Learnable combination of global and attended embeddings
- **Embedding Regularization**: Optional L2 regularization for embedding stability
- **Multimodal Processing**: Simultaneously processes audio (16kHz) and text inputs
- **Frozen Encoders**: Uses pre-trained SeamlessM4T encoders (frozen for stability)
- **TTE Prediction**: Predicts editing time required for subtitle segments
- **Direct Output**: Raw time values in seconds for immediate use
## Model Architecture
The model implements sophisticated cross-modal attention mechanisms:
1. **Audio Processing**:
- SeamlessM4T speech encoder (frozen) processes raw audio input
- Audio projection layer maps speech encoder output to 1024 dimensions
- Layer normalization for stability
2. **Text Processing**:
- SeamlessM4T text encoder (frozen) processes tokenized text input
- Text projection layer maps text encoder output to 1024 dimensions
- Layer normalization for stability
3. **Cross-Modal Attention**:
- **Audio-to-Text Attention**: Each audio token attends to all text tokens
- **Text-to-Audio Attention**: Each text token attends to all audio tokens
- Multi-head attention (8 heads) with dropout for regularization
- Bidirectional information flow between modalities
4. **Feature Fusion**:
- Global pooling of original audio and text embeddings
- Global pooling of cross-attended embeddings
- Scalar mixing layer combines all four embeddings with learnable weights
- Final embedding captures both global and cross-modal patterns
5. **Regression Head**:
- Multi-layer perceptron: 1024 → 512 → 256 → 1
- ReLU activations and dropout for regularization
- Single output for TTE prediction (regression, in seconds)
6. **Optional Regularization**:
- L2 regularization on embedding norms for training stability
- Configurable regularization strength
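For intuition, here is a minimal PyTorch sketch of steps 3 and 4 (bidirectional cross-attention and scalar mixing). Dimensions and head count follow the list above; the module names and the softmax-weighted mixing formulation are assumptions, not the repository's actual code:
```python
import torch
import torch.nn as nn

class CrossModalMixer(nn.Module):
    """Sketch of bidirectional cross-attention + scalar mixing (not the repo's exact code)."""
    def __init__(self, hidden=1024, heads=8):
        super().__init__()
        self.audio_to_text = nn.MultiheadAttention(hidden, heads, dropout=0.1, batch_first=True)
        self.text_to_audio = nn.MultiheadAttention(hidden, heads, dropout=0.1, batch_first=True)
        # Learnable scalar weights over the 4 pooled embeddings
        self.mix_weights = nn.Parameter(torch.zeros(4))

    def forward(self, audio, text):
        # audio: (batch, T_audio, 1024), text: (batch, T_text, 1024)
        a2t, _ = self.audio_to_text(audio, text, text)    # audio tokens attend to text
        t2a, _ = self.text_to_audio(text, audio, audio)   # text tokens attend to audio
        pooled = torch.stack(
            [audio.mean(1), text.mean(1), a2t.mean(1), t2a.mean(1)], dim=1
        )  # (batch, 4, 1024): audio global, text global, audio->text, text->audio
        w = torch.softmax(self.mix_weights, dim=0)        # convex combination weights
        return (w[None, :, None] * pooled).sum(dim=1)     # (batch, 1024) fused embedding

mixer = CrossModalMixer()
fused = mixer(torch.randn(2, 40, 1024), torch.randn(2, 12, 1024))
print(fused.shape)  # torch.Size([2, 1024])
```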
## Quick Start
### Installation
```bash
pip install transformers torch torchaudio huggingface_hub
```
### Basic Usage
```python
from transformers import AutoModel, AutoConfig
from huggingface_hub import hf_hub_download
import torch
import numpy as np
import importlib.util
# Load model - custom architecture requires importing the model class
model_files = hf_hub_download(repo_id="videoloc/seamless-crossattention", filename="modeling_seamless_crossattention.py")
spec = importlib.util.spec_from_file_location("modeling_seamless_crossattention", model_files)
modeling_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(modeling_module)
# Now load the model using the custom class
config = modeling_module.SeamlessCrossAttentionConfig.from_pretrained("videoloc/seamless-crossattention")
model = modeling_module.HFSeamlessCrossAttention.from_pretrained("videoloc/seamless-crossattention")
# Load the data collator (included in this repo)
collator_file = hf_hub_download(repo_id="videoloc/seamless-crossattention", filename="data_collator.py")
spec = importlib.util.spec_from_file_location("data_collator", collator_file)
collator_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(collator_module)
# Initialize data collator
data_collator = collator_module.DataCollatorSimpleSeamless(
processor="facebook/hf-seamless-m4t-medium",
max_audio_length_sec=8.0,
max_text_length=256
)
# Prepare your data
your_data = [
{
'raw_audio': np.random.randn(16000 * 5), # 5 seconds at 16kHz
'raw_text': "Your subtitle text here",
# Note: Cross-attention model doesn't require translation features
}
]
# Process and run inference
batch = data_collator(your_data)
model.eval()
with torch.no_grad():
outputs = model(**batch)
tte_prediction = outputs.logits.item()
print(f"Predicted Time To Edit (TTE): {tte_prediction:.2f} seconds")
```
## Model Details
- **Base Model**: SeamlessM4T (facebook/hf-seamless-m4t-medium)
- **Audio Encoder**: Frozen SeamlessM4T speech encoder
- **Text Encoder**: Frozen SeamlessM4T text encoder
- **Hidden Size**: 1024
- **Attention Heads**: 8 (configurable)
- **Cross-Attention**: Bidirectional (audio↔text)
- **Scalar Mix**: 4 embeddings (audio global, text global, audio→text, text→audio)
- **Audio Input**: 16kHz
- **Output**: Single regression value (TTE in seconds)
- **Task**: Subtitle editing time prediction
## Data Format
Your input data should be a list of dictionaries with:
- `raw_audio`: NumPy array of audio samples (16kHz sampling rate)
- `raw_text`: String of subtitle text
- `labels`: Target TTE values in seconds (optional, for training)
Example:
```python
data = [
{
'raw_audio': audio_samples, # shape: (num_samples,) at 16kHz
'raw_text': "Subtitle text content",
'labels': 2.5 # optional TTE target value in seconds
}
]
```
## Performance Metrics
- **Best Eval RMSE**: 33.34
## Training Details
- **Base Model**: facebook/hf-seamless-m4t-medium
- **Model Type**: seamless_cross_attention
- **Epochs**: 10
- **Batch Size (Train)**: 32
- **Batch Size (Eval)**: 64
- **Learning Rate**: 1.2e-4
- **LR Scheduler**: cosine_with_restarts
- **Warmup Ratio**: 0.05
- **Weight Decay**: 0.001
- **Optimizer**: AdamW (torch)
- **Max Grad Norm**: 1.0
- **FP16**: True
- **Early Stopping Patience**: 5
- **Audio Max Length**: 8.0 seconds
- **Text Max Length**: 256 tokens
- **Sample Rate**: 16kHz
- **Cross-Attention**: 8-head multi-head attention
- **Scalar Mixing**: 4 embedding types
- **Embedding Regularization**: Optional L2
- **Normalization**: None (raw values)
- **Dataset Split**: 80/20 train/test
- **Random Seed**: 42
- **Metric**: RMSE (lower is better)
## Training Configuration
The model was trained with the following specifications:
- **Dataset**: Multimodal audio-subtitle pairs with TTE annotations (5 languages: EN, FR, ES, IT, DE)
- **Train/Test Split**: 80/20 with random seed 42
- **Audio Processing**: 16kHz sampling, max 8.0 seconds, no offset
- **Text Processing**: Max 256 tokens
- **Cross-Attention**: 8-head multi-head attention with dropout
- **Scalar Mixing**: Learnable combination of 4 embedding types
- **Normalization**: None (raw TTE values in seconds)
- **Caching**: Audio segments cached and compressed for efficiency
## Usage Notes
- This is the **advanced cross-attention** variant with sophisticated attention mechanisms
- For simpler models, see `seamless-basic`, `seamless-translation`, or `seamless-langpairs`
- Model expects 16kHz audio input (automatically resampled by data collator)
- Cross-attention captures complex temporal and semantic relationships
- No feature normalization applied - outputs raw TTE predictions in seconds
- Optimized for detailed subtitle editing time estimation tasks
## Architecture Advantages
- **Rich Cross-Modal Interactions**: Audio and text modalities directly attend to each other
- **Temporal Alignment**: Cross-attention naturally captures temporal relationships
- **Semantic Understanding**: Text-to-audio attention helps model understand content meaning
- **Flexible Combination**: Scalar mixing allows model to weight different embedding types
- **Regularization Options**: Optional embedding regularization for training stability
## Limitations
- Higher computational complexity than basic models due to attention mechanisms
- Requires more training data to fully leverage cross-attention capabilities
- Designed for TTE prediction, not general audio-text matching
- Performance may vary on out-of-domain content or different editing workflows
- Requires specific data preprocessing (use included data collator)
## Related Models
- **[seamless-basic](https://huggingface.co/videoloc/seamless-basic)**: Basic audio+text model without attention mechanisms
- **[seamless-translation](https://huggingface.co/videoloc/seamless-translation)**: Includes translation awareness but no cross-attention
- **[seamless-langpairs](https://huggingface.co/videoloc/seamless-langpairs)**: Includes language pair embeddings but no cross-attention
|
LaaP-ai/donut-base-invoicev1.26
|
LaaP-ai
| 2025-06-20T07:40:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-20T07:40:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BCCard/Qwen2.5-Coder-7B-FP8-Dynamic
|
BCCard
| 2025-06-20T07:40:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B",
"base_model:quantized:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-06-20T06:18:29Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-7B-FP8-Dynamic
## Model Overview
- **Model Architecture:** Qwen2.5-Coder-7B-Instruct
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** BC Card
Quantized version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) to FP8 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
    model="BCCard/Qwen2.5-Coder-7B-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)
# prepare inputs (Qwen instruct models use the ChatML prompt format)
question = "What is the code for python hello world?"
inputs = {
    "prompt": f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n",
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Qwen2.5-Coder Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes, 0.5, 1.5, 3, 7, 14, and 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or use this model for fill-in-the-middle tasks.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code of Qwen2.5-Coder has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
Triangle104/Impish_Magic_24B-Q5_K_S-GGUF
|
Triangle104
| 2025-06-20T07:38:44Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T07:36:33Z |
---
license: apache-2.0
language:
- en
base_model: SicariusSicariiStuff/Impish_Magic_24B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Impish_Magic_24B-Q5_K_S-GGUF
This model was converted to GGUF format from [`SicariusSicariiStuff/Impish_Magic_24B`](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) for more details on the model.
---
This model is based on mistralai/Magistral-Small-2506, so naturally it's named Impish_Magic. It is a truly excellent size: it has been tested on a laptop with a 16GB GPU (4090m) and runs quite fast.
This model went through a "full" fine-tune over 100M unique tokens. Why "full" in quotes? Specific areas of the model were tuned in an attempt to change vocabulary usage while keeping as much intelligence as possible. So this is definitely not a LoRA, but also not exactly a proper full fine-tune; rather something in between.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Impish_Magic_24B-Q5_K_S-GGUF --hf-file impish_magic_24b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Impish_Magic_24B-Q5_K_S-GGUF --hf-file impish_magic_24b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Impish_Magic_24B-Q5_K_S-GGUF --hf-file impish_magic_24b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Impish_Magic_24B-Q5_K_S-GGUF --hf-file impish_magic_24b-q5_k_s.gguf -c 2048
```
|
morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-18-2025-06-20
|
morturr
| 2025-06-20T07:33:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T07:33:20Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-18-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-18-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
jinx2321/byt5-1e4-paper-5e5-dict-sentences
|
jinx2321
| 2025-06-20T07:30:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:jinx2321/byt5-1e4-paper",
"base_model:finetune:jinx2321/byt5-1e4-paper",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-20T07:28:34Z |
---
library_name: transformers
license: apache-2.0
base_model: jinx2321/byt5-1e4-paper
tags:
- generated_from_trainer
model-index:
- name: byt5-1e4-paper-dict-sentences
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-1e4-paper-dict-sentences
This model is a fine-tuned version of [jinx2321/byt5-1e4-paper](https://huggingface.co/jinx2321/byt5-1e4-paper) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
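A minimal inference sketch for this checkpoint (standard transformers seq2seq usage; the input sentence is a placeholder and the snippet is untested):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "jinx2321/byt5-1e4-paper-5e5-dict-sentences"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Example input sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```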
|
Moncyan/Med-U1-7B-medcalc
|
Moncyan
| 2025-06-20T07:28:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T07:28:13Z |
---
license: apache-2.0
---
|
ujjawal077/cyber-arabic-llama3
|
ujjawal077
| 2025-06-20T07:27:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T07:22:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nitish035/mistral_CMoS_adapter32_2nd_584-c1
|
Nitish035
| 2025-06-20T07:26:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T07:20:08Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nitish035
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
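For completeness, a minimal 4-bit loading sketch with Unsloth (assuming the standard `FastLanguageModel` API; the sequence length is a placeholder and the snippet is untested for this repo):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Nitish035/mistral_CMoS_adapter32_2nd_584-c1",
    max_seq_length=2048,  # placeholder; set to your actual context need
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode
```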
|
Hanoch4869/TransMind
|
Hanoch4869
| 2025-06-20T07:25:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"text-classification",
"en",
"base_model:Qwen/QwQ-32B",
"base_model:finetune:Qwen/QwQ-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T02:56:38Z |
---
license: apache-2.0
base_model:
- Qwen/QwQ-32B
pipeline_tag: text-classification
language:
- en
library_name: transformers
tags:
- chat
---
# TransMind - 通鸣智响

**TransMind** is an expert AI model for the communications domain, built on an advanced large language model architecture and optimized specifically for the telecommunications industry. Developed on the robust QwQ-32B foundation, the model achieves deep integration of communication knowledge and enhanced professional capabilities through domain-specific reinforcement learning. With 32 billion parameters, its performance rivals DeepSeek-R1 (671B total parameters, 37B activated).
## Key Features
### 🚀 Expert-Level Communication Capabilities
- Mastery of communication protocols (5G/6G, TCP/IP, HTTP/3)
- Profound understanding of wireless communication principles & signal processing
- Network optimization & fault diagnosis expertise
- Communication system design & planning proficiency
- Professional interpretation of telecom standards & specifications
### ⚡ Reinforcement Learning Enhanced Architecture
- Powerful 32B-parameter foundation based on QwQ-32B
- Optimized communication-domain reasoning via large-scale RL
- Multi-phase training integrating specialized communication data
- Deep reasoning for complex communication problem-solving
- Domain-specific reward functions (technical accuracy, solution feasibility, efficiency optimization, innovation)
- Adaptive learning with dynamic strategy adjustment
### 🛠️ Intelligent Agent Capabilities
- Integrated communication-specific tool support
- Dynamic solution adjustment based on network feedback
- End-to-end system analysis & optimization
- Multi-step technical diagnosis & troubleshooting
- Real-time performance monitoring & feedback loops
## Technical Advantages
```mermaid
graph LR
A[QwQ-32B Base Architecture] --> B[Communication-Domain RL]
B --> C[Protocol Expertise]
B --> D[Network Optimization Engine]
B --> E[System Design Capabilities]
C --> F[TransMind]
```
## Quick Start
The following example uses `apply_chat_template` to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r's are in the word \"strawberry\""
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Contribution & Licensing
We welcome communication domain experts to participate in model optimization! Contribute through:
- Submitting specialized communication datasets
- Reporting domain-specific issues
- Optimizing communication tool integrations
License: Apache License 2.0
|
carloshernan19/ppo-LunarLander-v2
|
carloshernan19
| 2025-06-20T07:19:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T06:19:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.41 +/- 9.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the standard SB3 Hub convention; adjust if it differs.
checkpoint = load_from_hub("carloshernan19/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ujjawal077/cyber-arabic-llama2
|
ujjawal077
| 2025-06-20T07:16:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T07:12:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird
|
veddhanth
| 2025-06-20T07:07:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-20T07:01:35Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks bird
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks bird to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
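Pending the authors' snippet, a minimal sketch (assuming standard diffusers LoRA loading; step count and output path are placeholders), using the trigger prompt from above:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-map-7-pretrained-mockingbird")

image = pipe("a photo of sks bird", num_inference_steps=30).images[0]
image.save("sks_bird.png")
```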
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_marilyn_monroe-bs1-steps600-lr1e-04
|
Josephinepassananti
| 2025-06-20T07:06:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T06:36:50Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_marilyn_monroe-bs1-steps600-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the None dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
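Pending the authors' snippet, a minimal sketch (assuming standard diffusers LoRA loading on the SD 2.1 base; the prompt is a placeholder, not taken from the training data):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_marilyn_monroe-bs1-steps600-lr1e-04"
)
image = pipe("a portrait photo", num_inference_steps=30).images[0]
```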
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
phuongdoan01/Gensyn-Qwen2.5-1.5B-Instruct-acbaa
|
phuongdoan01
| 2025-06-20T07:05:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"rl-swarm",
"gensyn",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T06:37:50Z |
---
license: apache-2.0
license_link: https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
tags:
- chat
- rl-swarm
- gensyn
library_name: transformers
---
# Qwen2.5-1.5B-Instruct
## Introduction
This model is intended for use in the [Gensyn RL Swarm](https://www.gensyn.ai/articles/rl-swarm), where it is finetuned locally via peer-to-peer reinforcement-learning post-training.
Once finetuned, the model can be used as normal in any workflow; for details on how to do this, please refer to the [original model documentation](https://qwen.readthedocs.io/en/latest/).
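As an illustration of that ordinary workflow, a minimal chat-generation sketch (standard transformers usage; the prompt is a placeholder and the snippet is untested for this checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phuongdoan01/Gensyn-Qwen2.5-1.5B-Instruct-acbaa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Briefly explain peer-to-peer RL post-training."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```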
For more details on the original model, please refer to the original repository [here](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
This repo contains an **unmodified version** of the instruction-tuned 1.5B Qwen2.5 model, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: full 32,768 tokens; generation up to 8,192 tokens
## Requirements
This model is intended for use in the [Gensyn RL Swarm](https://www.gensyn.ai/articles/rl-swarm) system. For details on model requirements when using it outside of a swarm, refer to the original Qwen repo [here](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
## Quickstart
To deploy this model into a swarm and/or participate in the Gensyn Testnet, follow the instructions in the [RL Swarm repository](https://github.com/gensyn-ai/rl-swarm), read about the [testnet](https://www.gensyn.ai/testnet), read the [RL Swarm overview](https://www.gensyn.ai/articles/rl-swarm), and/or read the [RL Swarm technical report](https://github.com/gensyn-ai/paper-rl-swarm/blob/main/latest.pdf).
|
hectordiazgomez/gemma-3-4b-reasoning-translator-grpo
|
hectordiazgomez
| 2025-06-20T07:05:10Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-20T07:02:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hzzscience/WaiyumiaoModels
|
hzzscience
| 2025-06-20T07:01:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T06:36:55Z |
---
license: apache-2.0
---
|
mradermacher/Virtuoso-Large-GGUF
|
mradermacher
| 2025-06-20T06:59:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:arcee-ai/Virtuoso-Large",
"base_model:quantized:arcee-ai/Virtuoso-Large",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T15:16:41Z |
---
base_model: arcee-ai/Virtuoso-Large
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/arcee-ai/Virtuoso-Large
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Virtuoso-Large-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
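For the split files in the table below, the parts are plain byte-level splits, so reassembly is simple concatenation. A sketch in Python (filenames taken from the Q6_K row; adjust for the quant you downloaded):
```python
# Reassemble a split GGUF by concatenating its parts in order.
import shutil

parts = [
    "Virtuoso-Large.Q6_K.gguf.part1of2",
    "Virtuoso-Large.Q6_K.gguf.part2of2",
]
with open("Virtuoso-Large.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```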
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Virtuoso-Large-GGUF/resolve/main/Virtuoso-Large.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ujjawal077/cyber-arabic-llama1
|
ujjawal077
| 2025-06-20T06:57:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T06:45:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chanhue/Qwen3-4B-finetune
|
chanhue
| 2025-06-20T06:49:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T06:48:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/Selinaliu1030-gr00t-example_dataset_move_toast-2qazp
|
phospho-app
| 2025-06-20T06:45:36Z | 0 | 0 | null |
[
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-20T06:42:33Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1146, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 790, in get_data_by_modality
return self.get_video(trajectory_id, key, base_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 658, in get_video
video_timestamp = timestamp[step_indices]
~~~~~~~~~^^^^^^^^^^^^^^
IndexError: index 131 is out of bounds for axis 0 with size 81
0%| | 0/1080 [00:03<?, ?it/s]
```
## Training parameters:
- **Dataset**: [Selinaliu1030/example_dataset_move_toast](https://huggingface.co/datasets/Selinaliu1030/example_dataset_move_toast)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
omrisap/TreeRPO_V1_6400
|
omrisap
| 2025-06-20T06:38:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T06:35:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_man-bs1-steps600-lr1e-04
|
Josephinepassananti
| 2025-06-20T06:36:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T05:12:31Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_man-bs1-steps600-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the None dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
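Pending the authors' snippet, a minimal sketch (assuming standard diffusers LoRA loading on the SD 2.1 base; the prompt is a placeholder):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_man-bs1-steps600-lr1e-04"
)
image = pipe("a portrait photo", num_inference_steps=30).images[0]
```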
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|