| Column | Type | Range / Values |
|:--|:--|:--|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-27 00:47:30 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 533 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-27 00:47:21 |
| card | string | length 11 – 1.01M |

| Field | Value |
|:--|:--|
| modelId | mci29/sn29_y0m4_duve |
| author | mci29 |
| last_modified | 2025-06-19T15:44:22Z |
| downloads | 0 |
| likes | 0 |
| library_name | transformers |
| tags | transformers, safetensors, llama, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us |
| pipeline_tag | text-generation |
| createdAt | 2025-06-19T15:40:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
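The card leaves this section empty; the following is a minimal hedged sketch inferred from the repo's `transformers`, `llama`, and `text-generation` tags, not an official snippet from the author (the repo id comes from the dataset row above; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: the repo id is taken from the dataset row above; the card
# itself provides no usage code, so treat this as an assumption-laden example.
repo_id = "mci29/sn29_y0m4_duve"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```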
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

| Field | Value |
|:--|:--|
| modelId | tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-4 |
| author | tomaarsen |
| last_modified | 2025-06-19T15:40:26Z |
| downloads | 0 |
| likes | 0 |
| library_name | sentence-transformers |
| tags | sentence-transformers, safetensors, bert, sparse-encoder, sparse, csr, generated_from_trainer, dataset_size:99000, loss:CSRLoss, loss:SparseMultipleNegativesRankingLoss, feature-extraction, en, dataset:sentence-transformers/natural-questions, arxiv:1908.10084, arxiv:2503.01776, arxiv:1705.00652, base_model:mixedbread-ai/mxbai-embed-large-v1, base_model:finetune:mixedbread-ai/mxbai-embed-large-v1, license:apache-2.0, model-index, co2_eq_emissions, autotrain_compatible, text-embeddings-inference, endpoints_compatible, region:us |
| pipeline_tag | feature-extraction |
| createdAt | 2025-06-19T15:40:18Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- csr
- generated_from_trainer
- dataset_size:99000
- loss:CSRLoss
- loss:SparseMultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
continue to take somewhat differing stances on regional conflicts such the Yemeni
Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
which has fought against Saudi-backed forces, and the Syrian Civil War, where
the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
manufacturing industries include aluminium production, food processing, metal
fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
sector continues to dominate New Zealand's exports, despite accounting for 6.5%
of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
a single after a fourteen-year breakup. It was also the first song written by
bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
played live for the first time during their Hell Freezes Over tour in 1994. It
returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
Rock Tracks chart. The song was not played live by the Eagles after the "Hell
Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
who is considered by Christians to be one of the first Gentiles to convert to
the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 47.46504952064221
energy_consumed: 0.12211166786032028
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.373
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Sparse CSR model trained on Natural Questions
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 8
type: NanoMSMARCO_8
metrics:
- type: dot_accuracy@1
value: 0.16
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.2
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.28
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.4
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.16
name: Dot Precision@1
- type: dot_precision@3
value: 0.06666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.056000000000000015
name: Dot Precision@5
- type: dot_precision@10
value: 0.04
name: Dot Precision@10
- type: dot_recall@1
value: 0.16
name: Dot Recall@1
- type: dot_recall@3
value: 0.2
name: Dot Recall@3
- type: dot_recall@5
value: 0.28
name: Dot Recall@5
- type: dot_recall@10
value: 0.4
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.2553207334684595
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.2125238095238095
name: Dot Mrr@10
- type: dot_map@100
value: 0.2276491742120407
name: Dot Map@100
- type: query_active_dims
value: 8.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.998046875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 8.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.998046875
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 8
type: NanoBEIR_mean_8
metrics:
- type: dot_accuracy@1
value: 0.16
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.2
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.28
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.4
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.16
name: Dot Precision@1
- type: dot_precision@3
value: 0.06666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.056000000000000015
name: Dot Precision@5
- type: dot_precision@10
value: 0.04
name: Dot Precision@10
- type: dot_recall@1
value: 0.16
name: Dot Recall@1
- type: dot_recall@3
value: 0.2
name: Dot Recall@3
- type: dot_recall@5
value: 0.28
name: Dot Recall@5
- type: dot_recall@10
value: 0.4
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.2553207334684595
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.2125238095238095
name: Dot Mrr@10
- type: dot_map@100
value: 0.2276491742120407
name: Dot Map@100
- type: query_active_dims
value: 8.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.998046875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 8.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.998046875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 16
type: NanoMSMARCO_16
metrics:
- type: dot_accuracy@1
value: 0.24
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.38
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.5
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.58
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.24
name: Dot Precision@1
- type: dot_precision@3
value: 0.12666666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.1
name: Dot Precision@5
- type: dot_precision@10
value: 0.05800000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.24
name: Dot Recall@1
- type: dot_recall@3
value: 0.38
name: Dot Recall@3
- type: dot_recall@5
value: 0.5
name: Dot Recall@5
- type: dot_recall@10
value: 0.58
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3970913773706993
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.34011111111111114
name: Dot Mrr@10
- type: dot_map@100
value: 0.3530097721306681
name: Dot Map@100
- type: query_active_dims
value: 16.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.99609375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 16.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.99609375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 16
type: NanoBEIR_mean_16
metrics:
- type: dot_accuracy@1
value: 0.24
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.38
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.5
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.58
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.24
name: Dot Precision@1
- type: dot_precision@3
value: 0.12666666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.1
name: Dot Precision@5
- type: dot_precision@10
value: 0.05800000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.24
name: Dot Recall@1
- type: dot_recall@3
value: 0.38
name: Dot Recall@3
- type: dot_recall@5
value: 0.5
name: Dot Recall@5
- type: dot_recall@10
value: 0.58
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3970913773706993
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.34011111111111114
name: Dot Mrr@10
- type: dot_map@100
value: 0.3530097721306681
name: Dot Map@100
- type: query_active_dims
value: 16.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.99609375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 16.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.99609375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 32
type: NanoMSMARCO_32
metrics:
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.46
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.62
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.15333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.12400000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.07
name: Dot Precision@10
- type: dot_recall@1
value: 0.3
name: Dot Recall@1
- type: dot_recall@3
value: 0.46
name: Dot Recall@3
- type: dot_recall@5
value: 0.62
name: Dot Recall@5
- type: dot_recall@10
value: 0.7
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4872873611978302
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4205555555555555
name: Dot Mrr@10
- type: dot_map@100
value: 0.43261790702081204
name: Dot Map@100
- type: query_active_dims
value: 32.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9921875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 32.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9921875
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 32
type: NanoBEIR_mean_32
metrics:
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.46
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.62
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.15333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.12400000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.07
name: Dot Precision@10
- type: dot_recall@1
value: 0.3
name: Dot Recall@1
- type: dot_recall@3
value: 0.46
name: Dot Recall@3
- type: dot_recall@5
value: 0.62
name: Dot Recall@5
- type: dot_recall@10
value: 0.7
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4872873611978302
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4205555555555555
name: Dot Mrr@10
- type: dot_map@100
value: 0.43261790702081204
name: Dot Map@100
- type: query_active_dims
value: 32.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9921875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 32.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9921875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 64
type: NanoMSMARCO_64
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.68
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.78
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.2
name: Dot Precision@3
- type: dot_precision@5
value: 0.136
name: Dot Precision@5
- type: dot_precision@10
value: 0.07800000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.42
name: Dot Recall@1
- type: dot_recall@3
value: 0.6
name: Dot Recall@3
- type: dot_recall@5
value: 0.68
name: Dot Recall@5
- type: dot_recall@10
value: 0.78
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.591060924123
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5316666666666666
name: Dot Mrr@10
- type: dot_map@100
value: 0.5405635822735777
name: Dot Map@100
- type: query_active_dims
value: 64.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.984375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 64.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.984375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 64
type: NanoBEIR_mean_64
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.68
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.78
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.2
name: Dot Precision@3
- type: dot_precision@5
value: 0.136
name: Dot Precision@5
- type: dot_precision@10
value: 0.07800000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.42
name: Dot Recall@1
- type: dot_recall@3
value: 0.6
name: Dot Recall@3
- type: dot_recall@5
value: 0.68
name: Dot Recall@5
- type: dot_recall@10
value: 0.78
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.591060924123
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5316666666666666
name: Dot Mrr@10
- type: dot_map@100
value: 0.5405635822735777
name: Dot Map@100
- type: query_active_dims
value: 64.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.984375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 64.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.984375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 128
type: NanoMSMARCO_128
metrics:
- type: dot_accuracy@1
value: 0.36
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.72
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.82
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.36
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.14400000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08199999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.36
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.72
name: Dot Recall@5
- type: dot_recall@10
value: 0.82
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5877041624403332
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5139126984126984
name: Dot Mrr@10
- type: dot_map@100
value: 0.5216553078498245
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 128
type: NanoBEIR_mean_128
metrics:
- type: dot_accuracy@1
value: 0.36
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.72
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.82
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.36
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.14400000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08199999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.36
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.72
name: Dot Recall@5
- type: dot_recall@10
value: 0.82
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5877041624403332
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5139126984126984
name: Dot Mrr@10
- type: dot_map@100
value: 0.5216553078498245
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 256
type: NanoMSMARCO_256
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.74
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.82
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08199999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.42
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.74
name: Dot Recall@5
- type: dot_recall@10
value: 0.82
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6246741093433497
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5611904761904761
name: Dot Mrr@10
- type: dot_map@100
value: 0.5700740174857822
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 256
type: NanoBEIR_mean_256
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.74
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.82
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08199999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.42
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.74
name: Dot Recall@5
- type: dot_recall@10
value: 0.82
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6246741093433497
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5611904761904761
name: Dot Mrr@10
- type: dot_map@100
value: 0.5700740174857822
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: dot_accuracy@1
value: 0.36
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.52
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.66
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.8
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.36
name: Dot Precision@1
- type: dot_precision@3
value: 0.2
name: Dot Precision@3
- type: dot_precision@5
value: 0.15600000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.11399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.1573333333333333
name: Dot Recall@1
- type: dot_recall@3
value: 0.24733333333333335
name: Dot Recall@3
- type: dot_recall@5
value: 0.313
name: Dot Recall@5
- type: dot_recall@10
value: 0.43799999999999994
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.35656565827441056
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.479611111111111
name: Dot Mrr@10
- type: dot_map@100
value: 0.27824724841197973
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: dot_accuracy@1
value: 0.8
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.88
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.92
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.94
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.8
name: Dot Precision@1
- type: dot_precision@3
value: 0.6
name: Dot Precision@3
- type: dot_precision@5
value: 0.5800000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.484
name: Dot Precision@10
- type: dot_recall@1
value: 0.09363124545761783
name: Dot Recall@1
- type: dot_recall@3
value: 0.1617934849974966
name: Dot Recall@3
- type: dot_recall@5
value: 0.2269008951554618
name: Dot Recall@5
- type: dot_recall@10
value: 0.33039847394029737
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.607206208169174
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.852
name: Dot Mrr@10
- type: dot_map@100
value: 0.4541106866963296
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: dot_accuracy@1
value: 0.9
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.92
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.96
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9
name: Dot Precision@1
- type: dot_precision@3
value: 0.32666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.204
name: Dot Precision@5
- type: dot_precision@10
value: 0.102
name: Dot Precision@10
- type: dot_recall@1
value: 0.8466666666666667
name: Dot Recall@1
- type: dot_recall@3
value: 0.8933333333333333
name: Dot Recall@3
- type: dot_recall@5
value: 0.9333333333333332
name: Dot Recall@5
- type: dot_recall@10
value: 0.9333333333333332
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9080731736277194
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.92
name: Dot Mrr@10
- type: dot_map@100
value: 0.8921016869970377
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: dot_accuracy@1
value: 0.56
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.72
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.72
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.56
name: Dot Precision@1
- type: dot_precision@3
value: 0.31999999999999995
name: Dot Precision@3
- type: dot_precision@5
value: 0.236
name: Dot Precision@5
- type: dot_precision@10
value: 0.13
name: Dot Precision@10
- type: dot_recall@1
value: 0.29924603174603176
name: Dot Recall@1
- type: dot_recall@3
value: 0.46729365079365076
name: Dot Recall@3
- type: dot_recall@5
value: 0.5337301587301587
name: Dot Recall@5
- type: dot_recall@10
value: 0.5473412698412699
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5253203704684166
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6316666666666666
name: Dot Mrr@10
- type: dot_map@100
value: 0.48003870359394873
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: dot_accuracy@1
value: 0.76
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.94
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.94
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.76
name: Dot Precision@1
- type: dot_precision@3
value: 0.5
name: Dot Precision@3
- type: dot_precision@5
value: 0.316
name: Dot Precision@5
- type: dot_precision@10
value: 0.17199999999999996
name: Dot Precision@10
- type: dot_recall@1
value: 0.38
name: Dot Recall@1
- type: dot_recall@3
value: 0.75
name: Dot Recall@3
- type: dot_recall@5
value: 0.79
name: Dot Recall@5
- type: dot_recall@10
value: 0.86
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7910580229553633
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8333333333333333
name: Dot Mrr@10
- type: dot_map@100
value: 0.7410767962182596
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.76
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.82
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.15200000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08199999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.42
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.76
name: Dot Recall@5
- type: dot_recall@10
value: 0.82
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6248295446703863
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5613809523809523
name: Dot Mrr@10
- type: dot_map@100
value: 0.5703445525063172
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.44
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.56
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.44
name: Dot Precision@1
- type: dot_precision@3
value: 0.3533333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.32
name: Dot Precision@5
- type: dot_precision@10
value: 0.272
name: Dot Precision@10
- type: dot_recall@1
value: 0.03517605061787946
name: Dot Recall@1
- type: dot_recall@3
value: 0.07646787868408336
name: Dot Recall@3
- type: dot_recall@5
value: 0.11598401724221898
name: Dot Recall@5
- type: dot_recall@10
value: 0.15931797747485815
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.33447068554509884
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5147698412698413
name: Dot Mrr@10
- type: dot_map@100
value: 0.15438429278142912
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.5
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.72
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.76
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.5
name: Dot Precision@1
- type: dot_precision@3
value: 0.24666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.16
name: Dot Precision@5
- type: dot_precision@10
value: 0.088
name: Dot Precision@10
- type: dot_recall@1
value: 0.48
name: Dot Recall@1
- type: dot_recall@3
value: 0.67
name: Dot Recall@3
- type: dot_recall@5
value: 0.72
name: Dot Recall@5
- type: dot_recall@10
value: 0.79
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6479593376479322
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6163333333333333
name: Dot Mrr@10
- type: dot_map@100
value: 0.6035174820443362
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: dot_accuracy@1
value: 0.92
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.96
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 1.0
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.92
name: Dot Precision@1
- type: dot_precision@3
value: 0.3999999999999999
name: Dot Precision@3
- type: dot_precision@5
value: 0.26799999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.13799999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.7973333333333332
name: Dot Recall@1
- type: dot_recall@3
value: 0.922
name: Dot Recall@3
- type: dot_recall@5
value: 0.9893333333333334
name: Dot Recall@5
- type: dot_recall@10
value: 0.996
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9493554410777213
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9456666666666667
name: Dot Mrr@10
- type: dot_map@100
value: 0.9286237373737373
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: dot_accuracy@1
value: 0.56
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.76
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.78
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.56
name: Dot Precision@1
- type: dot_precision@3
value: 0.4
name: Dot Precision@3
- type: dot_precision@5
value: 0.29200000000000004
name: Dot Precision@5
- type: dot_precision@10
value: 0.20999999999999996
name: Dot Precision@10
- type: dot_recall@1
value: 0.11866666666666668
name: Dot Recall@1
- type: dot_recall@3
value: 0.24966666666666665
name: Dot Recall@3
- type: dot_recall@5
value: 0.30266666666666675
name: Dot Recall@5
- type: dot_recall@10
value: 0.4316666666666666
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4265505670611979
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6682142857142856
name: Dot Mrr@10
- type: dot_map@100
value: 0.3385559757581844
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: dot_accuracy@1
value: 0.36
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.78
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.84
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.94
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.36
name: Dot Precision@1
- type: dot_precision@3
value: 0.26
name: Dot Precision@3
- type: dot_precision@5
value: 0.16799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.09399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.36
name: Dot Recall@1
- type: dot_recall@3
value: 0.78
name: Dot Recall@3
- type: dot_recall@5
value: 0.84
name: Dot Recall@5
- type: dot_recall@10
value: 0.94
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6674878961390456
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5782460317460317
name: Dot Mrr@10
- type: dot_map@100
value: 0.5802628384687207
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: dot_accuracy@1
value: 0.7
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.8
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.82
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.7
name: Dot Precision@1
- type: dot_precision@3
value: 0.2866666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.184
name: Dot Precision@5
- type: dot_precision@10
value: 0.1
name: Dot Precision@10
- type: dot_recall@1
value: 0.665
name: Dot Recall@1
- type: dot_recall@3
value: 0.79
name: Dot Recall@3
- type: dot_recall@5
value: 0.81
name: Dot Recall@5
- type: dot_recall@10
value: 0.88
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7776207541845983
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7519444444444445
name: Dot Mrr@10
- type: dot_map@100
value: 0.742050969601677
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: dot_accuracy@1
value: 0.5306122448979592
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.8367346938775511
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8979591836734694
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9795918367346939
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.5306122448979592
name: Dot Precision@1
- type: dot_precision@3
value: 0.5306122448979591
name: Dot Precision@3
- type: dot_precision@5
value: 0.5142857142857142
name: Dot Precision@5
- type: dot_precision@10
value: 0.43469387755102035
name: Dot Precision@10
- type: dot_recall@1
value: 0.03672756127909814
name: Dot Recall@1
- type: dot_recall@3
value: 0.11122615754561782
name: Dot Recall@3
- type: dot_recall@5
value: 0.17495428374251296
name: Dot Recall@5
- type: dot_recall@10
value: 0.28731694149491666
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.47801832046439025
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7052073210236476
name: Dot Mrr@10
- type: dot_map@100
value: 0.3658602219028105
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.6008163265306123
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7674411302982732
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8198430141287284
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.8799686028257457
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.6008163265306123
name: Dot Precision@1
- type: dot_precision@3
value: 0.3567137624280482
name: Dot Precision@3
- type: dot_precision@5
value: 0.2730989010989011
name: Dot Precision@5
- type: dot_precision@10
value: 0.18620722135007847
name: Dot Precision@10
- type: dot_recall@1
value: 0.3607523760846636
name: Dot Recall@1
- type: dot_recall@3
value: 0.5199318850272447
name: Dot Recall@3
- type: dot_recall@5
value: 0.5776848221695142
name: Dot Recall@5
- type: dot_recall@10
value: 0.6471826663654878
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6226550754065734
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6967979990531011
name: Dot Mrr@10
- type: dot_map@100
value: 0.5483980917195976
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
---
# Sparse CSR model trained on Natural Questions
This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
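To make the `CSRSparsity` step concrete, here is a toy sketch of the expand-then-top-k idea (not the module's actual implementation; the untrained `Linear` layer stands in for the learned projection), using the dimensions from the config above (`input_dim=1024`, `hidden_dim=4096`, `k=256`):

```python
import torch

# Toy illustration only: project a pooled 1024-dim embedding into a wider
# 4096-dim space, then keep the k=256 largest activations and zero the rest.
torch.manual_seed(0)
pooled = torch.randn(1, 1024)            # stand-in for the CLS-pooled embedding
up = torch.nn.Linear(1024, 4096)         # stand-in for the learned up-projection
hidden = torch.relu(up(pooled))          # non-negative 4096-dim activations
values, indices = torch.topk(hidden, k=256, dim=1)
sparse = torch.zeros_like(hidden).scatter_(1, indices, values)
print((sparse != 0).sum().item())        # at most 256 active dimensions
```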
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-4")
# Run inference
queries = [
"who is cornelius in the book of acts",
]
documents = [
'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
"Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 4096] [3, 4096]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[111.0676, 23.1031, 22.6751]])
```
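As a quick follow-up to the snippet above, you can verify the sparsity yourself; a minimal sketch assuming the returned embeddings are torch tensors (densify first if they come back as sparse COO tensors):

```python
# Continues from the snippet above.
q = query_embeddings.to_dense() if query_embeddings.is_sparse else query_embeddings
d = document_embeddings.to_dense() if document_embeddings.is_sparse else document_embeddings
print((q != 0).sum(dim=1))  # active dims per query, at most 256
print((d != 0).sum(dim=1))  # active dims per document, at most 256
```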
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_8`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 8
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.16 |
| dot_accuracy@3 | 0.2 |
| dot_accuracy@5 | 0.28 |
| dot_accuracy@10 | 0.4 |
| dot_precision@1 | 0.16 |
| dot_precision@3 | 0.0667 |
| dot_precision@5 | 0.056 |
| dot_precision@10 | 0.04 |
| dot_recall@1 | 0.16 |
| dot_recall@3 | 0.2 |
| dot_recall@5 | 0.28 |
| dot_recall@10 | 0.4 |
| **dot_ndcg@10** | **0.2553** |
| dot_mrr@10 | 0.2125 |
| dot_map@100 | 0.2276 |
| query_active_dims | 8.0 |
| query_sparsity_ratio | 0.998 |
| corpus_active_dims | 8.0 |
| corpus_sparsity_ratio | 0.998 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_8`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 8
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.16 |
| dot_accuracy@3 | 0.2 |
| dot_accuracy@5 | 0.28 |
| dot_accuracy@10 | 0.4 |
| dot_precision@1 | 0.16 |
| dot_precision@3 | 0.0667 |
| dot_precision@5 | 0.056 |
| dot_precision@10 | 0.04 |
| dot_recall@1 | 0.16 |
| dot_recall@3 | 0.2 |
| dot_recall@5 | 0.28 |
| dot_recall@10 | 0.4 |
| **dot_ndcg@10** | **0.2553** |
| dot_mrr@10 | 0.2125 |
| dot_map@100 | 0.2276 |
| query_active_dims | 8.0 |
| query_sparsity_ratio | 0.998 |
| corpus_active_dims | 8.0 |
| corpus_sparsity_ratio | 0.998 |
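The JSON config above contains everything needed to re-run this evaluation; a minimal sketch, assuming `SparseNanoBEIREvaluator` accepts the keyword arguments shown in the config and follows the usual callable-evaluator protocol of sentence-transformers:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-4")

# Parameters copied from the JSON config above; the 16/32/64/128/256 budgets
# reported later were evaluated the same way with a different max_active_dims.
evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco"], max_active_dims=8)
results = evaluator(model)
print(results)
```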
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_16`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 16
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.24 |
| dot_accuracy@3 | 0.38 |
| dot_accuracy@5 | 0.5 |
| dot_accuracy@10 | 0.58 |
| dot_precision@1 | 0.24 |
| dot_precision@3 | 0.1267 |
| dot_precision@5 | 0.1 |
| dot_precision@10 | 0.058 |
| dot_recall@1 | 0.24 |
| dot_recall@3 | 0.38 |
| dot_recall@5 | 0.5 |
| dot_recall@10 | 0.58 |
| **dot_ndcg@10** | **0.3971** |
| dot_mrr@10 | 0.3401 |
| dot_map@100 | 0.353 |
| query_active_dims | 16.0 |
| query_sparsity_ratio | 0.9961 |
| corpus_active_dims | 16.0 |
| corpus_sparsity_ratio | 0.9961 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_16`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 16
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.24 |
| dot_accuracy@3 | 0.38 |
| dot_accuracy@5 | 0.5 |
| dot_accuracy@10 | 0.58 |
| dot_precision@1 | 0.24 |
| dot_precision@3 | 0.1267 |
| dot_precision@5 | 0.1 |
| dot_precision@10 | 0.058 |
| dot_recall@1 | 0.24 |
| dot_recall@3 | 0.38 |
| dot_recall@5 | 0.5 |
| dot_recall@10 | 0.58 |
| **dot_ndcg@10** | **0.3971** |
| dot_mrr@10 | 0.3401 |
| dot_map@100 | 0.353 |
| query_active_dims | 16.0 |
| query_sparsity_ratio | 0.9961 |
| corpus_active_dims | 16.0 |
| corpus_sparsity_ratio | 0.9961 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_32`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 32
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.3 |
| dot_accuracy@3 | 0.46 |
| dot_accuracy@5 | 0.62 |
| dot_accuracy@10 | 0.7 |
| dot_precision@1 | 0.3 |
| dot_precision@3 | 0.1533 |
| dot_precision@5 | 0.124 |
| dot_precision@10 | 0.07 |
| dot_recall@1 | 0.3 |
| dot_recall@3 | 0.46 |
| dot_recall@5 | 0.62 |
| dot_recall@10 | 0.7 |
| **dot_ndcg@10** | **0.4873** |
| dot_mrr@10 | 0.4206 |
| dot_map@100 | 0.4326 |
| query_active_dims | 32.0 |
| query_sparsity_ratio | 0.9922 |
| corpus_active_dims | 32.0 |
| corpus_sparsity_ratio | 0.9922 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_32`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 32
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.3 |
| dot_accuracy@3 | 0.46 |
| dot_accuracy@5 | 0.62 |
| dot_accuracy@10 | 0.7 |
| dot_precision@1 | 0.3 |
| dot_precision@3 | 0.1533 |
| dot_precision@5 | 0.124 |
| dot_precision@10 | 0.07 |
| dot_recall@1 | 0.3 |
| dot_recall@3 | 0.46 |
| dot_recall@5 | 0.62 |
| dot_recall@10 | 0.7 |
| **dot_ndcg@10** | **0.4873** |
| dot_mrr@10 | 0.4206 |
| dot_map@100 | 0.4326 |
| query_active_dims | 32.0 |
| query_sparsity_ratio | 0.9922 |
| corpus_active_dims | 32.0 |
| corpus_sparsity_ratio | 0.9922 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_64`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 64
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.42 |
| dot_accuracy@3 | 0.6 |
| dot_accuracy@5 | 0.68 |
| dot_accuracy@10 | 0.78 |
| dot_precision@1 | 0.42 |
| dot_precision@3 | 0.2 |
| dot_precision@5 | 0.136 |
| dot_precision@10 | 0.078 |
| dot_recall@1 | 0.42 |
| dot_recall@3 | 0.6 |
| dot_recall@5 | 0.68 |
| dot_recall@10 | 0.78 |
| **dot_ndcg@10** | **0.5911** |
| dot_mrr@10 | 0.5317 |
| dot_map@100 | 0.5406 |
| query_active_dims | 64.0 |
| query_sparsity_ratio | 0.9844 |
| corpus_active_dims | 64.0 |
| corpus_sparsity_ratio | 0.9844 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_64`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 64
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.42 |
| dot_accuracy@3 | 0.6 |
| dot_accuracy@5 | 0.68 |
| dot_accuracy@10 | 0.78 |
| dot_precision@1 | 0.42 |
| dot_precision@3 | 0.2 |
| dot_precision@5 | 0.136 |
| dot_precision@10 | 0.078 |
| dot_recall@1 | 0.42 |
| dot_recall@3 | 0.6 |
| dot_recall@5 | 0.68 |
| dot_recall@10 | 0.78 |
| **dot_ndcg@10** | **0.5911** |
| dot_mrr@10 | 0.5317 |
| dot_map@100 | 0.5406 |
| query_active_dims | 64.0 |
| query_sparsity_ratio | 0.9844 |
| corpus_active_dims | 64.0 |
| corpus_sparsity_ratio | 0.9844 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_128`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 128
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.36 |
| dot_accuracy@3 | 0.64 |
| dot_accuracy@5 | 0.72 |
| dot_accuracy@10 | 0.82 |
| dot_precision@1 | 0.36 |
| dot_precision@3 | 0.2133 |
| dot_precision@5 | 0.144 |
| dot_precision@10 | 0.082 |
| dot_recall@1 | 0.36 |
| dot_recall@3 | 0.64 |
| dot_recall@5 | 0.72 |
| dot_recall@10 | 0.82 |
| **dot_ndcg@10** | **0.5877** |
| dot_mrr@10 | 0.5139 |
| dot_map@100 | 0.5217 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_128`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 128
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.36 |
| dot_accuracy@3 | 0.64 |
| dot_accuracy@5 | 0.72 |
| dot_accuracy@10 | 0.82 |
| dot_precision@1 | 0.36 |
| dot_precision@3 | 0.2133 |
| dot_precision@5 | 0.144 |
| dot_precision@10 | 0.082 |
| dot_recall@1 | 0.36 |
| dot_recall@3 | 0.64 |
| dot_recall@5 | 0.72 |
| dot_recall@10 | 0.82 |
| **dot_ndcg@10** | **0.5877** |
| dot_mrr@10 | 0.5139 |
| dot_map@100 | 0.5217 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_256`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 256
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.42 |
| dot_accuracy@3 | 0.64 |
| dot_accuracy@5 | 0.74 |
| dot_accuracy@10 | 0.82 |
| dot_precision@1 | 0.42 |
| dot_precision@3 | 0.2133 |
| dot_precision@5 | 0.148 |
| dot_precision@10 | 0.082 |
| dot_recall@1 | 0.42 |
| dot_recall@3 | 0.64 |
| dot_recall@5 | 0.74 |
| dot_recall@10 | 0.82 |
| **dot_ndcg@10** | **0.6247** |
| dot_mrr@10 | 0.5612 |
| dot_map@100 | 0.5701 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_256`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco"
],
"max_active_dims": 256
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.42 |
| dot_accuracy@3 | 0.64 |
| dot_accuracy@5 | 0.74 |
| dot_accuracy@10 | 0.82 |
| dot_precision@1 | 0.42 |
| dot_precision@3 | 0.2133 |
| dot_precision@5 | 0.148 |
| dot_precision@10 | 0.082 |
| dot_recall@1 | 0.42 |
| dot_recall@3 | 0.64 |
| dot_recall@5 | 0.74 |
| dot_recall@10 | 0.82 |
| **dot_ndcg@10** | **0.6247** |
| dot_mrr@10 | 0.5612 |
| dot_map@100 | 0.5701 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
#### Sparse Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:----------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:----------|:-------------------|:------------|:------------|:------------|:---------------|
| dot_accuracy@1 | 0.36 | 0.8 | 0.9 | 0.56 | 0.76 | 0.42 | 0.44 | 0.5 | 0.92 | 0.56 | 0.36 | 0.7 | 0.5306 |
| dot_accuracy@3 | 0.52 | 0.88 | 0.92 | 0.7 | 0.9 | 0.64 | 0.56 | 0.72 | 0.96 | 0.76 | 0.78 | 0.8 | 0.8367 |
| dot_accuracy@5 | 0.66 | 0.92 | 0.96 | 0.72 | 0.94 | 0.76 | 0.6 | 0.76 | 1.0 | 0.78 | 0.84 | 0.82 | 0.898 |
| dot_accuracy@10 | 0.8 | 0.94 | 0.96 | 0.72 | 0.94 | 0.82 | 0.74 | 0.84 | 1.0 | 0.88 | 0.94 | 0.88 | 0.9796 |
| dot_precision@1 | 0.36 | 0.8 | 0.9 | 0.56 | 0.76 | 0.42 | 0.44 | 0.5 | 0.92 | 0.56 | 0.36 | 0.7 | 0.5306 |
| dot_precision@3 | 0.2 | 0.6 | 0.3267 | 0.32 | 0.5 | 0.2133 | 0.3533 | 0.2467 | 0.4 | 0.4 | 0.26 | 0.2867 | 0.5306 |
| dot_precision@5 | 0.156 | 0.58 | 0.204 | 0.236 | 0.316 | 0.152 | 0.32 | 0.16 | 0.268 | 0.292 | 0.168 | 0.184 | 0.5143 |
| dot_precision@10 | 0.114 | 0.484 | 0.102 | 0.13 | 0.172 | 0.082 | 0.272 | 0.088 | 0.138 | 0.21 | 0.094 | 0.1 | 0.4347 |
| dot_recall@1 | 0.1573 | 0.0936 | 0.8467 | 0.2992 | 0.38 | 0.42 | 0.0352 | 0.48 | 0.7973 | 0.1187 | 0.36 | 0.665 | 0.0367 |
| dot_recall@3 | 0.2473 | 0.1618 | 0.8933 | 0.4673 | 0.75 | 0.64 | 0.0765 | 0.67 | 0.922 | 0.2497 | 0.78 | 0.79 | 0.1112 |
| dot_recall@5 | 0.313 | 0.2269 | 0.9333 | 0.5337 | 0.79 | 0.76 | 0.116 | 0.72 | 0.9893 | 0.3027 | 0.84 | 0.81 | 0.175 |
| dot_recall@10 | 0.438 | 0.3304 | 0.9333 | 0.5473 | 0.86 | 0.82 | 0.1593 | 0.79 | 0.996 | 0.4317 | 0.94 | 0.88 | 0.2873 |
| **dot_ndcg@10** | **0.3566** | **0.6072** | **0.9081** | **0.5253** | **0.7911** | **0.6248** | **0.3345** | **0.648** | **0.9494** | **0.4266** | **0.6675** | **0.7776** | **0.478** |
| dot_mrr@10 | 0.4796 | 0.852 | 0.92 | 0.6317 | 0.8333 | 0.5614 | 0.5148 | 0.6163 | 0.9457 | 0.6682 | 0.5782 | 0.7519 | 0.7052 |
| dot_map@100 | 0.2782 | 0.4541 | 0.8921 | 0.48 | 0.7411 | 0.5703 | 0.1544 | 0.6035 | 0.9286 | 0.3386 | 0.5803 | 0.7421 | 0.3659 |
| query_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 |
| query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 |
| corpus_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 |
| corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"climatefever",
"dbpedia",
"fever",
"fiqa2018",
"hotpotqa",
"msmarco",
"nfcorpus",
"nq",
"quoraretrieval",
"scidocs",
"arguana",
"scifact",
"touche2020"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.6008 |
| dot_accuracy@3 | 0.7674 |
| dot_accuracy@5 | 0.8198 |
| dot_accuracy@10 | 0.88 |
| dot_precision@1 | 0.6008 |
| dot_precision@3 | 0.3567 |
| dot_precision@5 | 0.2731 |
| dot_precision@10 | 0.1862 |
| dot_recall@1 | 0.3608 |
| dot_recall@3 | 0.5199 |
| dot_recall@5 | 0.5777 |
| dot_recall@10 | 0.6472 |
| **dot_ndcg@10** | **0.6227** |
| dot_mrr@10 | 0.6968 |
| dot_map@100 | 0.5484 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
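For reference, the mean scores above can be reproduced with the evaluator linked throughout this section. A minimal sketch (the checkpoint path is a placeholder; `max_active_dims=256` matches the 0.9375 sparsity ratio reported, i.e. 1 - 256/4096):
```python
# Minimal sketch, assuming the SparseNanoBEIREvaluator API linked above.
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

model = SparseEncoder("path/to/this-checkpoint")  # placeholder path
evaluator = SparseNanoBEIREvaluator(
    dataset_names=["msmarco"],  # or the full 13-dataset list evaluated above
    max_active_dims=256,        # 1 - 256/4096 = 0.9375 sparsity, as reported
)
results = evaluator(model)
print(results[evaluator.primary_metric])  # dot_ndcg@10
```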
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 3.0,
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
}
```
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 3.0,
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
}
```
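The loss configuration above corresponds roughly to the following construction (a sketch; `beta` and `gamma` are taken from the JSON, and the exact signature should be checked against the linked CSRLoss docs):
```python
# Sketch of the loss setup described above; `model` is a loaded SparseEncoder.
from sentence_transformers.sparse_encoder.losses import CSRLoss

loss = CSRLoss(
    model=model,
    beta=0.1,   # per the linked docs, weights the reconstruction term
    gamma=3.0,  # weights the SparseMultipleNegativesRankingLoss ranking term
)
```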
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 4e-05
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
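In code, the non-default values above translate roughly to the following (a sketch assuming sentence-transformers' sparse-encoder trainer API; the output directory is a placeholder):
```python
# Sketch only: mirrors the non-default hyperparameters listed above.
from sentence_transformers.sparse_encoder import SparseEncoderTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SparseEncoderTrainingArguments(
    output_dir="models/csr-sparse-encoder",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=4e-5,
    num_train_epochs=1,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```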
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_8_dot_ndcg@10 | NanoBEIR_mean_8_dot_ndcg@10 | NanoMSMARCO_16_dot_ndcg@10 | NanoBEIR_mean_16_dot_ndcg@10 | NanoMSMARCO_32_dot_ndcg@10 | NanoBEIR_mean_32_dot_ndcg@10 | NanoMSMARCO_64_dot_ndcg@10 | NanoBEIR_mean_64_dot_ndcg@10 | NanoMSMARCO_128_dot_ndcg@10 | NanoBEIR_mean_128_dot_ndcg@10 | NanoMSMARCO_256_dot_ndcg@10 | NanoBEIR_mean_256_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:-------------------------:|:---------------------------:|:--------------------------:|:----------------------------:|:--------------------------:|:----------------------------:|:--------------------------:|:----------------------------:|:---------------------------:|:-----------------------------:|:---------------------------:|:-----------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:-----------------------:|:------------------------:|:------------------:|:------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:|:-------------------------:|
| -1 | -1 | - | - | 0.2447 | 0.2447 | 0.3677 | 0.3677 | 0.5086 | 0.5086 | 0.5304 | 0.5304 | 0.6134 | 0.6134 | 0.5961 | 0.5961 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0646 | 100 | 0.5048 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1293 | 200 | 0.5017 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1939 | 300 | 0.531 | 0.6279 | 0.2125 | 0.2125 | 0.4075 | 0.4075 | 0.4686 | 0.4686 | 0.5701 | 0.5701 | 0.6086 | 0.6086 | 0.5877 | 0.5877 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2586 | 400 | 0.4992 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3232 | 500 | 0.5574 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3878 | 600 | 0.5821 | 0.6178 | 0.2312 | 0.2312 | 0.4248 | 0.4248 | 0.4239 | 0.4239 | 0.5142 | 0.5142 | 0.6034 | 0.6034 | 0.6177 | 0.6177 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4525 | 700 | 0.5632 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5171 | 800 | 0.5786 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5818 | 900 | 0.5329 | 0.5743 | 0.2662 | 0.2662 | 0.4468 | 0.4468 | 0.4976 | 0.4976 | 0.5630 | 0.5630 | 0.6279 | 0.6279 | 0.6240 | 0.6240 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6464 | 1000 | 0.5409 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7111 | 1100 | 0.4995 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7757 | 1200 | 0.5269 | 0.5169 | 0.2838 | 0.2838 | 0.3874 | 0.3874 | 0.4738 | 0.4738 | 0.5892 | 0.5892 | 0.5798 | 0.5798 | 0.5962 | 0.5962 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8403 | 1300 | 0.5553 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9050 | 1400 | 0.45 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.9696** | **1500** | **0.4551** | **0.5188** | **0.2553** | **0.2553** | **0.3971** | **0.3971** | **0.4873** | **0.4873** | **0.5911** | **0.5911** | **0.5877** | **0.5877** | **0.6247** | **0.6247** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| -1 | -1 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 0.3566 | 0.6072 | 0.9081 | 0.5253 | 0.7911 | 0.6248 | 0.3345 | 0.6480 | 0.9494 | 0.4266 | 0.6675 | 0.7776 | 0.4780 | 0.6227 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.122 kWh
- **Carbon Emitted**: 0.047 kg of CO2
- **Hours Used**: 0.373 hours
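A minimal sketch of how such a measurement is typically taken with CodeCarbon (not the exact script used here):
```python
# Hedged sketch: wrap training in a CodeCarbon tracker to log energy and CO2.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training ...
emissions_kg = tracker.stop()  # returns the estimated kg of CO2 emitted
print(f"Carbon emitted: {emissions_kg:.3f} kg of CO2")
```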
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CSRLoss
```bibtex
@misc{wen2025matryoshkarevisitingsparsecoding,
title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
year={2025},
eprint={2503.01776},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.01776},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
huihui-ai/Huihui-Qwen3-14B-abliterated-v2
|
huihui-ai
| 2025-06-19T15:40:01Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"chat",
"abliterated",
"uncensored",
"conversational",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T15:29:37Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-14B
tags:
- chat
- abliterated
- uncensored
---
# huihui-ai/Huihui-Qwen3-14B-abliterated-v2
This is an uncensored version of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
This is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.
Ablation was performed using a new and faster method, which yields better results.
**Important Note** This version is an improvement over the previous release, [huihui-ai/Qwen3-14B-abliterated](https://huggingface.co/huihui-ai/Qwen3-14B-abliterated), and the ollama version has been updated accordingly. The candidate layer was changed to eliminate garbled output.
## ollama
You can use [huihui_ai/qwen3-abliterated:14b-v2](https://ollama.com/huihui_ai/qwen3-abliterated:14b-v2) directly. Toggle thinking mode with `/set think` and `/set nothink`.
```
ollama run huihui_ai/qwen3-abliterated:14b-v2
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
import random
import numpy as np
import time
from collections import Counter
cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = cpu_count // 2
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)
print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")
# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-14B-abliterated-v2"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_4 = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
llm_int8_enable_fp32_cpu_offload=True,
)
model = AutoModelForCausalLM.from_pretrained(
NEW_MODEL_ID,
device_map="auto",
trust_remote_code=True,
#quantization_config=quant_config_4,
torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
messages = []
nothink = False
same_seed = False
skip_prompt=True
skip_special_tokens=True
do_sample = True
def set_random_seed(seed=None):
"""Set random seed for reproducibility. If seed is None, use int(time.time())."""
if seed is None:
seed = int(time.time()) # Convert float to int
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # If using CUDA
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
return seed # Return seed for logging if needed
class CustomTextStreamer(TextStreamer):
def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
self.generated_text = ""
self.stop_flag = False
self.init_time = time.time() # Record initialization time
self.end_time = None # To store end time
self.first_token_time = None # To store first token generation time
self.token_count = 0 # To track total tokens
def on_finalized_text(self, text: str, stream_end: bool = False):
if self.first_token_time is None and text.strip(): # Set first token time on first non-empty text
self.first_token_time = time.time()
self.generated_text += text
# Count tokens in the generated text
tokens = self.tokenizer.encode(text, add_special_tokens=False)
self.token_count += len(tokens)
print(text, end="", flush=True)
if stream_end:
self.end_time = time.time() # Record end time when streaming ends
if self.stop_flag:
raise StopIteration
def stop_generation(self):
self.stop_flag = True
self.end_time = time.time() # Record end time when generation is stopped
def get_metrics(self):
"""Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second."""
if self.end_time is None:
self.end_time = time.time() # Set end time if not already set
total_time = self.end_time - self.init_time # Total time from init to end
tokens_per_second = self.token_count / total_time if total_time > 0 else 0
first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None
metrics = {
"init_time": self.init_time,
"first_token_time": self.first_token_time,
"first_token_latency": first_token_latency,
"end_time": self.end_time,
"total_time": total_time, # Total time in seconds
"total_tokens": self.token_count,
"tokens_per_second": tokens_per_second
}
return metrics
def generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, max_new_tokens):
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
enable_thinking = not nothink,
add_generation_prompt=True,
return_tensors="pt"
)
attention_mask = torch.ones_like(input_ids, dtype=torch.long)
tokens = input_ids.to(model.device)
attention_mask = attention_mask.to(model.device)
streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
def signal_handler(sig, frame):
streamer.stop_generation()
print("\n[Generation stopped by user with Ctrl+C]")
signal.signal(signal.SIGINT, signal_handler)
generate_kwargs = {}
if do_sample:
generate_kwargs = {
"do_sample": do_sample,
"max_length": max_new_tokens,
"temperature": 0.6,
"top_k": 20,
"top_p": 0.95,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 2
}
else:
generate_kwargs = {
"do_sample": do_sample,
"max_length": max_new_tokens,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 2
}
print("Response: ", end="", flush=True)
try:
generated_ids = model.generate(
tokens,
attention_mask=attention_mask,
#use_cache=False,
pad_token_id=tokenizer.pad_token_id,
streamer=streamer,
**generate_kwargs
)
del generated_ids
except StopIteration:
print("\n[Stopped by user]")
del input_ids, attention_mask
torch.cuda.empty_cache()
signal.signal(signal.SIGINT, signal.SIG_DFL)
return streamer.generated_text, streamer.stop_flag, streamer.get_metrics()
init_seed = set_random_seed()
while True:
if same_seed:
set_random_seed(init_seed)
else:
init_seed = set_random_seed()
print(f"\nnothink: {nothink}")
print(f"skip_prompt: {skip_prompt}")
print(f"skip_special_tokens: {skip_special_tokens}")
print(f"do_sample: {do_sample}")
print(f"same_seed: {same_seed}, {init_seed}\n")
user_input = input("User: ").strip()
if user_input.lower() == "/exit":
print("Exiting chat.")
break
if user_input.lower() == "/clear":
messages = []
print("Chat history cleared. Starting a new conversation.")
continue
if user_input.lower() == "/nothink":
nothink = not nothink
continue
if user_input.lower() == "/skip_prompt":
skip_prompt = not skip_prompt
continue
if user_input.lower() == "/skip_special_tokens":
skip_special_tokens = not skip_special_tokens
continue
if user_input.lower().startswith("/same_seed"):
parts = user_input.split()
if len(parts) == 1: # /same_seed (no number)
same_seed = not same_seed # Toggle switch
elif len(parts) == 2: # /same_seed <number>
try:
init_seed = int(parts[1]) # Extract and convert number to int
same_seed = True
except ValueError:
print("Error: Please provide a valid integer after /same_seed")
continue
if user_input.lower() == "/do_sample":
do_sample = not do_sample
continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
messages.append({"role": "user", "content": user_input})
response, stop_flag, metrics = generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, 320960)
print("\n\nMetrics:")
for key, value in metrics.items():
print(f" {key}: {value}")
print("", flush=True)
if stop_flag:
continue
messages.append({"role": "assistant", "content": response})
```
### Usage Warnings
- **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
- **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.
- **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
- **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.
### Donation
If you like it, please click 'like' and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
##### Your donation helps us continue development and improvement; even a cup of coffee makes a difference.
- bitcoin(BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
|
hospital-teresopolis-link/Original.Full.video.18.hospital.teresopolis.hospital.de.teresopolis.video.portal.Zacarias
|
hospital-teresopolis-link
| 2025-06-19T15:38:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T15:37:58Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
csikasote/mms-1b-all-nyagen-combined-42
|
csikasote
| 2025-06-19T15:37:22Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"nyagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-19T13:02:44Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- nyagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-nyagen-combined-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-nyagen-combined-42
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the NYAGEN - NYA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3879
- Wer: 0.3078
## Model description
More information needed
## Intended uses & limitations
More information needed
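Pending a fuller description, here is a minimal inference sketch (assumes the standard `transformers` ASR pipeline; the audio path is a placeholder):
```python
# Hedged usage sketch: transcribe audio with the fine-tuned MMS checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-nyagen-combined-42",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```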
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.807 | 0.5025 | 100 | 5.9307 | 0.9998 |
| 5.1806 | 1.0050 | 200 | 4.7590 | 1.0179 |
| 3.8969 | 1.5075 | 300 | 3.4468 | 0.9976 |
| 3.367 | 2.0101 | 400 | 3.2098 | 0.9938 |
| 3.2105 | 2.5126 | 500 | 3.1289 | 0.9933 |
| 3.1366 | 3.0151 | 600 | 3.1086 | 0.9927 |
| 3.0162 | 3.5176 | 700 | 2.9525 | 0.9948 |
| 2.9078 | 4.0201 | 800 | 1.5350 | 1.0042 |
| 0.4877 | 4.5226 | 900 | 0.5052 | 0.4215 |
| 0.3193 | 5.0251 | 1000 | 0.4687 | 0.3793 |
| 0.2953 | 5.5276 | 1100 | 0.4322 | 0.3547 |
| 0.2815 | 6.0302 | 1200 | 0.4283 | 0.3460 |
| 0.2729 | 6.5327 | 1300 | 0.4171 | 0.3327 |
| 0.2561 | 7.0352 | 1400 | 0.4085 | 0.3271 |
| 0.2543 | 7.5377 | 1500 | 0.4071 | 0.3290 |
| 0.2443 | 8.0402 | 1600 | 0.4039 | 0.3149 |
| 0.2402 | 8.5427 | 1700 | 0.4088 | 0.3173 |
| 0.2273 | 9.0452 | 1800 | 0.4048 | 0.3149 |
| 0.2299 | 9.5477 | 1900 | 0.3911 | 0.3063 |
| 0.2313 | 10.0503 | 2000 | 0.3879 | 0.3077 |
| 0.2203 | 10.5528 | 2100 | 0.3874 | 0.3033 |
| 0.2168 | 11.0553 | 2200 | 0.3837 | 0.2985 |
| 0.2167 | 11.5578 | 2300 | 0.3810 | 0.2979 |
| 0.211 | 12.0603 | 2400 | 0.3854 | 0.2952 |
| 0.2039 | 12.5628 | 2500 | 0.3803 | 0.2868 |
| 0.2152 | 13.0653 | 2600 | 0.3760 | 0.2926 |
| 0.2003 | 13.5678 | 2700 | 0.3732 | 0.2883 |
| 0.2025 | 14.0704 | 2800 | 0.3798 | 0.2878 |
| 0.2005 | 14.5729 | 2900 | 0.3796 | 0.2879 |
| 0.2032 | 15.0754 | 3000 | 0.3764 | 0.2817 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
MikeGreen2710/ner_cons_dims_final
|
MikeGreen2710
| 2025-06-19T15:32:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-19T15:32:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hasdal/21a58fba-d539-4969-960e-60eff2254792
|
hasdal
| 2025-06-19T15:28:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-19T15:14:36Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
altinkedi/xxtrgpt2v3s
|
altinkedi
| 2025-06-19T15:27:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T15:20:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sanchit42/qwen3-0.6B-instruct-29reports-lora256-slim
|
sanchit42
| 2025-06-19T15:23:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T15:22:14Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/gc1724-ACT_BBOX-bottle-y67xn
|
phospho-app
| 2025-06-19T15:23:13Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T15:21:10Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 349, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 52, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/root/src/helper.py", line 198, in __getitem__
frame = self.cache[episode_idx][video_key][row_idx]
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: '.DS_Store'
```
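The `KeyError` suggests a hidden macOS `.DS_Store` metadata file in the dataset folder was picked up as if it were an episode. A minimal, hypothetical guard when listing dataset files (the function and directory names are assumptions, not the pipeline's actual code) could look like:
```python
import os

def list_episode_files(dataset_dir: str) -> list[str]:
    # Skip hidden entries such as macOS ".DS_Store" so they never
    # become keys in the episode cache.
    return [name for name in sorted(os.listdir(dataset_dir)) if not name.startswith(".")]
```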
## Training parameters:
- **Dataset**: [gc1724/bottle](https://huggingface.co/datasets/gc1724/bottle)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
alfredcs/gemma-3-27b-firstaid-icd10-merged
|
alfredcs
| 2025-06-19T15:23:10Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gemma3",
"image-text-to-text",
"trl",
"grpo",
"GRPO",
"Reasoning-Course",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-19T05:13:36Z |
---
library_name: transformers
tags:
- trl
- grpo
- GRPO
- Reasoning-Course
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmbab3f3e0nsp1b1yngcaf3y6_cmc3i211r00exnx8d905xzmsr
|
BootesVoid
| 2025-06-19T15:22:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T15:22:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NATURAL
---
# Cmbab3F3E0Nsp1B1Yngcaf3Y6_Cmc3I211R00Exnx8D905Xzmsr
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NATURAL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
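# Requires a Replicate account; the client reads the REPLICATE_API_TOKEN environment variable.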
input = {
"prompt": "NATURAL",
"lora_weights": "https://huggingface.co/BootesVoid/cmbab3f3e0nsp1b1yngcaf3y6_cmc3i211r00exnx8d905xzmsr/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbab3f3e0nsp1b1yngcaf3y6_cmc3i211r00exnx8d905xzmsr', weight_name='lora.safetensors')
image = pipeline('NATURAL').images[0]
```
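The pipeline returns PIL images, so the result can be persisted with, e.g., `image.save('output.png')`.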
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbab3f3e0nsp1b1yngcaf3y6_cmc3i211r00exnx8d905xzmsr/discussions) to add images that show off what you’ve made with this LoRA.
|
Catality/3b-shitboxes
|
Catality
| 2025-06-19T15:21:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-19T15:13:05Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Catality
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
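A minimal loading sketch with 🤗 transformers (assumes a CUDA GPU plus the `bitsandbytes` and `accelerate` dependencies for the 4-bit weights):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Catality/3b-shitboxes"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```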
|
sungkwan2/my_awesome_food_model
|
sungkwan2
| 2025-06-19T15:20:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-19T15:07:11Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6158
- Accuracy: 0.899
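As a quick smoke test, the checkpoint can be queried with the 🤗 `pipeline` API (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sungkwan2/my_awesome_food_model")
print(classifier("food.jpg"))  # top predicted labels with scores
```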
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7715 | 1.0 | 63 | 2.5460 | 0.87 |
| 1.8698 | 2.0 | 126 | 1.7718 | 0.897 |
| 1.6144 | 3.0 | 189 | 1.6158 | 0.899 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MarkProMaster229/internet_dialog
|
MarkProMaster229
| 2025-06-19T15:20:21Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"ru",
"license:openrail",
"region:us"
] | null | 2025-06-19T15:04:10Z |
---
license: openrail
language:
- ru
---
# Russian language model trained on data from an anonymous imageboard
> ⚠️ WARNING: The model was trained on **unfiltered** and **unverified** data from a public anonymous forum (2ch.hk / Dvach).
> The content may include **profanity, toxic content, and NSFW material**, as well as **harsher forms of aggression, insults, and shock content**.
## 🧠 Overview
This model is an experiment in training language models on informal Russian speech.
It is based on the pretrained model [`sberbank-ai/rugpt2`](https://huggingface.co/ai-forever/rugpt3small_based_on_gpt2?text=%D0%9E%D0%B4%D0%BD%D0%B0%D0%B6%D0%B4%D1%8B) and further fine-tuned on data from the 2ch.hk forum.
The training set used (see the sketch below):
- Thread titles as input queries (questions),
- The first 1–3 replies in each thread as the response (the "bot" turns).
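A hypothetical sketch of how such (title, reply) pairs could be assembled; the field names are assumptions, not the author's actual preprocessing code:
```python
def build_pairs(threads):
    # threads: iterable of dicts with "title" and "replies" keys (assumed schema)
    pairs = []
    for thread in threads:
        for reply in thread["replies"][:3]:  # first 1-3 replies act as the "bot" turns
            pairs.append({"prompt": thread["title"], "response": reply})
    return pairs
```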
### ❗ IMPORTANT
The author performed **no manual moderation or filtering** of the content.
The content was collected automatically, "as is".
The author **bears no responsibility** for the content of generated texts or any harm that may result from their use.
The model is not intended for production use.
---
## 🎓 Project goal
The model is intended **exclusively for research and educational purposes**:
- Analyzing the structure of informal Russian-language dialogue,
- Studying toxicity in language models,
- Experimenting with small amounts of data.
---
## ⚠️ Content warning
The model may generate texts containing:
- Insults, discrimination, and profanity,
- Violence, threats, NSFW, and other shock content.
**Do not use this model** in products, user-facing chats, recommendation systems, educational institutions, or other sensitive settings.
Use of the model is **entirely at your own risk**.
The author **bears no responsibility** for any consequences arising from use of the model.
---
## 📜 License
The model and the training set are distributed under the **OpenRAIL-M** license.
OpenRAIL is a license designed for language models, aimed at ensuring **ethical, safe, and restricted use**.
- Use is permitted only under the conditions of responsible application.
- Commercial use is prohibited without separate permission.
- Users must attribute the original author and use the model in accordance with the license terms.
📄 More details: [https://www.bigcode-project.org/docs/pages/bigcode-openrail/](https://www.bigcode-project.org/docs/pages/bigcode-openrail/)
|
MJ92/AceGPT-v2-8B-Chat_finetuned_5000_fr
|
MJ92
| 2025-06-19T15:18:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T15:06:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-42-2025-06-19
|
morturr
| 2025-06-19T15:17:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T15:17:19Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
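Since this repo ships a PEFT adapter, a minimal loading sketch (access to the gated Llama-2 base weights is assumed) is:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-42-2025-06-19"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # loads the base model plus the adapter
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```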
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
jasonxubin/jielin-lora
|
jasonxubin
| 2025-06-19T15:17:31Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-19T14:38:18Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
LarryAIDraw/kamisato_ayakaPDXL_scarxzys
|
LarryAIDraw
| 2025-06-19T15:17:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-19T15:15:31Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1022127/pony-kamisato-ayaka-or-genshin-impact
|
LarryAIDraw/arknights_skadi_ponyXL
|
LarryAIDraw
| 2025-06-19T15:16:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-19T15:12:06Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/326557?modelVersionId=366029
|
rodrigomt/gemma-merge
|
rodrigomt
| 2025-06-19T15:11:21Z | 0 | 0 | null |
[
"safetensors",
"gemma3",
"merge",
"mergekit",
"lazymergekit",
"CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it",
"soob3123/amoral-gemma3-4B-v2",
"base_model:CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it",
"base_model:merge:CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it",
"base_model:soob3123/amoral-gemma3-4B-v2",
"base_model:merge:soob3123/amoral-gemma3-4B-v2",
"region:us"
] | null | 2025-06-19T02:55:19Z |
---
base_model:
- CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
- soob3123/amoral-gemma3-4B-v2
tags:
- merge
- mergekit
- lazymergekit
- CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
- soob3123/amoral-gemma3-4B-v2
---
# gemma-merge
gemma-merge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it](https://huggingface.co/CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it)
* [soob3123/amoral-gemma3-4B-v2](https://huggingface.co/soob3123/amoral-gemma3-4B-v2)
## 🧩 Configuration
```yaml
models:
- model: CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
parameters:
density: 0.5
weight: 0.5
- model: soob3123/amoral-gemma3-4B-v2
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: unsloth/gemma-3-4b-pt
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
tokenizer:
source: unsloth/gemma-3-4b-pt
```
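Roughly speaking, in this TIES merge `density` is the fraction of each model's parameter deltas kept after trimming, and `weight` scales each model's contribution before the deltas are combined.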
## 💻 Usage
```bash
pip install -qU transformers accelerate
```
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "rodrigomt/gemma-merge"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
raul111204/gpt-neo-125m-xsum-raul3-b
|
raul111204
| 2025-06-19T15:11:20Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:mia-llm/xsum-raw-MIA",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T14:23:26Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: EleutherAI/gpt-neo-125m
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- mia-llm/xsum-raw-MIA
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
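# Note: the inputs are moved to 'cuda' below, so a CUDA-capable GPU is required.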
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
BootesVoid/cmbyrkjtk04cprdqsuhkq1b61_cmc23uo8s0c1yrdqsimijbqb2
|
BootesVoid
| 2025-06-19T15:11:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T15:11:09Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TRACEY
---
# Cmbyrkjtk04Cprdqsuhkq1B61_Cmc23Uo8S0C1Yrdqsimijbqb2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TRACEY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TRACEY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbyrkjtk04cprdqsuhkq1b61_cmc23uo8s0c1yrdqsimijbqb2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbyrkjtk04cprdqsuhkq1b61_cmc23uo8s0c1yrdqsimijbqb2', weight_name='lora.safetensors')
image = pipeline('TRACEY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbyrkjtk04cprdqsuhkq1b61_cmc23uo8s0c1yrdqsimijbqb2/discussions) to add images that show off what you’ve made with this LoRA.
|
Richard9905/full-merged-bible-model
|
Richard9905
| 2025-06-19T15:11:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-19T15:07:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LarryAIDraw/miyakoPDXL_scarxzys
|
LarryAIDraw
| 2025-06-19T15:10:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-19T15:08:29Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/632492/pony-tsukiyuki-miyako-or-blue-archive
|
zeerakwyne/test8_doc-splitter-llama-3-2-3B-20-epoch_merged
|
zeerakwyne
| 2025-06-19T15:09:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T15:09:13Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** zeerakwyne
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LarryAIDraw/Nonoa_Miyamae_anime-44
|
LarryAIDraw
| 2025-06-19T15:05:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-19T15:01:29Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/704533/nonoa-miyamae-or-alya-sometimes-hides-her-feelings-in-russian-or
|
Mariogver/detr-finetuned-microglia_3
|
Mariogver
| 2025-06-19T15:04:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-06-19T15:04:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sanchit42/llama3.1-8B-instruct-29reports-lora256-slim
|
sanchit42
| 2025-06-19T14:59:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T14:56:01Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liuh6/whisper-tiny_to_Chinese_accent
|
liuh6
| 2025-06-19T14:59:17Z | 40 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:Chinese_english",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-08T01:44:44Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- Chinese_english
metrics:
- wer
model-index:
- name: Whisper tiny Chinese with pitch perturbation
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Chinese English
type: Chinese_english
args: 'config: default, split: test'
metrics:
- name: Wer
type: wer
value: 16.187450357426528
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny Chinese with pitch perturbation
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Chinese English dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3669
- Wer: 16.1875
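For reference, a minimal transcription sketch with the 🤗 `pipeline` API (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="liuh6/whisper-tiny_to_Chinese_accent")
print(asr("sample.wav")["text"])
```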
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2024 | 1.6667 | 500 | 0.3648 | 16.4257 |
| 0.0151 | 3.3333 | 1000 | 0.3584 | 16.7911 |
| 0.0035 | 5.0 | 1500 | 0.3669 | 16.1875 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Richard9905/lora-bible-model
|
Richard9905
| 2025-06-19T14:58:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:58:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmc0un2u0098srdqs7tomm6xx_cmc3f9kss006vnx8dflbcsdbb
|
BootesVoid
| 2025-06-19T14:58:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T14:58:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmc0Un2U0098Srdqs7Tomm6Xx_Cmc3F9Kss006Vnx8Dflbcsdbb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cmc0un2u0098srdqs7tomm6xx_cmc3f9kss006vnx8dflbcsdbb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc0un2u0098srdqs7tomm6xx_cmc3f9kss006vnx8dflbcsdbb', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
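As a quick illustration of the weighting and fusing options mentioned above, here is a minimal sketch (the `lora_scale=0.8` value is an arbitrary example, not a tuned setting):
```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc0un2u0098srdqs7tomm6xx_cmc3f9kss006vnx8dflbcsdbb', weight_name='lora.safetensors')

# Fuse the LoRA into the base weights at reduced strength for faster inference.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('SEXY').images[0]

# unfuse_lora() restores the original weights so you can swap adapters later.
pipeline.unfuse_lora()
```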
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc0un2u0098srdqs7tomm6xx_cmc3f9kss006vnx8dflbcsdbb/discussions) to add images that show off what you’ve made with this LoRA.
|
samtse123/finetune_model
|
samtse123
| 2025-06-19T14:57:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:55:09Z |
---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** samtse123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
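Since GGUF weights are included, the model can also be run locally with llama-cpp-python. A minimal sketch follows; the `filename` glob is a hypothetical pattern, so match it to the actual `.gguf` file in this repo:
```python
from llama_cpp import Llama

# Hypothetical filename pattern; point it at the actual .gguf file in the repo.
llm = Llama.from_pretrained(
    repo_id="samtse123/finetune_model",
    filename="*.gguf",
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```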
|
nicofarr/panns_ResNet22
|
nicofarr
| 2025-06-19T14:55:50Z | 0 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"ResNet22",
"audio",
"model_hub_mixin",
"panns",
"pytorch_model_hub_mixin",
"tagging",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T14:52:31Z |
---
library_name: pytorch
license: apache-2.0
tags:
- audio
- model_hub_mixin
- panns
- pytorch_model_hub_mixin
- tagging
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/qiuqiangkong/audioset_tagging_cnn
- Docs: https://github.com/qiuqiangkong/audioset_tagging_cnn
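A minimal loading sketch, assuming the `ResNet22` class from the linked codebase inherits `PyTorchModelHubMixin` and is importable (the import path below is an assumption):
```python
# Assumes models.py from https://github.com/qiuqiangkong/audioset_tagging_cnn is on
# your PYTHONPATH and that its ResNet22 class inherits PyTorchModelHubMixin.
from models import ResNet22

# The mixin adds from_pretrained() to the class; constructor arguments saved at
# export time are restored from the repo's config.
model = ResNet22.from_pretrained("nicofarr/panns_ResNet22")
model.eval()
```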
|
ik-ram28/BioMistral-CPT-SFT-7B
|
ik-ram28
| 2025-06-19T14:54:40Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"medical",
"conversational",
"fr",
"en",
"base_model:BioMistral/BioMistral-7B",
"base_model:finetune:BioMistral/BioMistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-08T23:22:52Z |
---
library_name: transformers
tags:
- medical
license: apache-2.0
language:
- fr
- en
base_model:
- ik-ram28/BioMistral-CPT-7B
- BioMistral/BioMistral-7B
---
## Model Description
BioMistral-CPT-SFT-7B is a French medical language model based on BioMistral-7B, adapted to the French medical domain through Continual Pre-Training (CPT) followed by Supervised Fine-Tuning (SFT).
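A minimal usage sketch with the standard transformers API (generation settings are illustrative, and the snippet assumes the tokenizer ships a Mistral-style chat template):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ik-ram28/BioMistral-CPT-SFT-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# French medical question; the chat template is assumed to follow the Mistral format.
messages = [{"role": "user", "content": "Quels sont les symptômes de l'hypertension ?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```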
## Model Details
- **Model Type**: Causal Language Model
- **Base Model**: BioMistral-7B
- **Language**: French (adapted from an English-language medical model)
- **Domain**: Medical/Healthcare
- **Parameters**: 7 billion
- **License**: Apache 2.0
- **Paper**: [Adaptation des connaissances médicales pour les grands modèles de langue : Stratégies et analyse comparative](https://github.com/ikram28/medllm-strategies)
## Training Details
### Continual Pre-Training (CPT)
- **Dataset**: NACHOS corpus (opeN crAwled frenCh Healthcare cOrpuS)
- **Size**: 7.4 GB of French medical texts
- **Word Count**: Over 1 billion words
- **Sources**: 24 French medical websites
- **Training Duration**: 2.8 epochs
- **Hardware**: 32 NVIDIA H100 80GB GPUs
- **Training Time**: 11 hours
- **Optimizer**: AdamW
- **Learning Rate**: 2e-5
- **Weight Decay**: 0.01
- **Batch Size**: 16 with gradient accumulation of 2
### Supervised Fine-Tuning (SFT)
- **Dataset**: 30K French medical question-answer pairs
- 10K native French medical questions
- 10K translated medical questions from English resources
- 10K generated questions from French medical texts
- **Method**: DoRA (Weight-Decomposed Low-Rank Adaptation); see the configuration sketch after this list
- **Training Duration**: 10 epochs
- **Hardware**: 1 NVIDIA H100 80GB GPU
- **Training Time**: 42 hours
- **Rank**: 16
- **Alpha**: 16
- **Learning Rate**: 2e-5
- **Batch Size**: 4
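A configuration sketch matching the hyperparameters above, assuming the PEFT implementation of DoRA (`use_dora=True`); the `target_modules` list is an assumption, since the card does not specify which projections were adapted:
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")

# DoRA is enabled through the standard LoRA config in PEFT (>= 0.9).
# target_modules is an assumption; the card does not list the adapted layers.
config = LoraConfig(
    r=16,
    lora_alpha=16,
    use_dora=True,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```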
## Computational Impact
- **Total Training Time**: 53 hours (11h CPT + 42h SFT)
- **Hardware**: 32 NVIDIA H100 GPUs (CPT) + 1 NVIDIA H100 GPU (SFT)
- **Carbon Emissions**: 10.11 kgCO2e (9.04 + 1.07)
## Ethical Considerations
- **Medical Accuracy**: This model is for research and educational purposes only. Performance limitations make it unsuitable for critical medical applications.
- **Bias**: May contain biases from both English and French medical literature.
## Citation
If you use this model, please cite:
```bibtex
```
## Contact
For questions about this model, please contact: [email protected]
|
hectordiazgomez/sirio-4b-translation
|
hectordiazgomez
| 2025-06-19T14:48:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:47:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alakxender/flan-t5-base-alpaca-dv5
|
alakxender
| 2025-06-19T14:47:02Z | 136 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"dhivehi",
"gpt",
"llm",
"thaana",
"text-gen",
"dv",
"dataset:alakxender/alpaca_dhivehi",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-05-31T08:18:13Z |
---
library_name: transformers
tags:
- dhivehi
- gpt
- llm
- thaana
- text-gen
license: mit
datasets:
- alakxender/alpaca_dhivehi
language:
- dv
metrics:
- rouge
base_model:
- google/flan-t5-base
---
# Alpaca Dhivehi Fine-Tuned Flan-T5
This repository contains a **fine-tuned Flan-T5** model trained on the **Alpaca Dhivehi dataset**, aimed at enabling Dhivehi-language instruction-following tasks.
***Note: The model can follow instructions and inputs to some extent, but it’s not strictly trained for perfect adherence. Outputs may be partially aligned but are not guaranteed to be fully accurate. Treat results as experimental.***
## Model Details
- **Base model**: `google/flan-t5-base`
- **Dataset**: Alpaca Dhivehi, translated from English to Dhivehi
- **Training epochs**: 5
- **Final evaluation**:
- `eval_loss`: 2.59
- `ROUGE-1`: 0.10
- `ROUGE-2`: 0.03
- `ROUGE-L`: 0.107
## Usage
To **run inference** using the fine-tuned model:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
MODEL_PATH = "alakxender/flan-t5-base-alpaca-dv5"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = T5Tokenizer.from_pretrained(MODEL_PATH)
model = T5ForConditionalGeneration.from_pretrained(MODEL_PATH).to(device)
def generate_response(instruction, input_text):
combined_input = f"{instruction.strip()} {input_text.strip()}" if input_text else instruction.strip()
inputs = tokenizer(combined_input, return_tensors="pt", truncation=True, max_length=256).to(device)
output_ids = model.generate(
**inputs,
max_new_tokens=256,
num_beams=8,
repetition_penalty=1.5,
no_repeat_ngram_size=3,
do_sample=True,
early_stopping=True,
temperature=0.1
)
decoded_output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
return decoded_output
# Example usage:
instruction = "ދީފައިވާ މައުޟޫޢާ ބެހޭގޮތުން ކުރު ޕެރެގްރާފެއް ލިޔެލާށެވެ."
input_text = "އިއާދަކުރަނިވި ހަކަތަ ބޭނުންކުރުމުގެ މުހިންމުކަން"
print(generate_response(instruction, input_text))
# Example output:
# އިއާދަކުރަނިވި ހަކަތަ ބޭނުންކުރުމުގެ މުހިންމު އެއް މައުޟޫއަކީ ސޯލާ، ވިންޑް، ހައިޑްރޯ، ޖިއޮތަރމަލް، އަދި ހައިޑްރޯއިލެކްޓްރިކް ޕަވަރ ފަދަ އިއާދަކުރަނިވި ހަކަތައިން ގްރީންހައުސް ގޭސްތައް ބޭރުވުން .....
```
## Evaluation Results
From the last evaluation:
```
{
'eval_loss': 2.591374158859253,
'eval_rouge1': 0.10920254665663279,
'eval_rouge2': 0.03587297080345582,
'eval_rougeL': 0.10796498746412672,
'eval_rougeLsum': 0.1083282268650986,
'eval_runtime': 1204.3847,
'eval_samples_per_second': 4.298,
'eval_steps_per_second': 2.149,
'epoch': 5.0
}
```
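Scores like these can be reproduced with the `evaluate` library (requires `pip install evaluate rouge_score`); a minimal sketch on placeholder strings:
```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder strings; substitute model outputs and gold answers.
predictions = ["generated Dhivehi answer"]
references = ["reference Dhivehi answer"]

# Returns rouge1 / rouge2 / rougeL / rougeLsum, matching the keys above.
print(rouge.compute(predictions=predictions, references=references))
```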
## Notes
- This fine-tuned model is experimental and intended for research on Dhivehi-language instruction-following tasks.
|
l0tr1k/photography-mistral-16bit-merged-new
|
l0tr1k
| 2025-06-19T14:37:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T14:33:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
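While the card is still a placeholder, a minimal sketch using the standard transformers text-generation pipeline should apply to this checkpoint (untested; settings are illustrative):
```python
from transformers import pipeline

# Minimal sketch; generation settings are illustrative, not tuned.
generator = pipeline("text-generation", model="l0tr1k/photography-mistral-16bit-merged-new", device_map="auto")
print(generator("Give me three tips for golden-hour portrait photography.", max_new_tokens=128)[0]["generated_text"])
```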
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vcabeli/Qwen3-8B-Open-R1-GRPO-spatial-dea
|
vcabeli
| 2025-06-19T14:30:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T14:06:23Z |
---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: Qwen3-8B-Open-R1-GRPO-spatial-dea
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen3-8B-Open-R1-GRPO-spatial-dea
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vcabeli/Qwen3-8B-Open-R1-GRPO-spatial-dea", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vincent-cabeli-owkin/huggingface/runs/zolvk2vf)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
deepkeep-ai/gemma-2-2b-pii-token-classifier
|
deepkeep-ai
| 2025-06-19T14:27:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-18T12:49:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
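While the card is still a placeholder, the repo's `token-classification` pipeline tag suggests a minimal sketch like the following (untested; the label set is unknown):
```python
from transformers import pipeline

# Minimal sketch based on the repo's token-classification pipeline tag; the
# PII label set emitted by this checkpoint is not documented in the card.
classifier = pipeline(
    "token-classification",
    model="deepkeep-ai/gemma-2-2b-pii-token-classifier",
    aggregation_strategy="simple",
)
print(classifier("John Smith lives at 221B Baker Street, London."))
```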
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wkang123/WellKang-v0.1.1.1
|
wkang123
| 2025-06-19T14:26:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:26:08Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wkang123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-2
|
tomaarsen
| 2025-06-19T14:25:19Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"csr",
"generated_from_trainer",
"dataset_size:99000",
"loss:CSRLoss",
"loss:SparseMultipleNegativesRankingLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2503.01776",
"arxiv:1705.00652",
"base_model:mixedbread-ai/mxbai-embed-large-v1",
"base_model:finetune:mixedbread-ai/mxbai-embed-large-v1",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-19T14:25:12Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- csr
- generated_from_trainer
- dataset_size:99000
- loss:CSRLoss
- loss:SparseMultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
continue to take somewhat differing stances on regional conflicts such the Yemeni
Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
which has fought against Saudi-backed forces, and the Syrian Civil War, where
the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
manufacturing industries include aluminium production, food processing, metal
fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
sector continues to dominate New Zealand's exports, despite accounting for 6.5%
of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
a single after a fourteen-year breakup. It was also the first song written by
bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
played live for the first time during their Hell Freezes Over tour in 1994. It
returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
Rock Tracks chart. The song was not played live by the Eagles after the "Hell
Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
who is considered by Christians to be one of the first Gentiles to convert to
the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 53.0273650168183
energy_consumed: 0.13642164181511365
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.41
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Sparse CSR model trained on Natural Questions
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 128
type: NanoMSMARCO_128
metrics:
- type: dot_accuracy@1
value: 0.38
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.66
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.72
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.82
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.38
name: Dot Precision@1
- type: dot_precision@3
value: 0.22
name: Dot Precision@3
- type: dot_precision@5
value: 0.14400000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08199999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.38
name: Dot Recall@1
- type: dot_recall@3
value: 0.66
name: Dot Recall@3
- type: dot_recall@5
value: 0.72
name: Dot Recall@5
- type: dot_recall@10
value: 0.82
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6074833126260415
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5392698412698412
name: Dot Mrr@10
- type: dot_map@100
value: 0.5478391044500884
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus 128
type: NanoNFCorpus_128
metrics:
- type: dot_accuracy@1
value: 0.44
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.54
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.64
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.68
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.44
name: Dot Precision@1
- type: dot_precision@3
value: 0.3133333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.28
name: Dot Precision@5
- type: dot_precision@10
value: 0.24600000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.045132854073603
name: Dot Recall@1
- type: dot_recall@3
value: 0.06751477851868476
name: Dot Recall@3
- type: dot_recall@5
value: 0.08765169300408888
name: Dot Recall@5
- type: dot_recall@10
value: 0.12035202437952344
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3037747903284991
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5081904761904761
name: Dot Mrr@10
- type: dot_map@100
value: 0.13867493157888547
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ 128
type: NanoNQ_128
metrics:
- type: dot_accuracy@1
value: 0.48
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.66
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.48
name: Dot Precision@1
- type: dot_precision@3
value: 0.22666666666666668
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08999999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.45
name: Dot Recall@1
- type: dot_recall@3
value: 0.62
name: Dot Recall@3
- type: dot_recall@5
value: 0.67
name: Dot Recall@5
- type: dot_recall@10
value: 0.81
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6337677207897237
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5932936507936507
name: Dot Mrr@10
- type: dot_map@100
value: 0.5761859932841973
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 128
type: NanoBEIR_mean_128
metrics:
- type: dot_accuracy@1
value: 0.43333333333333335
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6200000000000001
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6866666666666665
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7799999999999999
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.43333333333333335
name: Dot Precision@1
- type: dot_precision@3
value: 0.25333333333333335
name: Dot Precision@3
- type: dot_precision@5
value: 0.19066666666666668
name: Dot Precision@5
- type: dot_precision@10
value: 0.13933333333333334
name: Dot Precision@10
- type: dot_recall@1
value: 0.2917109513578677
name: Dot Recall@1
- type: dot_recall@3
value: 0.44917159283956165
name: Dot Recall@3
- type: dot_recall@5
value: 0.49255056433469635
name: Dot Recall@5
- type: dot_recall@10
value: 0.5834506747931745
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5150086079147548
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5469179894179893
name: Dot Mrr@10
- type: dot_map@100
value: 0.42090000977105707
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 256
type: NanoMSMARCO_256
metrics:
- type: dot_accuracy@1
value: 0.44
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.74
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.44
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.44
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.74
name: Dot Recall@5
- type: dot_recall@10
value: 0.84
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6405150998246686
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5768809523809523
name: Dot Mrr@10
- type: dot_map@100
value: 0.5851061967133396
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus 256
type: NanoNFCorpus_256
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.58
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.62
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.37333333333333324
name: Dot Precision@3
- type: dot_precision@5
value: 0.324
name: Dot Precision@5
- type: dot_precision@10
value: 0.248
name: Dot Precision@10
- type: dot_recall@1
value: 0.045123947439696374
name: Dot Recall@1
- type: dot_recall@3
value: 0.08083248635236362
name: Dot Recall@3
- type: dot_recall@5
value: 0.0993952531376598
name: Dot Recall@5
- type: dot_recall@10
value: 0.1259275313458498
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3181127342430942
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5041666666666667
name: Dot Mrr@10
- type: dot_map@100
value: 0.15847418838222901
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ 256
type: NanoNQ_256
metrics:
- type: dot_accuracy@1
value: 0.54
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.54
name: Dot Precision@1
- type: dot_precision@3
value: 0.24
name: Dot Precision@3
- type: dot_precision@5
value: 0.16799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.092
name: Dot Precision@10
- type: dot_recall@1
value: 0.51
name: Dot Recall@1
- type: dot_recall@3
value: 0.66
name: Dot Recall@3
- type: dot_recall@5
value: 0.75
name: Dot Recall@5
- type: dot_recall@10
value: 0.81
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6642484604451891
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6294126984126983
name: Dot Mrr@10
- type: dot_map@100
value: 0.6162769242153361
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean 256
type: NanoBEIR_mean_256
metrics:
- type: dot_accuracy@1
value: 0.4666666666666666
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7133333333333333
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7666666666666666
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4666666666666666
name: Dot Precision@1
- type: dot_precision@3
value: 0.2755555555555555
name: Dot Precision@3
- type: dot_precision@5
value: 0.21333333333333335
name: Dot Precision@5
- type: dot_precision@10
value: 0.1413333333333333
name: Dot Precision@10
- type: dot_recall@1
value: 0.3317079824798988
name: Dot Recall@1
- type: dot_recall@3
value: 0.46027749545078783
name: Dot Recall@3
- type: dot_recall@5
value: 0.5297984177125533
name: Dot Recall@5
- type: dot_recall@10
value: 0.5919758437819499
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5409587648376507
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.570153439153439
name: Dot Mrr@10
- type: dot_map@100
value: 0.4532857697703016
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: dot_accuracy@1
value: 0.28
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.52
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.8
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.28
name: Dot Precision@1
- type: dot_precision@3
value: 0.18666666666666668
name: Dot Precision@3
- type: dot_precision@5
value: 0.16799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.10799999999999997
name: Dot Precision@10
- type: dot_recall@1
value: 0.12166666666666665
name: Dot Recall@1
- type: dot_recall@3
value: 0.23233333333333334
name: Dot Recall@3
- type: dot_recall@5
value: 0.348
name: Dot Recall@5
- type: dot_recall@10
value: 0.42633333333333334
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.33235923006734097
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.43644444444444447
name: Dot Mrr@10
- type: dot_map@100
value: 0.24903211945618525
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: dot_accuracy@1
value: 0.8
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.9
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.92
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.8
name: Dot Precision@1
- type: dot_precision@3
value: 0.6466666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.56
name: Dot Precision@5
- type: dot_precision@10
value: 0.474
name: Dot Precision@10
- type: dot_recall@1
value: 0.09128542236179474
name: Dot Recall@1
- type: dot_recall@3
value: 0.17409405829521904
name: Dot Recall@3
- type: dot_recall@5
value: 0.22516141018064886
name: Dot Recall@5
- type: dot_recall@10
value: 0.321390285824061
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.600179050204524
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8425
name: Dot Mrr@10
- type: dot_map@100
value: 0.45264984932006563
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: dot_accuracy@1
value: 0.84
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.92
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.96
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.84
name: Dot Precision@1
- type: dot_precision@3
value: 0.32
name: Dot Precision@3
- type: dot_precision@5
value: 0.19999999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.09999999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.7866666666666667
name: Dot Recall@1
- type: dot_recall@3
value: 0.8866666666666667
name: Dot Recall@3
- type: dot_recall@5
value: 0.9266666666666667
name: Dot Recall@5
- type: dot_recall@10
value: 0.9266666666666667
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.8816129048397259
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.89
name: Dot Mrr@10
- type: dot_map@100
value: 0.8589881484317317
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: dot_accuracy@1
value: 0.48
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.64
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.48
name: Dot Precision@1
- type: dot_precision@3
value: 0.3066666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.22399999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.13599999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.2592460317460317
name: Dot Recall@1
- type: dot_recall@3
value: 0.39734920634920634
name: Dot Recall@3
- type: dot_recall@5
value: 0.4497857142857143
name: Dot Recall@5
- type: dot_recall@10
value: 0.5795634920634921
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.48812055653800884
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5517460317460319
name: Dot Mrr@10
- type: dot_map@100
value: 0.42554170336694114
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: dot_accuracy@1
value: 0.84
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.96
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.98
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.84
name: Dot Precision@1
- type: dot_precision@3
value: 0.5133333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.32799999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.16999999999999996
name: Dot Precision@10
- type: dot_recall@1
value: 0.42
name: Dot Recall@1
- type: dot_recall@3
value: 0.77
name: Dot Recall@3
- type: dot_recall@5
value: 0.82
name: Dot Recall@5
- type: dot_recall@10
value: 0.85
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.8106522538764799
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8966666666666666
name: Dot Mrr@10
- type: dot_map@100
value: 0.7565706035126855
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.44
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.62
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.74
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.44
name: Dot Precision@1
- type: dot_precision@3
value: 0.20666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.44
name: Dot Recall@1
- type: dot_recall@3
value: 0.62
name: Dot Recall@3
- type: dot_recall@5
value: 0.74
name: Dot Recall@5
- type: dot_recall@10
value: 0.84
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6329477813439243
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5677777777777777
name: Dot Mrr@10
- type: dot_map@100
value: 0.5762304873870092
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.56
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.64
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.66
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.37999999999999995
name: Dot Precision@3
- type: dot_precision@5
value: 0.34800000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.258
name: Dot Precision@10
- type: dot_recall@1
value: 0.04486258380333274
name: Dot Recall@1
- type: dot_recall@3
value: 0.08768477299713343
name: Dot Recall@3
- type: dot_recall@5
value: 0.10844641112515632
name: Dot Recall@5
- type: dot_recall@10
value: 0.135531563356284
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3285187113745097
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5009999999999999
name: Dot Mrr@10
- type: dot_map@100
value: 0.16174125549238802
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.58
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.82
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.58
name: Dot Precision@1
- type: dot_precision@3
value: 0.24
name: Dot Precision@3
- type: dot_precision@5
value: 0.16799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.08999999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.55
name: Dot Recall@1
- type: dot_recall@3
value: 0.66
name: Dot Recall@3
- type: dot_recall@5
value: 0.75
name: Dot Recall@5
- type: dot_recall@10
value: 0.79
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.677342414343143
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6521666666666666
name: Dot Mrr@10
- type: dot_map@100
value: 0.6420660106369513
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: dot_accuracy@1
value: 0.9
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 1.0
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 1.0
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9
name: Dot Precision@1
- type: dot_precision@3
value: 0.4133333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.27199999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.13799999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.7773333333333333
name: Dot Recall@1
- type: dot_recall@3
value: 0.9620000000000001
name: Dot Recall@3
- type: dot_recall@5
value: 0.9933333333333334
name: Dot Recall@5
- type: dot_recall@10
value: 0.9966666666666666
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9509657098958008
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9466666666666665
name: Dot Mrr@10
- type: dot_map@100
value: 0.9297051282051282
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: dot_accuracy@1
value: 0.42
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.72
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.82
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.42
name: Dot Precision@1
- type: dot_precision@3
value: 0.35333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.3
name: Dot Precision@5
- type: dot_precision@10
value: 0.20800000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.09066666666666666
name: Dot Recall@1
- type: dot_recall@3
value: 0.22166666666666665
name: Dot Recall@3
- type: dot_recall@5
value: 0.3096666666666667
name: Dot Recall@5
- type: dot_recall@10
value: 0.42566666666666664
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4022717287490821
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5887222222222221
name: Dot Mrr@10
- type: dot_map@100
value: 0.32075091248131626
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: dot_accuracy@1
value: 0.38
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.92
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.38
name: Dot Precision@1
- type: dot_precision@3
value: 0.23333333333333336
name: Dot Precision@3
- type: dot_precision@5
value: 0.16
name: Dot Precision@5
- type: dot_precision@10
value: 0.092
name: Dot Precision@10
- type: dot_recall@1
value: 0.38
name: Dot Recall@1
- type: dot_recall@3
value: 0.7
name: Dot Recall@3
- type: dot_recall@5
value: 0.8
name: Dot Recall@5
- type: dot_recall@10
value: 0.92
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6550827948648061
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5706349206349206
name: Dot Mrr@10
- type: dot_map@100
value: 0.5760927960927961
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: dot_accuracy@1
value: 0.62
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.72
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.76
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.62
name: Dot Precision@1
- type: dot_precision@3
value: 0.26666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.17199999999999996
name: Dot Precision@5
- type: dot_precision@10
value: 0.09599999999999997
name: Dot Precision@10
- type: dot_recall@1
value: 0.595
name: Dot Recall@1
- type: dot_recall@3
value: 0.705
name: Dot Recall@3
- type: dot_recall@5
value: 0.755
name: Dot Recall@5
- type: dot_recall@10
value: 0.84
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7193800580696723
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6823888888888889
name: Dot Mrr@10
- type: dot_map@100
value: 0.6850911930363545
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: dot_accuracy@1
value: 0.4897959183673469
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.8367346938775511
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.9591836734693877
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9795918367346939
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4897959183673469
name: Dot Precision@1
- type: dot_precision@3
value: 0.5170068027210885
name: Dot Precision@3
- type: dot_precision@5
value: 0.5346938775510204
name: Dot Precision@5
- type: dot_precision@10
value: 0.4346938775510204
name: Dot Precision@10
- type: dot_recall@1
value: 0.03422245985964837
name: Dot Recall@1
- type: dot_recall@3
value: 0.10897367065265
name: Dot Recall@3
- type: dot_recall@5
value: 0.18115391425134045
name: Dot Recall@5
- type: dot_recall@10
value: 0.2884686031356881
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.47678328743473813
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6784580498866212
name: Dot Mrr@10
- type: dot_map@100
value: 0.3590479959667369
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.576138147566719
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7505180533751962
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.821475667189953
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.8722762951334379
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.576138147566719
name: Dot Precision@1
- type: dot_precision@3
value: 0.3525902668759811
name: Dot Precision@3
- type: dot_precision@5
value: 0.27559183673469384
name: Dot Precision@5
- type: dot_precision@10
value: 0.18374568288854
name: Dot Precision@10
- type: dot_recall@1
value: 0.35314998700801087
name: Dot Recall@1
- type: dot_recall@3
value: 0.5019821826892981
name: Dot Recall@3
- type: dot_recall@5
value: 0.5697857012699635
name: Dot Recall@5
- type: dot_recall@10
value: 0.6415605598240661
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6120166524309044
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6773209488923774
name: Dot Mrr@10
- type: dot_map@100
value: 0.5379621694912531
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
---
# Sparse CSR model trained on Natural Questions
This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-2")
# Run inference
queries = [
"who is cornelius in the book of acts",
]
documents = [
'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
"Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 4096] [3, 4096]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[118.6570, 32.2072, 21.3971]])
```
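As a quick sanity check, you can confirm the sparsity statistics reported below directly from these embeddings. This is a minimal sketch continuing the snippet above; it assumes the embeddings come back as PyTorch tensors (densify first if they are returned in a sparse layout):

```python
import torch

# Continue from the snippet above: at most 256 of the 4096 dimensions are active.
emb = query_embeddings if query_embeddings.layout == torch.strided else query_embeddings.to_dense()
active = (emb != 0).sum(dim=1)
print(active)                               # e.g. tensor([256])
print(1.0 - active.float() / emb.shape[1])  # sparsity ratio: 1 - 256/4096 = 0.9375
```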
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO_128`, `NanoNFCorpus_128` and `NanoNQ_128`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 128
}
```
| Metric | NanoMSMARCO_128 | NanoNFCorpus_128 | NanoNQ_128 |
|:----------------------|:----------------|:-----------------|:-----------|
| dot_accuracy@1 | 0.38 | 0.44 | 0.48 |
| dot_accuracy@3 | 0.66 | 0.54 | 0.66 |
| dot_accuracy@5 | 0.72 | 0.64 | 0.7 |
| dot_accuracy@10 | 0.82 | 0.68 | 0.84 |
| dot_precision@1 | 0.38 | 0.44 | 0.48 |
| dot_precision@3 | 0.22 | 0.3133 | 0.2267 |
| dot_precision@5 | 0.144 | 0.28 | 0.148 |
| dot_precision@10 | 0.082 | 0.246 | 0.09 |
| dot_recall@1 | 0.38 | 0.0451 | 0.45 |
| dot_recall@3 | 0.66 | 0.0675 | 0.62 |
| dot_recall@5 | 0.72 | 0.0877 | 0.67 |
| dot_recall@10 | 0.82 | 0.1204 | 0.81 |
| **dot_ndcg@10** | **0.6075** | **0.3038** | **0.6338** |
| dot_mrr@10 | 0.5393 | 0.5082 | 0.5933 |
| dot_map@100 | 0.5478 | 0.1387 | 0.5762 |
| query_active_dims | 128.0 | 128.0 | 128.0 |
| query_sparsity_ratio | 0.9688 | 0.9688 | 0.9688 |
| corpus_active_dims | 128.0 | 128.0 | 128.0 |
| corpus_sparsity_ratio | 0.9688 | 0.9688 | 0.9688 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_128`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"max_active_dims": 128
}
```
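For reference, this evaluator can be reconstructed roughly as sketched below; the argument names mirror the logged parameters above, but check the linked documentation for the full signature:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-2")
evaluator = SparseNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    max_active_dims=128,
)
results = evaluator(model)
```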
| Metric | Value |
|:----------------------|:----------|
| dot_accuracy@1 | 0.4333 |
| dot_accuracy@3 | 0.62 |
| dot_accuracy@5 | 0.6867 |
| dot_accuracy@10 | 0.78 |
| dot_precision@1 | 0.4333 |
| dot_precision@3 | 0.2533 |
| dot_precision@5 | 0.1907 |
| dot_precision@10 | 0.1393 |
| dot_recall@1 | 0.2917 |
| dot_recall@3 | 0.4492 |
| dot_recall@5 | 0.4926 |
| dot_recall@10 | 0.5835 |
| **dot_ndcg@10** | **0.515** |
| dot_mrr@10 | 0.5469 |
| dot_map@100 | 0.4209 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO_256`, `NanoNFCorpus_256` and `NanoNQ_256`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 256
}
```
| Metric | NanoMSMARCO_256 | NanoNFCorpus_256 | NanoNQ_256 |
|:----------------------|:----------------|:-----------------|:-----------|
| dot_accuracy@1 | 0.44 | 0.42 | 0.54 |
| dot_accuracy@3 | 0.64 | 0.58 | 0.7 |
| dot_accuracy@5 | 0.74 | 0.6 | 0.8 |
| dot_accuracy@10 | 0.84 | 0.62 | 0.84 |
| dot_precision@1 | 0.44 | 0.42 | 0.54 |
| dot_precision@3 | 0.2133 | 0.3733 | 0.24 |
| dot_precision@5 | 0.148 | 0.324 | 0.168 |
| dot_precision@10 | 0.084 | 0.248 | 0.092 |
| dot_recall@1 | 0.44 | 0.0451 | 0.51 |
| dot_recall@3 | 0.64 | 0.0808 | 0.66 |
| dot_recall@5 | 0.74 | 0.0994 | 0.75 |
| dot_recall@10 | 0.84 | 0.1259 | 0.81 |
| **dot_ndcg@10** | **0.6405** | **0.3181** | **0.6642** |
| dot_mrr@10 | 0.5769 | 0.5042 | 0.6294 |
| dot_map@100 | 0.5851 | 0.1585 | 0.6163 |
| query_active_dims | 256.0 | 256.0 | 256.0 |
| query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 |
| corpus_active_dims | 256.0 | 256.0 | 256.0 |
| corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean_256`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"max_active_dims": 256
}
```
| Metric | Value |
|:----------------------|:----------|
| dot_accuracy@1 | 0.4667 |
| dot_accuracy@3 | 0.64 |
| dot_accuracy@5 | 0.7133 |
| dot_accuracy@10 | 0.7667 |
| dot_precision@1 | 0.4667 |
| dot_precision@3 | 0.2756 |
| dot_precision@5 | 0.2133 |
| dot_precision@10 | 0.1413 |
| dot_recall@1 | 0.3317 |
| dot_recall@3 | 0.4603 |
| dot_recall@5 | 0.5298 |
| dot_recall@10 | 0.592 |
| **dot_ndcg@10** | **0.541** |
| dot_mrr@10 | 0.5702 |
| dot_map@100 | 0.4533 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
#### Sparse Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:----------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| dot_accuracy@1 | 0.28 | 0.8 | 0.84 | 0.48 | 0.84 | 0.44 | 0.42 | 0.58 | 0.9 | 0.42 | 0.38 | 0.62 | 0.4898 |
| dot_accuracy@3 | 0.52 | 0.9 | 0.92 | 0.6 | 0.96 | 0.62 | 0.56 | 0.7 | 1.0 | 0.72 | 0.7 | 0.72 | 0.8367 |
| dot_accuracy@5 | 0.7 | 0.9 | 0.96 | 0.64 | 0.96 | 0.74 | 0.64 | 0.8 | 1.0 | 0.82 | 0.8 | 0.76 | 0.9592 |
| dot_accuracy@10 | 0.8 | 0.92 | 0.96 | 0.74 | 0.98 | 0.84 | 0.66 | 0.82 | 1.0 | 0.88 | 0.92 | 0.84 | 0.9796 |
| dot_precision@1 | 0.28 | 0.8 | 0.84 | 0.48 | 0.84 | 0.44 | 0.42 | 0.58 | 0.9 | 0.42 | 0.38 | 0.62 | 0.4898 |
| dot_precision@3 | 0.1867 | 0.6467 | 0.32 | 0.3067 | 0.5133 | 0.2067 | 0.38 | 0.24 | 0.4133 | 0.3533 | 0.2333 | 0.2667 | 0.517 |
| dot_precision@5 | 0.168 | 0.56 | 0.2 | 0.224 | 0.328 | 0.148 | 0.348 | 0.168 | 0.272 | 0.3 | 0.16 | 0.172 | 0.5347 |
| dot_precision@10 | 0.108 | 0.474 | 0.1 | 0.136 | 0.17 | 0.084 | 0.258 | 0.09 | 0.138 | 0.208 | 0.092 | 0.096 | 0.4347 |
| dot_recall@1 | 0.1217 | 0.0913 | 0.7867 | 0.2592 | 0.42 | 0.44 | 0.0449 | 0.55 | 0.7773 | 0.0907 | 0.38 | 0.595 | 0.0342 |
| dot_recall@3 | 0.2323 | 0.1741 | 0.8867 | 0.3973 | 0.77 | 0.62 | 0.0877 | 0.66 | 0.962 | 0.2217 | 0.7 | 0.705 | 0.109 |
| dot_recall@5 | 0.348 | 0.2252 | 0.9267 | 0.4498 | 0.82 | 0.74 | 0.1084 | 0.75 | 0.9933 | 0.3097 | 0.8 | 0.755 | 0.1812 |
| dot_recall@10 | 0.4263 | 0.3214 | 0.9267 | 0.5796 | 0.85 | 0.84 | 0.1355 | 0.79 | 0.9967 | 0.4257 | 0.92 | 0.84 | 0.2885 |
| **dot_ndcg@10** | **0.3324** | **0.6002** | **0.8816** | **0.4881** | **0.8107** | **0.6329** | **0.3285** | **0.6773** | **0.951** | **0.4023** | **0.6551** | **0.7194** | **0.4768** |
| dot_mrr@10 | 0.4364 | 0.8425 | 0.89 | 0.5517 | 0.8967 | 0.5678 | 0.501 | 0.6522 | 0.9467 | 0.5887 | 0.5706 | 0.6824 | 0.6785 |
| dot_map@100 | 0.249 | 0.4526 | 0.859 | 0.4255 | 0.7566 | 0.5762 | 0.1617 | 0.6421 | 0.9297 | 0.3208 | 0.5761 | 0.6851 | 0.359 |
| query_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 |
| query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 |
| corpus_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 |
| corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"climatefever",
"dbpedia",
"fever",
"fiqa2018",
"hotpotqa",
"msmarco",
"nfcorpus",
"nq",
"quoraretrieval",
"scidocs",
"arguana",
"scifact",
"touche2020"
]
}
```
| Metric | Value |
|:----------------------|:----------|
| dot_accuracy@1 | 0.5761 |
| dot_accuracy@3 | 0.7505 |
| dot_accuracy@5 | 0.8215 |
| dot_accuracy@10 | 0.8723 |
| dot_precision@1 | 0.5761 |
| dot_precision@3 | 0.3526 |
| dot_precision@5 | 0.2756 |
| dot_precision@10 | 0.1837 |
| dot_recall@1 | 0.3531 |
| dot_recall@3 | 0.502 |
| dot_recall@5 | 0.5698 |
| dot_recall@10 | 0.6416 |
| **dot_ndcg@10** | **0.612** |
| dot_mrr@10 | 0.6773 |
| dot_map@100 | 0.538 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 1.0,
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
}
```
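A rough reconstruction of this loss setup is sketched below; `beta` and `gamma` follow the logged parameters, while the exact constructor signature should be checked against the linked CSRLoss documentation:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import CSRLoss

model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-2")
# CSRLoss wraps a ranking loss; the logged run used SparseMultipleNegativesRankingLoss.
loss = CSRLoss(model=model, beta=0.1, gamma=1.0)
```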
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 1.0,
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 4e-05
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_128_dot_ndcg@10 | NanoNFCorpus_128_dot_ndcg@10 | NanoNQ_128_dot_ndcg@10 | NanoBEIR_mean_128_dot_ndcg@10 | NanoMSMARCO_256_dot_ndcg@10 | NanoNFCorpus_256_dot_ndcg@10 | NanoNQ_256_dot_ndcg@10 | NanoBEIR_mean_256_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:---------------------------:|:----------------------------:|:----------------------:|:-----------------------------:|:---------------------------:|:----------------------------:|:----------------------:|:-----------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:-----------------------:|:------------------------:|:------------------:|:------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:|:-------------------------:|
| 0.0646 | 100 | 0.3565 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1293 | 200 | 0.3568 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1939 | 300 | 0.3545 | 0.3458 | 0.6322 | 0.2796 | 0.5893 | 0.5004 | 0.6232 | 0.3253 | 0.6548 | 0.5345 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2586 | 400 | 0.3393 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3232 | 500 | 0.3484 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3878 | 600 | 0.3567 | 0.3452 | 0.6245 | 0.3038 | 0.5719 | 0.5000 | 0.6385 | 0.3375 | 0.6496 | 0.5419 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4525 | 700 | 0.3471 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5171 | 800 | 0.3582 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5818 | 900 | 0.3758 | 0.3417 | 0.5849 | 0.3074 | 0.5866 | 0.4929 | 0.6147 | 0.3310 | 0.6729 | 0.5395 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6464 | 1000 | 0.3515 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7111 | 1100 | 0.3287 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.7757** | **1200** | **0.3486** | **0.3314** | **0.5937** | **0.2998** | **0.6317** | **0.5084** | **0.6309** | **0.3303** | **0.6773** | **0.5462** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| 0.8403 | 1300 | 0.3527 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9050 | 1400 | 0.3161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9696 | 1500 | 0.3279 | 0.3244 | 0.6075 | 0.3038 | 0.6338 | 0.5150 | 0.6405 | 0.3181 | 0.6642 | 0.5410 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | - | - | - | - | - | - | - | - | - | 0.3324 | 0.6002 | 0.8816 | 0.4881 | 0.8107 | 0.6329 | 0.3285 | 0.6773 | 0.9510 | 0.4023 | 0.6551 | 0.7194 | 0.4768 | 0.6120 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.136 kWh
- **Carbon Emitted**: 0.053 kg of CO2
- **Hours Used**: 0.41 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CSRLoss
```bibtex
@misc{wen2025matryoshkarevisitingsparsecoding,
title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
year={2025},
eprint={2503.01776},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.01776},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| mpasila/Llama-Poro-2-8B-Instruct-Q5_K_S-GGUF | mpasila | 2025-06-19T14:23:03Z | 0 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "fi", "en", "dataset:LumiOpen/poro2-instruction-collection", "dataset:nvidia/HelpSteer3", "base_model:LumiOpen/Llama-Poro-2-8B-Instruct", "base_model:quantized:LumiOpen/Llama-Poro-2-8B-Instruct", "license:llama3.3", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-06-19T14:22:39Z |
---
datasets:
- LumiOpen/poro2-instruction-collection
- nvidia/HelpSteer3
language:
- fi
- en
license: llama3.3
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
base_model: LumiOpen/Llama-Poro-2-8B-Instruct
---
# mpasila/Llama-Poro-2-8B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`LumiOpen/Llama-Poro-2-8B-Instruct`](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mpasila/Llama-Poro-2-8B-Instruct-Q5_K_S-GGUF --hf-file llama-poro-2-8b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mpasila/Llama-Poro-2-8B-Instruct-Q5_K_S-GGUF --hf-file llama-poro-2-8b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mpasila/Llama-Poro-2-8B-Instruct-Q5_K_S-GGUF --hf-file llama-poro-2-8b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mpasila/Llama-Poro-2-8B-Instruct-Q5_K_S-GGUF --hf-file llama-poro-2-8b-instruct-q5_k_s.gguf -c 2048
```
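Alternatively, a minimal sketch with the `llama-cpp-python` bindings (assumes `pip install llama-cpp-python huggingface-hub`; generation settings are illustrative):

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first use, then loads it.
llm = Llama.from_pretrained(
    repo_id="mpasila/Llama-Poro-2-8B-Instruct-Q5_K_S-GGUF",
    filename="llama-poro-2-8b-instruct-q5_k_s.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```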
| MikeGreen2710/ner_cons_area_final | MikeGreen2710 | 2025-06-19T14:22:07Z | 0 | 0 | transformers | ["transformers", "safetensors", "roberta", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-06-19T14:21:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
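Pending author-provided instructions, here is a minimal sketch based on this card's tags (a RoBERTa token-classification checkpoint loadable with 🤗 Transformers); the example input is a placeholder, as the label set and input domain are undocumented:

```python
from transformers import pipeline

# Hypothetical usage; verify the labels this checkpoint actually predicts.
ner = pipeline("token-classification", model="MikeGreen2710/ner_cons_area_final")
print(ner("Your example sentence here."))
```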
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF | Rif010 | 2025-06-19T14:19:35Z | 0 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:Rif010/sealion-burmese-fine-tuned-merged-v1", "base_model:quantized:Rif010/sealion-burmese-fine-tuned-merged-v1", "endpoints_compatible", "region:us"] | null | 2025-06-19T14:19:11Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Rif010/sealion-burmese-fine-tuned-merged-v1
---
# Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`Rif010/sealion-burmese-fine-tuned-merged-v1`](https://huggingface.co/Rif010/sealion-burmese-fine-tuned-merged-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Rif010/sealion-burmese-fine-tuned-merged-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -c 2048
```
| ik-ram28/MedMistral-CPT-SFT-7B | ik-ram28 | 2025-06-19T14:19:33Z | 19 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "medical", "conversational", "fr", "en", "base_model:ik-ram28/MedMistral-CPT-7B", "base_model:finetune:ik-ram28/MedMistral-CPT-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-18T14:42:40Z |
---
library_name: transformers
tags:
- medical
license: apache-2.0
language:
- fr
- en
base_model:
- ik-ram28/MedMistral-CPT-7B
- mistralai/Mistral-7B-v0.1
---
## Model Description
MedMistral-CPT-SFT-7B is a French medical language model based on Mistral-7B-v0.1, adapted for medical domain applications through a combined approach of Continual Pre-Training (CPT) followed by Supervised Fine-Tuning (SFT).
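A minimal usage sketch with 🤗 Transformers follows; the plain-text prompt format is an assumption, since the card does not document a chat template, and the example question is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ik-ram28/MedMistral-CPT-SFT-7B")
model = AutoModelForCausalLM.from_pretrained("ik-ram28/MedMistral-CPT-SFT-7B", device_map="auto")

prompt = "Quels sont les symptômes de l'hypertension ?"  # placeholder French medical question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```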
## Model Details
- **Model Type**: Causal Language Model
- **Base Model**: Mistral-7B-v0.1
- **Language**: French
- **Domain**: Medical/Healthcare
- **License**: Apache 2.0
- **Paper**: [Adaptation des connaissances médicales pour les grands modèles de langue : Stratégies et analyse comparative](https://github.com/ikram28/medllm-strategies) (in English: "Adapting medical knowledge for large language models: strategies and comparative analysis")
## Training Details
### Continual Pre-Training (CPT)
- **Dataset**: NACHOS corpus (opeN crAwled frenCh Healthcare cOrpuS)
- **Size**: 7.4 GB of French medical texts
- **Word Count**: Over 1 billion words (1,088,867,950 words)
- **Sources**: 24 French medical websites
- **Training Duration**: 2.8 epochs
- **Hardware**: 32 NVIDIA H100 80GB GPUs
- **Training Time**: 12 hours
- **Optimizer**: AdamW
- **Learning Rate**: 2e-5
- **Weight Decay**: 0.01
- **Batch Size**: 16 with gradient accumulation of 2
### Supervised Fine-Tuning (SFT)
- **Dataset**: 30K French medical question-answer pairs
- 10K native French medical questions
- 10K translated medical questions from English resources
- 10K generated questions from French medical texts
- **Method**: DoRA (Weight-Decomposed Low-Rank Adaptation; see the sketch after this list)
- **Training Duration**: 10 epochs
- **Hardware**: 1 NVIDIA A100 80GB GPU
- **Training Time**: 75 hours
- **Rank**: 16
- **Alpha**: 16
- **Learning Rate**: 2e-5
- **Batch Size**: 4
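The DoRA setup can be approximated with the `peft` library as sketched below; the rank and alpha values come from the list above, while `target_modules` is an assumption (typical attention projections for Mistral-style models):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("ik-ram28/MedMistral-CPT-7B")
config = LoraConfig(
    r=16,
    lora_alpha=16,
    use_dora=True,  # Weight-Decomposed Low-Rank Adaptation
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```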
## Computational Impact
- **Total Training Time**: 87 hours (12h CPT + 75h SFT)
- **Carbon Emissions**: 11.78 kgCO2e (9.86 + 1.92)
## Ethical Considerations
- **Medical Accuracy**: This model is for research and educational purposes only. All outputs should be verified by qualified medical professionals
- **Bias**: Training data may contain biases present in medical literature and online medical resources
## Citation
If you use this model, please cite:
```bibtex
```
## Contact
For questions about this model, please contact: [email protected]
| johngreendr1/ce7a970e-7299-4e54-bf83-14b49ed32fd7 | johngreendr1 | 2025-06-19T14:17:06Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Nous-Capybara-7B-V1.9", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9", "region:us"] | null | 2025-06-19T14:17:00Z |
---
base_model: NousResearch/Nous-Capybara-7B-V1.9
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
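In the absence of author-provided instructions, the sketch below shows the standard way to load a PEFT adapter onto its base model (both identifiers come from this card's metadata); the adapter's intended task and prompt format are undocumented:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("NousResearch/Nous-Capybara-7B-V1.9")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Capybara-7B-V1.9")
model = PeftModel.from_pretrained(base, "johngreendr1/ce7a970e-7299-4e54-bf83-14b49ed32fd7")
```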
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
| cosmo3769/nanoVLM-test | cosmo3769 | 2025-06-19T14:14:26Z | 0 | 0 | nanovlm | ["nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us"] | image-text-to-text | 2025-06-19T14:13:36Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built in pure PyTorch, its entire model architecture and training logic fit within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M-parameter model.
For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("cosmo3769/nanoVLM-test")
```
| TruongSinhAI/Qwen2.5-1.5B-Instruct_200steps | TruongSinhAI | 2025-06-19T14:12:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-19T14:12:00Z |
---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TruongSinhAI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
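A minimal loading sketch with Unsloth is shown below; whether this repository holds merged weights or an adapter is not stated, so the snippet assumes it loads like a standard Unsloth checkpoint, and `max_seq_length` is illustrative:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="TruongSinhAI/Qwen2.5-1.5B-Instruct_200steps",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```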
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| Felix92/doctr-dummy-torch-viptr-tiny | Felix92 | 2025-06-19T14:11:01Z | 0 | 0 | null | ["pytorch", "region:us"] | null | 2025-06-19T14:10:56Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Alphatao/Affine-2501551
|
Alphatao
| 2025-06-19T14:09:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T14:03:22Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) in mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version (e.g., via `pip install -U transformers`).
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
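As a concrete illustration, the sketch below passes these recommended non-thinking-mode values directly to `generate`; it assumes the `model` and `model_inputs` objects from the Quickstart above, and `min_p` requires a recent `transformers` release.
```python
# A minimal sampling sketch for non-thinking mode (assumes `model` and
# `model_inputs` from the Quickstart above).
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # do not use greedy decoding
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,        # requires a recent `transformers` release
)
```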
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` to 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part and need not include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.
### Citation
If you find our work helpful, feel free to cite it.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
freakyfractal/buser2
|
freakyfractal
| 2025-06-19T14:04:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-06-19T14:04:00Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Coinye_2021.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---
# buser2
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/freakyfractal/buser2/tree/main) them in the Files & versions tab.
|
5eunsoo/my-bert-fine-tuned
|
5eunsoo
| 2025-06-19T13:57:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T13:56:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnx-community/whisper-tiny
|
onnx-community
| 2025-06-19T13:56:43Z | 3,264 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"whisper",
"automatic-speech-recognition",
"base_model:openai/whisper-tiny",
"base_model:quantized:openai/whisper-tiny",
"region:us"
] |
automatic-speech-recognition
| 2024-05-24T16:52:04Z |
---
base_model: openai/whisper-tiny
library_name: transformers.js
---
https://huggingface.co/openai/whisper-tiny with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
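For illustration, a minimal conversion sketch using Optimum's ONNX Runtime integration is shown below; it assumes `optimum[onnxruntime]` is installed, and the output directory name is arbitrary.
```python
# A minimal ONNX conversion sketch with 🤗 Optimum
# (assumes `pip install optimum[onnxruntime]`).
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", export=True)

# Save the exported weights; for Transformers.js, place them in an
# `onnx/` subfolder of your repository.
model.save_pretrained("whisper-tiny-onnx")
```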
|
Sawu-Low3/final-t5-base-lora-stage1
|
Sawu-Low3
| 2025-06-19T13:55:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:55:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
convaiinnovations/ECG-Instruct-Llama-3.2-11B-Vision
|
convaiinnovations
| 2025-06-19T13:54:52Z | 0 | 0 |
unsloth
|
[
"unsloth",
"safetensors",
"mllama",
"llama-3.2",
"vision-language-model",
"ecg",
"cardiology",
"lora",
"medical-imaging",
"text-generation",
"convaiinnovations",
"conversational",
"en",
"dataset:ECGInstruct",
"arxiv:2501.18670",
"base_model:unsloth/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:unsloth/Llama-3.2-11B-Vision-Instruct",
"license:llama3",
"region:us"
] |
text-generation
| 2025-06-19T12:09:06Z |
---
license: llama3
language: en
library_name: unsloth
tags:
- unsloth
- llama-3.2
- vision-language-model
- ecg
- cardiology
- lora
- medical-imaging
- text-generation
- convaiinnovations
base_model: unsloth/Llama-3.2-11B-Vision-Instruct
datasets:
- ECGInstruct
---
# High-Accuracy ECG Image Interpretation with LLaMA 3.2
This repository contains the official fine-tuned model from the paper: **"High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA Fine-Tuning with Multimodal LLaMA 3.2"**.
**Paper:** [arXiv:2501.18670](https://arxiv.org/abs/2501.18670)
This model was developed by **Nandakishor M** and **Anjali M** at **Convai Innovations**. It is designed to provide high-accuracy, comprehensive interpretation of electrocardiogram (ECG) images.
## Model Details
* **Base Model:** `unsloth/Llama-3.2-11B-Vision-Instruct`
* **Fine-tuning Strategy:** Parameter-Efficient LoRA
* **Dataset:** `ECGInstruct`, a large-scale dataset with 1 million instruction-following samples derived from public sources like MIMIC-IV ECG and PTB-XL.
* **Primary Use:** Automated analysis and report generation from ECG images to assist cardiologists and medical professionals in diagnosing a wide range of cardiac conditions.
## How to Use
This model was trained using [Unsloth](https://github.com/unslothai/unsloth) to achieve high performance and memory efficiency. The following code provides a complete example of how to load the model in 4-bit precision and run inference.
You can run the code in a free Google Colab notebook: [Open in Colab](https://colab.research.google.com/drive/1bL9z0NU8kuUyYescSJTIpP9NEkF2Dk6o?usp=sharing)
```python
import torch
from unsloth import FastVisionModel
from transformers import AutoProcessor, TextStreamer
from PIL import Image
from IPython.display import display
# Make sure you have an ECG image file, e.g., 'my_ecg.jpg'
image_path = "my_ecg.jpg"
# Load the 4-bit quantized model and processor
model, processor = FastVisionModel.from_pretrained(
model_name="convaiinnovations/ECG-Instruct-Llama-3.2-11B-Vision",
max_seq_length=4096,
dtype=None,
load_in_4bit=True,
device_map="cuda"
)
# Enable fast inference mode
FastVisionModel.for_inference(model)
# Load the image
image = Image.open(image_path).convert("RGB")
# Define the instruction
query = "You are an expert cardiologist. Write an in-depth diagnosis report from this ECG data, including the final diagnosis."
# Prepare the prompt
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": query}
]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
# Process inputs
inputs = processor(
text=input_text,
images=image,
return_tensors="pt",
).to("cuda")
# Set up streamer for token-by-token output
text_streamer = TextStreamer(processor.tokenizer, skip_prompt=True)
# Generate the report
_ = model.generate(**inputs,
streamer=text_streamer,
max_new_tokens=512,
use_cache=True,
temperature=0.2,
min_p=0.1)
# To see the input image in a notebook:
# display(image.resize((600, 400)))
```
## Training and Fine-tuning
The model was fine-tuned on the `ECGInstruct` dataset using a parameter-efficient LoRA strategy, which significantly improves performance on ECG interpretation tasks while preserving the base model's extensive knowledge.
### Key Hyperparameters:
- **LoRA Rank (`r`):** 64
- **LoRA Alpha (`alpha`):** 128
- **LoRA Dropout:** 0.05
- **Learning Rate:** 2e-4 with a cosine scheduler
- **Epochs:** 3
- **Hardware:** 4x NVIDIA A100 80GB GPUs
- **Framework:** Unsloth with DeepSpeed ZeRO-2
*Note: As described in the paper, the `lm_head` and `embed_tokens` layers were excluded from LoRA adaptation to maintain generation stability.*
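For reference, a minimal sketch of an equivalent configuration with 🤗 PEFT is shown below; the `target_modules` list is an assumption (typical Llama-style projection layers), not the exact module list from the paper.
```python
# A sketch of the LoRA setup described above, using 🤗 PEFT.
# The target_modules list is assumed, not taken from the paper.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,              # LoRA rank
    lora_alpha=128,    # LoRA alpha
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    # lm_head and embed_tokens are deliberately left un-adapted,
    # as described in the note above.
)
```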
## Evaluation
The fine-tuned model demonstrates state-of-the-art performance, significantly outperforming the baseline LLaMA 3.2 model across all metrics.
| Task | Metric | Baseline | **Ours (Fine-tuned)** |
|---------------|-------------|----------|-----------------------|
| Abnorm. Det. | AUC | 0.51 | **0.98** |
| | Macro F1 | 0.33 | **0.74** |
| | Hamming Loss| 0.49 | **0.11** |
| Report Gen. | Report Score| 47.8 | **85.4** |
*Report Score was evaluated using GPT-4o against expert-annotated ground truth reports.*
## Citation
If you use this model in your research, please cite our paper:
```bibtex
@misc{nandakishor2025highaccuracy,
title={High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA Fine-Tuning with Multimodal LLaMA 3.2},
author={Nandakishor M and Anjali M},
year={2025},
eprint={2501.18670},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
MikeGreen2710/ner_land_area_final
|
MikeGreen2710
| 2025-06-19T13:54:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-19T13:53:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
khs2617/gemma-3-1b-it-lora-strategy_try_3
|
khs2617
| 2025-06-19T13:52:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"region:us"
] | null | 2025-06-19T13:52:30Z |
---
base_model: google/gemma-3-1b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
onnx-community/whisper-base
|
onnx-community
| 2025-06-19T13:49:48Z | 13,395 | 20 |
transformers.js
|
[
"transformers.js",
"onnx",
"whisper",
"automatic-speech-recognition",
"base_model:openai/whisper-base",
"base_model:quantized:openai/whisper-base",
"region:us"
] |
automatic-speech-recognition
| 2024-05-24T17:00:47Z |
---
base_model: openai/whisper-base
library_name: transformers.js
---
https://huggingface.co/openai/whisper-base with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
ahamedddd/showBranchesNLPDistilBertBaseUncasedv1
|
ahamedddd
| 2025-06-19T13:49:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T13:18:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
3sara/version1_3-3epochs-from_base
|
3sara
| 2025-06-19T13:48:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"colpali-finetuned",
"generated_from_trainer",
"base_model:vidore/colpaligemma-3b-pt-448-base",
"base_model:adapter:vidore/colpaligemma-3b-pt-448-base",
"license:gemma",
"region:us"
] | null | 2025-06-19T13:48:01Z |
---
library_name: peft
license: gemma
base_model: vidore/colpaligemma-3b-pt-448-base
tags:
- colpali-finetuned
- generated_from_trainer
model-index:
- name: version1_3-3epochs-from_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# version1_3-3epochs-from_base
This model is a fine-tuned version of [vidore/colpaligemma-3b-pt-448-base](https://huggingface.co/vidore/colpaligemma-3b-pt-448-base) on the 3sara/validated_colpali_italian_documents_with_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
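For reference, these settings map onto 🤗 Transformers `TrainingArguments` roughly as follows; this is a sketch, and `output_dir` is an assumption.
```python
# A sketch of the hyperparameters above as TrainingArguments
# (output_dir is assumed).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="version1_3-3epochs-from_base",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,  # effective total train batch size: 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
)
```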
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0103 | 1 | 0.3507 |
| 0.1301 | 1.0205 | 100 | 0.2925 |
| 0.0948 | 2.0410 | 200 | 0.2780 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ik-ram28/MedMistralInstruct-CPT-7B
|
ik-ram28
| 2025-06-19T13:45:23Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"medical",
"conversational",
"fr",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-08T23:18:07Z |
---
library_name: transformers
tags:
- medical
license: apache-2.0
language:
- fr
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
---
### Model Description
MedMistralInstruct-CPT-7B is adapted from Mistral-7B-Instruct-v0.1 through Continual Pre-Training, maintaining instruction-following capabilities while gaining medical domain knowledge.
### Model Details
- **Model Type**: Causal Language Model
- **Base Model**: Mistral-7B-Instruct-v0.1
- **Language**: French
- **Domain**: Medical/Healthcare
- **Parameters**: 7 billion
- **License**: Apache 2.0
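A minimal usage sketch with 🤗 Transformers is shown below; the prompt is illustrative.
```python
# A minimal usage sketch with 🤗 Transformers (the prompt is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ik-ram28/MedMistralInstruct-CPT-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Expliquez brièvement l'hypertension artérielle."  # example French medical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```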
### Training Details
**Continual Pre-Training (CPT)**
- **Dataset**: NACHOS corpus (7.4 GB French medical texts)
- **Training Duration**: 2.8 epochs
- **Hardware**: 32 NVIDIA A100 80GB GPUs
- **Training Time**: ~40 hours
### Computational Requirements
- **Carbon Emissions**: 32.89 kgCO2e
- **Training Time**: 40 hours
### Ethical Considerations
- **Medical Accuracy**: For research and educational purposes only
- **Professional Oversight**: Requires verification by qualified medical professionals
- **Bias Awareness**: May contain biases from training data
- **Privacy**: Do not input private health information
### Citation
```bibtex
```
### Contact
For questions about these models, please contact: [email protected]
|
TECCOD/adilet-llama-8b-250619
|
TECCOD
| 2025-06-19T13:45:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T13:22:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/nemo-chatbot-v3-GGUF
|
mradermacher
| 2025-06-19T13:44:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:chaerheeon/nemo-chatbot-v3",
"base_model:quantized:chaerheeon/nemo-chatbot-v3",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:19:54Z |
---
base_model: chaerheeon/nemo-chatbot-v3
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chaerheeon/nemo-chatbot-v3
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
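As one option, the quants can also be loaded from Python with `llama-cpp-python` (an assumption; any GGUF-compatible runtime works), using for example the Q4_K_M file from the table below:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quants listed below; Q4_K_M is the "fast, recommended" middle ground.
path = hf_hub_download(
    repo_id="mradermacher/nemo-chatbot-v3-GGUF",
    filename="nemo-chatbot-v3.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```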
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TachyHealth/Gazal-R1-32B-sft-merged-preview
|
TachyHealth
| 2025-06-19T13:43:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dora",
"peft",
"adapter",
"finetuned",
"Qwen3-32B",
"medical",
"clinical",
"healthcare",
"conversational",
"en",
"dataset:TachyHealth/structured_medical",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-21T12:48:39Z |
---
language: en
license: apache-2.0
tags:
- dora
- peft
- adapter
- finetuned
- Qwen3-32B
- medical
- clinical
- healthcare
base_model:
- Qwen/Qwen3-32B
datasets:
- TachyHealth/structured_medical
pipeline_tag: text-generation
library_name: transformers
---
# Gazal-R1-32B-sft-merged-preview
This is a DoRA adapter fine-tuned on top of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) for specialized medical reasoning tasks.
## Model description
This adapter was trained using PEFT/LoRA to enhance the base model's ability to perform step-by-step clinical reasoning and medical problem-solving.
### Training data
The model was fine-tuned on a synthetic, structured reasoning dataset, which contains medical questions with step-by-step reasoning and final answers.
### Training procedure
The model was trained using:
- LoRA with rank 256
- DoRA (Weight-Decomposed Low-Rank Adaptation)
- rsLoRA (Rank-stabilized LoRA)
- BF16 precision training
### Use cases and limitations
This model is intended for medical education and clinical reasoning training. It should NOT be used for actual medical diagnosis or treatment decisions. Always consult qualified healthcare professionals for medical advice.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model and tokenizer
base_model_id = "Qwen/Qwen3-32B"
adapter_id = "TachyHealth/Gazal-R1-32B-sft-merged-preview"
# Load the tokenizer and base model
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
base_model_id,
torch_dtype="auto",
device_map="auto",
)
# Load the LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id)
# Prepare a prompt following the format during training
query = """[MEDICAL QUESTION]"""
messages = [
{"role": "system", "content": "When solving complex medical problems, follow this specific format..."},
{"role": "user", "content": query}
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
# Generate response
outputs = model.generate(
input_ids=inputs.input_ids,
max_new_tokens=2048,
temperature=0.6,
do_sample=True,
)
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
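If you prefer to serve the model without the PEFT runtime, the adapter can be folded into the base weights after loading; a brief sketch continuing from the snippet above (the output directory is a placeholder):
```python
# Fold the DoRA adapter into the base weights for plain-Transformers serving.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("gazal-r1-32b-merged")  # placeholder output directory
tokenizer.save_pretrained("gazal-r1-32b-merged")
```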
## Performance Results
Gazal-R1 achieves exceptional performance across standard medical benchmarks:
| Model | Size | MMLU Pro (Medical) | MedMCQA | MedQA | PubMedQA |
|-------|------|-------------------|---------|-------|----------|
| [**Gazal-R1 (Final)**](https://huggingface.co/TachyHealth/Gazal-R1-32B-GRPO-preview) | **32B** | **81.6** | **71.9** | **87.1** | **79.6** |
| Gazal-R1 (SFT-only) | 32B | 79.3 | 72.3 | 86.9 | 77.6 |
| Llama 3.1 405B Instruct | 405B | 70.2 | 75.8 | 81.9 | 74.6 |
| Qwen 2.5 72B Instruct | 72B | 72.1 | 66.2 | 72.7 | 71.7 |
| Med42-Llama3.1-70B | 70B | 66.1 | 72.4 | 80.4 | 77.6 |
| Llama 3.1 70B Instruct | 70B | 74.5 | 72.5 | 78.4 | 78.5 |
| QwQ 32B | 32B | 70.1 | 65.6 | 72.3 | 73.7 |
| Qwen 3 32B | 32B | 78.4 | 71.6 | 84.4 | 76.7 |
|
neural-interactive-proofs/finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_14-40-50_Qwen_Qwen2.5-0.5B-I
|
neural-interactive-proofs
| 2025-06-19T13:41:40Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:41:35Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_14-40-50_Qwen_Qwen2.5-0.5B-I
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_14-40-50_Qwen_Qwen2.5-0.5B-I
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_14-40-50_Qwen_Qwen2.5-0.5B-I", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/Qwen_Qwen2.5-0.5B-Instruct_dpo_2025-06-19_14-40-50_cv_test_lm_server_47_0_iter_0_provers_group)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
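For reference, a minimal sketch of how a DPO run like this is set up with TRL; the preference data and the `beta` value here are placeholders, since the actual training configuration is not published in this card:
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference data: DPO expects prompt/chosen/rejected triples.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["The capital of France is Paris."],
    "rejected": ["I am not sure."],
})

args = DPOConfig(output_dir="dpo-out", beta=0.1)  # beta is a placeholder value
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```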
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Vortex5/tobenamed-24B-Q4_K_M-GGUF
|
Vortex5
| 2025-06-19T13:40:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Vortex5/tobenamed-24B",
"base_model:quantized:Vortex5/tobenamed-24B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:39:40Z |
---
base_model: Vortex5/tobenamed-24B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Vortex5/tobenamed-24B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Vortex5/tobenamed-24B`](https://huggingface.co/Vortex5/tobenamed-24B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vortex5/tobenamed-24B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Vortex5/tobenamed-24B-Q4_K_M-GGUF --hf-file tobenamed-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Vortex5/tobenamed-24B-Q4_K_M-GGUF --hf-file tobenamed-24b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Vortex5/tobenamed-24B-Q4_K_M-GGUF --hf-file tobenamed-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Vortex5/tobenamed-24B-Q4_K_M-GGUF --hf-file tobenamed-24b-q4_k_m.gguf -c 2048
```
|
indicinaaa/unsloth-Qwen3-4B-16bit
|
indicinaaa
| 2025-06-19T13:39:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T13:39:47Z |
---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** indicinaaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
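The card ships no usage snippet; since the upload is a full 16-bit Qwen3 checkpoint, a standard text-generation pipeline should work (a sketch, untested against this exact repo):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="indicinaaa/unsloth-Qwen3-4B-16bit", device_map="auto")
output = generator([{"role": "user", "content": "Summarize Qwen3 in one sentence."}],
                   max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```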
|
FastFlowLM/Llama-3.2-1B-NPU
|
FastFlowLM
| 2025-06-19T13:39:51Z | 0 | 0 | null |
[
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-06-19T13:37:46Z |
---
license: llama3.2
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
|
MOHAMEDSANAF2001/llama3-chatboot-banking-fr1
|
MOHAMEDSANAF2001
| 2025-06-19T13:34:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:33:57Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MOHAMEDSANAF2001
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jusjinuk/Qwen3-32B-4bit-SqueezeLLM
|
jusjinuk
| 2025-06-19T13:32:38Z | 20 | 0 | null |
[
"pytorch",
"qwen3",
"arxiv:2505.07004",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-31T04:42:29Z |
---
base_model:
- Qwen/Qwen3-32B
base_model_relation: quantized
license: bigscience-openrail-m
---
# Model Card
- Base model: `Qwen/Qwen3-32B`
- Quantization method: SqueezeLLM
- Target bit-width: 4
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
|
jusjinuk/Qwen3-32B-2bit-SqueezeLLM
|
jusjinuk
| 2025-06-19T13:32:16Z | 20 | 0 | null |
[
"pytorch",
"qwen3",
"arxiv:2505.07004",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-31T04:14:59Z |
---
base_model:
- Qwen/Qwen3-32B
base_model_relation: quantized
license: bigscience-openrail-m
---
# Model Card
- Base model: `Qwen/Qwen3-32B`
- Quantization method: SqueezeLLM
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
|
jusjinuk/Qwen3-32B-2bit-GuidedQuant-LNQ
|
jusjinuk
| 2025-06-19T13:31:24Z | 74 | 0 | null |
[
"pytorch",
"qwen3",
"arxiv:2505.07004",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-31T03:27:00Z |
---
base_model:
- Qwen/Qwen3-32B
base_model_relation: quantized
license: bigscience-openrail-m
---
# Model Card
- Base model: `Qwen/Qwen3-32B`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
|
yellowtulip/yellowtulip
|
yellowtulip
| 2025-06-19T13:29:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T06:58:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Yellowtulip
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/yellowtulip/yellowtulip/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('yellowtulip/yellowtulip', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/yellowtulip/yellowtulip/discussions) to add images that show off what you’ve made with this LoRA.
|
wolfCuanhamaRWS/WhiteRabbitNeo-V3-7B_q5_k_m_gguf
|
wolfCuanhamaRWS
| 2025-06-19T13:29:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"thesis_quant",
"q5_k_m_gguf",
"text-classification",
"en",
"arxiv:2501.18492",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-classification
| 2025-06-19T13:25:36Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
- thesis_quant
- q5_k_m_gguf
pipeline_tag: text-classification
language:
- en
metrics:
- f1
model-index:
- name: GuardReasoner 1B
results: []
---
# GuardReasoner 1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
Code: https://github.com/yueliu1999/GuardReasoner/
# Usage
```python
import re
from vllm import LLM, SamplingParams
INSTRUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n"
def post_process(text):
    text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE)
    return text
def generate(vllm_model, prompt_list=[""], response_list=["None"]):
    input_list = []
    for i in range(len(prompt_list)):
        input = INSTRUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n"
        input_list.append(input)
    outputs = vllm_model.generate(input_list, sampling_params)
    return outputs
vllm_model = LLM(model="yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256)
sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048)
prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."]
response_list = ["""Dear LinkedIn friends,
Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely.
The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day.
It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly.
I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection.
Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change.
Sincerely,
Mark
"""]
output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text)
print(output)
```
# Citation
```bibtex
@article{GuardReasoner,
title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
journal={arXiv preprint arXiv:2501.18492},
year={2025}
}
```
|
yudy74/image_classification
|
yudy74
| 2025-06-19T13:27:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-19T11:51:59Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: image_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6051
- Accuracy: 0.908
## Model description
More information needed
## Intended uses & limitations
More information needed
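In the meantime, a minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="yudy74/image_classification")
preds = classifier("path/to/image.jpg")  # local path, URL, or PIL image
print(preds[:3])  # top predicted labels with scores
```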
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7065 | 1.0 | 63 | 2.5518 | 0.834 |
| 1.8482 | 2.0 | 126 | 1.7910 | 0.876 |
| 1.6084 | 3.0 | 189 | 1.6061 | 0.9 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cpu
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Vortex5/tobenamed-24B
|
Vortex5
| 2025-06-19T13:25:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"base_model:TheDrummer/Cydonia-24B-v3",
"base_model:merge:TheDrummer/Cydonia-24B-v3",
"base_model:Vortex5/ChaosRose-24B",
"base_model:merge:Vortex5/ChaosRose-24B",
"base_model:Vortex5/Clockwork-Flower-24B",
"base_model:merge:Vortex5/Clockwork-Flower-24B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T13:05:17Z |
---
base_model:
- Vortex5/Clockwork-Flower-24B
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
- TheDrummer/Cydonia-24B-v3
- Vortex5/ChaosRose-24B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method using [Vortex5/Clockwork-Flower-24B](https://huggingface.co/Vortex5/Clockwork-Flower-24B) as a base.
### Models Merged
The following models were included in the merge:
* [PocketDoc/Dans-PersonalityEngine-V1.3.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b)
* [TheDrummer/Cydonia-24B-v3](https://huggingface.co/TheDrummer/Cydonia-24B-v3)
* [Vortex5/ChaosRose-24B](https://huggingface.co/Vortex5/ChaosRose-24B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Vortex5/Clockwork-Flower-24B
parameters:
weight: 0.12
- model: Vortex5/ChaosRose-24B
parameters:
weight: 0.25
- model: TheDrummer/Cydonia-24B-v3
parameters:
weight: 0.33
- model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
parameters:
weight: 0.30
merge_method: linear
base_model: Vortex5/Clockwork-Flower-24B
dtype: bfloat16
```
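Conceptually, the linear method is a weighted average of corresponding parameter tensors. The following is a simplified sketch of that idea only, not mergekit's actual implementation (it ignores tokenizers, metadata, and sharded checkpoints):
```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching parameter tensors, normalized by the weight sum."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        acc = sum(w * sd[name].to(torch.float32) for w, sd in zip(weights, state_dicts))
        merged[name] = (acc / total).to(torch.bfloat16)  # dtype: bfloat16, as in the config above
    return merged

# Weights mirror the YAML above: 0.12 + 0.25 + 0.33 + 0.30 = 1.0
```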
|
nnilayy/seed-multi-classification-Kfold-5
|
nnilayy
| 2025-06-19T13:24:21Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T13:24:20Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
robuno/lora_llama31_8b_title_abstract
|
robuno
| 2025-06-19T13:23:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:23:10Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** robuno
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
freakyfractal/bouser
|
freakyfractal
| 2025-06-19T13:20:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-06-19T13:17:11Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Coinye_2021.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---
# bouser
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/freakyfractal/bouser/tree/main) them in the Files & versions tab.
|
ubaid32/Gemma-3-Emotion-Sensitive-Tutor-v1
|
ubaid32
| 2025-06-19T13:18:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:18:31Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ubaid32
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ranjitn76/meena
|
Ranjitn76
| 2025-06-19T13:15:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T12:47:27Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: meena
---
# Meena
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `meena` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "meena",
"lora_weights": "https://huggingface.co/Ranjitn76/meena/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Ranjitn76/meena', weight_name='lora.safetensors')
image = pipeline('meena').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Ranjitn76/meena/discussions) to add images that show off what you’ve made with this LoRA.
|
wolfCuanhamaRWS/WhiteRabbitNeo-V3-7B_q4_k_m_gguf
|
wolfCuanhamaRWS
| 2025-06-19T13:14:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"thesis_quant",
"q4_k_m_gguf",
"text-classification",
"en",
"arxiv:2501.18492",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-classification
| 2025-06-19T13:11:23Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
- thesis_quant
- q4_k_m_gguf
pipeline_tag: text-classification
language:
- en
metrics:
- f1
model-index:
- name: GuardReasoner 1B
results: []
---
# GuardReasoner 1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
Code: https://github.com/yueliu1999/GuardReasoner/
# Usage
```python
import re
from vllm import LLM, SamplingParams
INSTRUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n"
def post_process(text):
    text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE)
    return text
def generate(vllm_model, prompt_list=[""], response_list=["None"]):
    input_list = []
    for i in range(len(prompt_list)):
        input = INSTRUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n"
        input_list.append(input)
    outputs = vllm_model.generate(input_list, sampling_params)
    return outputs
vllm_model = LLM(model="yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256)
sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048)
prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."]
response_list = ["""Dear LinkedIn friends,
Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely.
The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day.
It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly.
I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection.
Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change.
Sincerely,
Mark
"""]
output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text)
print(output)
```
# Citation
```bibtex
@article{GuardReasoner,
title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
journal={arXiv preprint arXiv:2501.18492},
year={2025}
}
```
|
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19
|
morturr
| 2025-06-19T13:11:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T13:11:05Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
mradermacher/afrobeat-lyrics-generator-GGUF
|
mradermacher
| 2025-06-19T13:11:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:kelvinezumezu/afrobeat-lyrics-generator",
"base_model:quantized:kelvinezumezu/afrobeat-lyrics-generator",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:08:59Z |
---
base_model: kelvinezumezu/afrobeat-lyrics-generator
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kelvinezumezu/afrobeat-lyrics-generator
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/afrobeat-lyrics-generator-GGUF/resolve/main/afrobeat-lyrics-generator.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
openfun/openfun-ivod-whisper-medium-common-10-1250
|
openfun
| 2025-06-19T13:09:50Z | 0 | 0 | null |
[
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-19T11:54:32Z |
# Fine-tune Information
- Base model: `openai/whisper-medium`
- Number of audio clips: 251584
- Total audio duration: 148.17 hours
- Average audio length: 2.12 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 18:26:08
- Model size: 2.85 GB
- Training parameters:
  - batch size: 16
  - eval batch size: 8
  - gradient checkpointing: False
  - fp16: False
  - bf16: True
---
# Model Card
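A minimal transcription sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openfun/openfun-ivod-whisper-medium-common-10-1250")
print(asr("sample.wav")["text"])  # placeholder audio file
```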
|
mradermacher/Shakespeare-Bot-GGUF
|
mradermacher
| 2025-06-19T13:09:40Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Bilal1jk/Shakespeare-Bot",
"base_model:quantized:Bilal1jk/Shakespeare-Bot",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:08:24Z |
---
base_model: Bilal1jk/Shakespeare-Bot
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Bilal1jk/Shakespeare-Bot
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Shakespeare-Bot-GGUF/resolve/main/Shakespeare-Bot.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_ppo
|
rosieyzh
| 2025-06-19T13:09:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T22:55:51Z |
---
library_name: transformers
tags: []
---
## Model Details
This is the final checkpoint of the OLMo 1B model pretrained on Algebraic Stack, FineMath3+, TinyGSM, OpenMathInstruct1, and OpenMathInstruct2, then trained with PPO on the GSM8K training set.
Checkpoints are saved at the following timesteps:
* `rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_base`: Initial model after pretraining.
* `rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_episode{1-9}`: Saved after each epoch over the GSM8K training set.
* `rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step{9, 13, 18, 25, 36, 51, 73, 103, 146, 206, 291, 411, 581, 821}`: Saved on a log scale across global steps (computed from `[int(n) for n in np.logspace(-2.1, 0, 15) * 1160]`).
**Note that the current model, `rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_ppo`, is the final model after RLVR and equivalent to `_episode10` and `_globalstep1160`.**
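The schedule above can be reproduced directly from the quoted formula; the 15th value, 1160, is the final global step saved as this `_ppo` model:
```python
import numpy as np

# Log-spaced PPO checkpoint schedule from the formula above (1160 total global steps).
steps = [int(n) for n in np.logspace(-2.1, 0, 15) * 1160]
print(steps)
# -> [9, 13, 18, 25, 36, 51, 73, 103, 146, 206, 291, 411, 581, 821, 1160]
```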
|
loki1911/mistral-7b-indian-tax-code
|
loki1911
| 2025-06-19T13:08:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T13:08:24Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** loki1911
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
debisoft/mistral-nemo-12b-base-thinking-function_calling-logic-capturing-V0
|
debisoft
| 2025-06-19T13:07:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Base-2407",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:01:27Z |
---
base_model: mistralai/Mistral-Nemo-Base-2407
library_name: transformers
model_name: mistral-nemo-12b-base-thinking-function_calling-logic-capturing-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mistral-nemo-12b-base-thinking-function_calling-logic-capturing-V0
This model is a fine-tuned version of [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="debisoft/mistral-nemo-12b-base-thinking-function_calling-logic-capturing-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
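For reference, a minimal sketch of an SFT run with TRL; the training data shown is a placeholder, as the actual thinking/function-calling corpus is not published in this card:
```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Placeholder corpus with the kind of thinking / tool-calling traces the card describes.
train_dataset = Dataset.from_dict({
    "text": ["<user query> <think>...reasoning...</think> <tool_call>...</tool_call>"]
})

trainer = SFTTrainer(
    model="mistralai/Mistral-Nemo-Base-2407",  # base model named above
    args=SFTConfig(output_dir="sft-out"),
    train_dataset=train_dataset,
)
trainer.train()
```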
### Framework versions
- TRL: 0.16.1
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF
|
mradermacher
| 2025-06-19T13:07:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:buttercoconut/Qwen2.5-ko-alpaca-0.5B",
"base_model:quantized:buttercoconut/Qwen2.5-ko-alpaca-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:02:49Z |
---
base_model: buttercoconut/Qwen2.5-ko-alpaca-0.5B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/buttercoconut/Qwen2.5-ko-alpaca-0.5B
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions and for requesting quantization of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me
use its servers and for providing upgrades to my workstation, which enable
this work in my free time.
<!-- end -->
|
mlsnr/glxyfrst
|
mlsnr
| 2025-06-19T13:07:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:fofr/sdxl-emoji",
"base_model:adapter:fofr/sdxl-emoji",
"license:unknown",
"region:us"
] |
text-to-image
| 2025-06-19T13:04:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/2024-03-23_11-29-38_5464.jpeg
base_model: fofr/sdxl-emoji
instance_prompt: glxyfrst
license: unknown
---
# glxyfrst
<Gallery />
## Trigger words
You should use `glxyfrst` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/mlsnr/glxyfrst/tree/main) them in the Files & versions tab.
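A minimal diffusers sketch for trying the LoRA (assumptions flagged in the comments: the weights file name and the use of a stock SDXL base; the card lists fofr/sdxl-emoji as the base model, so results on plain SDXL may differ):
```python
import torch
from diffusers import AutoPipelineForText2Image
# Stock SDXL base is an assumption; the card's listed base is fofr/sdxl-emoji.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Hypothetical weight_name -- check the Files & versions tab for the real file.
pipe.load_lora_weights("mlsnr/glxyfrst", weight_name="lora.safetensors")
image = pipe("glxyfrst, a frosted galaxy emoji").images[0]
image.save("glxyfrst.png")
```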
|
gvo1112/task-11-Qwen-Qwen2.5-1.5B
|
gvo1112
| 2025-06-19T13:06:43Z | 50 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"region:us"
] | null | 2025-06-16T22:56:23Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
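No snippet is provided; a minimal PEFT loading sketch, assuming this repository holds a standard adapter for the listed base model, could look like:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the base model, then attach this repo's adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "gvo1112/task-11-Qwen-Qwen2.5-1.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```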
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
pkulshrestha/pricer-2025-06-19_13.05.46
|
pkulshrestha
| 2025-06-19T13:06:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T13:06:05Z |
---
license: apache-2.0
---
|
reach-vb/test-mistral-deploy
|
reach-vb
| 2025-06-19T13:05:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] |
text-generation
| 2025-06-19T13:03:49Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
library_name: peft
tags:
- transformers
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
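No snippet is provided; a minimal sketch that attaches the adapter to the instruct base model and runs one chat turn (assuming a standard PEFT adapter) could look like:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "reach-vb/test-mistral-deploy")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
# Format the conversation with the model's chat template before generating.
messages = [{"role": "user", "content": "Say hello in one sentence."}]
ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(ids, max_new_tokens=48)[0], skip_special_tokens=True))
```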
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
pkfire13/model
|
pkfire13
| 2025-06-19T13:04:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:03:31Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pkfire13
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
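Since the repository is tagged as GGUF, one hedged way to load it is via transformers' GGUF support; the file name below is a placeholder, not the actual file in the repo.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
repo = "pkfire13/model"
gguf = "model.gguf"  # hypothetical file name -- check the repo's file list
# Transformers dequantizes the GGUF checkpoint into a regular model.
tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf)
```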
|
John6666/uwaki-mix-v10-sdxl
|
John6666
| 2025-06-19T13:02:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"semi-realistic",
"2.5D",
"asian",
"Japanese",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:finetune:OnomaAIResearch/Illustrious-XL-v1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T12:55:29Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- semi-realistic
- 2.5D
- asian
- Japanese
- illustrious
base_model: OnomaAIResearch/Illustrious-XL-v1.0
---
The original model is [here](https://civitai.com/models/1695892/uwakimix?modelVersionId=1919355).
This model was created by [UWAZUMI](https://civitai.com/user/UWAZUMI).
|
BootesVoid/cmc3ccsd600389rlr8qqc8uuu_cmc3cs2y7000unx8dgxfi8ujg
|
BootesVoid
| 2025-06-19T13:00:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T13:00:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LILY19
---
# Cmc3Ccsd600389Rlr8Qqc8Uuu_Cmc3Cs2Y7000Unx8Dgxfi8Ujg
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LILY19` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "LILY19",
    "lora_weights": "https://huggingface.co/BootesVoid/cmc3ccsd600389rlr8qqc8uuu_cmc3cs2y7000unx8dgxfi8ujg/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3ccsd600389rlr8qqc8uuu_cmc3cs2y7000unx8dgxfi8ujg', weight_name='lora.safetensors')
image = pipeline('LILY19').images[0]
image.save("lily19.png")  # write the generated image to disk
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3ccsd600389rlr8qqc8uuu_cmc3cs2y7000unx8dgxfi8ujg/discussions) to add images that show off what you’ve made with this LoRA.
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-7-2025-06-19
|
morturr
| 2025-06-19T13:00:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T13:00:08Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
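For reference, the list above corresponds roughly to the following transformers `TrainingArguments`; the output directory is a placeholder and unlisted arguments are omitted.
```python
from transformers import TrainingArguments
args = TrainingArguments(
    output_dir="out",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=7,
    gradient_accumulation_steps=4,  # 16 per device x 4 steps = 64 total
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```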
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
mchettih/financial_QA_unsloth_Llama-3.2-3B-Instruct_finetuned_teacher
|
mchettih
| 2025-06-19T12:59:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T15:59:34Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mchettih
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
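No usage snippet is included; assuming the repository contains full merged weights rather than only an adapter, a minimal pipeline sketch would be:
```python
from transformers import pipeline
model_id = "mchettih/financial_QA_unsloth_Llama-3.2-3B-Instruct_finetuned_teacher"
generator = pipeline("text-generation", model=model_id, device_map="auto")
question = "In simple terms, what does EBITDA measure?"
output = generator([{"role": "user", "content": question}], max_new_tokens=96, return_full_text=False)[0]
print(output["generated_text"])
```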
|