| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-28 18:27:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 534 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-28 18:26:08) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
freakyfractal/bouser2 | freakyfractal | 2025-06-20T12:56:49Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us"] | text-to-image | 2025-06-20T12:56:29Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Coinye_2021.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---
# bouser2
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/freakyfractal/bouser2/tree/main) them in the Files & versions tab.
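## Usage (illustrative sketch)
The card ships no inference code, so the following is a minimal, hypothetical example of loading this LoRA on top of the FLUX.1-dev base model with 🤗 Diffusers. The prompt is a placeholder (the card defines no instance prompt), and access to the base model under its license is assumed.
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline (may require accepting the base model's license).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA from this repository (assumes a single LoRA safetensors file).
pipe.load_lora_weights("freakyfractal/bouser2")

# Placeholder prompt: the card does not specify a trigger/instance prompt.
image = pipe("a bouser2-style illustration", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("bouser2_sample.png")
```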
|
Salajmi1/results_araelectra | Salajmi1 | 2025-06-20T12:56:44Z | 0 | 0 | transformers | ["transformers", "safetensors", "electra", "text-classification", "generated_from_trainer", "base_model:aubmindlab/araelectra-base-discriminator", "base_model:finetune:aubmindlab/araelectra-base-discriminator", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-06-20T12:56:34Z |
---
library_name: transformers
base_model: aubmindlab/araelectra-base-discriminator
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: results_araelectra
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_araelectra
This model is a fine-tuned version of [aubmindlab/araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3534
- Accuracy: 0.8922
- F1: 0.8114
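A minimal inference sketch (not part of the generated card), assuming the repository ships the fine-tuned classification head and tokenizer and that inputs are Arabic text as used during fine-tuning:
```python
from transformers import pipeline

# Hypothetical quick-start: the label set and target domain are not documented in the card.
classifier = pipeline("text-classification", model="Salajmi1/results_araelectra")
print(classifier("هذا مثال على نص عربي للتصنيف."))  # "This is an example of Arabic text to classify."
```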
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 94 | 0.6014 | 0.8084 | 0.5763 |
| No log | 2.0 | 188 | 0.4269 | 0.8862 | 0.7944 |
| No log | 3.0 | 282 | 0.3534 | 0.8922 | 0.8114 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-2-seed-7-2025-06-20 | morturr | 2025-06-20T12:54:13Z | 0 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us"] | null | 2025-06-20T12:53:47Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-2-seed-7-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-2-seed-7-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
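Because this repository stores a PEFT (LoRA) adapter rather than full model weights, loading it requires the base model. A minimal, hypothetical loading sketch follows; it assumes access to the gated meta-llama/Llama-2-7b-hf checkpoint, and the prompt is a placeholder since the card documents no prompt format.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-2-seed-7-2025-06-20"

# Load the base model, then attach the adapter weights from this repository.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Placeholder prompt for illustration only.
inputs = tokenizer("Write a one-liner about coffee:", return_tensors="pt").to(base.device)
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```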
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
tarundachepally/Granite_3b_instruct_huge_exp | tarundachepally | 2025-06-20T12:51:50Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:ibm-granite/granite-3b-code-instruct-128k", "base_model:finetune:ibm-granite/granite-3b-code-instruct-128k", "endpoints_compatible", "region:us"] | null | 2025-06-20T12:51:40Z |
---
base_model: ibm-granite/granite-3b-code-instruct-128k
library_name: transformers
model_name: Granite_3b_instruct_huge_exp
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Granite_3b_instruct_huge_exp
This model is a fine-tuned version of [ibm-granite/granite-3b-code-instruct-128k](https://huggingface.co/ibm-granite/granite-3b-code-instruct-128k).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tarundachepally/Granite_3b_instruct_huge_exp", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
21Meru/fine_tuned_blenderbot | 21Meru | 2025-06-20T12:48:26Z | 0 | 0 | null | ["safetensors", "blenderbot-small", "chatbot", "mental-health", "blenderbot", "conversational", "en", "dataset:Amod/mental_health_counseling_conversations", "license:apache-2.0", "region:us"] | null | 2025-06-20T12:31:16Z |
---
license: apache-2.0
datasets:
- Amod/mental_health_counseling_conversations
language:
- en
tags:
- chatbot
- mental-health
- blenderbot
- conversational
---
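The card provides no usage example. A hypothetical quick-start sketch, assuming the repository stores a standard blenderbot-small checkpoint together with its tokenizer, might look like this:
```python
# Hypothetical sketch, not part of the original card.
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

model_id = "21Meru/fine_tuned_blenderbot"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(model_id)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(model_id)

# Placeholder user message for illustration only.
inputs = tokenizer("I've been feeling anxious lately.", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```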
|
tomaarsen/csr-mxbai-embed-large-v1-nq-cos-sim-scale-5-gamma-0.1 | tomaarsen | 2025-06-20T12:43:33Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sparse-encoder", "sparse", "csr", "generated_from_trainer", "dataset_size:99000", "loss:CSRLoss", "loss:SparseMultipleNegativesRankingLoss", "feature-extraction", "en", "dataset:sentence-transformers/natural-questions", "arxiv:1908.10084", "arxiv:2503.01776", "arxiv:1705.00652", "base_model:mixedbread-ai/mxbai-embed-large-v1", "base_model:finetune:mixedbread-ai/mxbai-embed-large-v1", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2025-06-20T12:43:25Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- csr
- generated_from_trainer
- dataset_size:99000
- loss:CSRLoss
- loss:SparseMultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
continue to take somewhat differing stances on regional conflicts such the Yemeni
Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
which has fought against Saudi-backed forces, and the Syrian Civil War, where
the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
manufacturing industries include aluminium production, food processing, metal
fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
sector continues to dominate New Zealand's exports, despite accounting for 6.5%
of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
a single after a fourteen-year breakup. It was also the first song written by
bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
played live for the first time during their Hell Freezes Over tour in 1994. It
returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
Rock Tracks chart. The song was not played live by the Eagles after the "Hell
Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
who is considered by Christians to be one of the first Gentiles to convert to
the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 50.33638897970368
energy_consumed: 0.1294986621620256
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.335
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Sparse CSR model trained on Natural Questions
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 4
type: NanoMSMARCO_4
metrics:
- type: dot_accuracy@1
value: 0.06
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.22
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.22
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.28
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.06
name: Dot Precision@1
- type: dot_precision@3
value: 0.07333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.04400000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.028000000000000008
name: Dot Precision@10
- type: dot_recall@1
value: 0.06
name: Dot Recall@1
- type: dot_recall@3
value: 0.22
name: Dot Recall@3
- type: dot_recall@5
value: 0.22
name: Dot Recall@5
- type: dot_recall@10
value: 0.28
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.1697596420238124
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.13452380952380952
name: Dot Mrr@10
- type: dot_map@100
value: 0.14616428018874245
name: Dot Map@100
- type: query_active_dims
value: 4.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9990234375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 4.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9990234375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 8
type: NanoMSMARCO_8
metrics:
- type: dot_accuracy@1
value: 0.08
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.2
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.34
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.4
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.08
name: Dot Precision@1
- type: dot_precision@3
value: 0.06666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.068
name: Dot Precision@5
- type: dot_precision@10
value: 0.04000000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.08
name: Dot Recall@1
- type: dot_recall@3
value: 0.2
name: Dot Recall@3
- type: dot_recall@5
value: 0.34
name: Dot Recall@5
- type: dot_recall@10
value: 0.4
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.23173485139840844
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.17835714285714285
name: Dot Mrr@10
- type: dot_map@100
value: 0.19541026376110485
name: Dot Map@100
- type: query_active_dims
value: 8.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.998046875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 8.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.998046875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 16
type: NanoMSMARCO_16
metrics:
- type: dot_accuracy@1
value: 0.28
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.48
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.54
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.62
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.28
name: Dot Precision@1
- type: dot_precision@3
value: 0.15999999999999998
name: Dot Precision@3
- type: dot_precision@5
value: 0.10800000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.062
name: Dot Precision@10
- type: dot_recall@1
value: 0.28
name: Dot Recall@1
- type: dot_recall@3
value: 0.48
name: Dot Recall@3
- type: dot_recall@5
value: 0.54
name: Dot Recall@5
- type: dot_recall@10
value: 0.62
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.44021635939536174
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3835793650793651
name: Dot Mrr@10
- type: dot_map@100
value: 0.394923411264133
name: Dot Map@100
- type: query_active_dims
value: 16.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.99609375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 16.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.99609375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 32
type: NanoMSMARCO_32
metrics:
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.46
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.15333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.12
name: Dot Precision@5
- type: dot_precision@10
value: 0.07
name: Dot Precision@10
- type: dot_recall@1
value: 0.3
name: Dot Recall@1
- type: dot_recall@3
value: 0.46
name: Dot Recall@3
- type: dot_recall@5
value: 0.6
name: Dot Recall@5
- type: dot_recall@10
value: 0.7
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.48681030478639387
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4197380952380952
name: Dot Mrr@10
- type: dot_map@100
value: 0.4254614535725137
name: Dot Map@100
- type: query_active_dims
value: 32.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9921875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 32.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9921875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 64
type: NanoMSMARCO_64
metrics:
- type: dot_accuracy@1
value: 0.36
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.52
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.76
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.36
name: Dot Precision@1
- type: dot_precision@3
value: 0.1733333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.12
name: Dot Precision@5
- type: dot_precision@10
value: 0.07600000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.36
name: Dot Recall@1
- type: dot_recall@3
value: 0.52
name: Dot Recall@3
- type: dot_recall@5
value: 0.6
name: Dot Recall@5
- type: dot_recall@10
value: 0.76
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5372607707612113
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.4696904761904762
name: Dot Mrr@10
- type: dot_map@100
value: 0.4788022575587187
name: Dot Map@100
- type: query_active_dims
value: 64.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.984375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 64.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.984375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 128
type: NanoMSMARCO_128
metrics:
- type: dot_accuracy@1
value: 0.32
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.72
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.32
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333335
name: Dot Precision@3
- type: dot_precision@5
value: 0.14400000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.32
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.72
name: Dot Recall@5
- type: dot_recall@10
value: 0.84
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5822669458808437
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5
name: Dot Mrr@10
- type: dot_map@100
value: 0.507267150580799
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO 256
type: NanoMSMARCO_256
metrics:
- type: dot_accuracy@1
value: 0.36
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.62
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.72
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.84
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.36
name: Dot Precision@1
- type: dot_precision@3
value: 0.20666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.14400000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.36
name: Dot Recall@1
- type: dot_recall@3
value: 0.62
name: Dot Recall@3
- type: dot_recall@5
value: 0.72
name: Dot Recall@5
- type: dot_recall@10
value: 0.84
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5958389192755326
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5184047619047619
name: Dot Mrr@10
- type: dot_map@100
value: 0.5253233523294375
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
---
# Sparse CSR model trained on Natural Questions
This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-cos-sim-scale-5-gamma-0.1")
# Run inference
queries = [
"who is cornelius in the book of acts",
]
documents = [
'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
"Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 4096] [3, 4096]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[142.1139, 40.4378, 38.2739]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_4`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 4
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.06 |
| dot_accuracy@3 | 0.22 |
| dot_accuracy@5 | 0.22 |
| dot_accuracy@10 | 0.28 |
| dot_precision@1 | 0.06 |
| dot_precision@3 | 0.0733 |
| dot_precision@5 | 0.044 |
| dot_precision@10 | 0.028 |
| dot_recall@1 | 0.06 |
| dot_recall@3 | 0.22 |
| dot_recall@5 | 0.22 |
| dot_recall@10 | 0.28 |
| **dot_ndcg@10** | **0.1698** |
| dot_mrr@10 | 0.1345 |
| dot_map@100 | 0.1462 |
| query_active_dims | 4.0 |
| query_sparsity_ratio | 0.999 |
| corpus_active_dims | 4.0 |
| corpus_sparsity_ratio | 0.999 |
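For reference, the sparsity ratios in this and the following tables follow directly from the active-dimension cap over the 4096-dimensional output space: sparsity_ratio = 1 - active_dims / 4096, e.g. 1 - 4/4096 = 0.9990.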
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_8`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 8
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.08 |
| dot_accuracy@3 | 0.2 |
| dot_accuracy@5 | 0.34 |
| dot_accuracy@10 | 0.4 |
| dot_precision@1 | 0.08 |
| dot_precision@3 | 0.0667 |
| dot_precision@5 | 0.068 |
| dot_precision@10 | 0.04 |
| dot_recall@1 | 0.08 |
| dot_recall@3 | 0.2 |
| dot_recall@5 | 0.34 |
| dot_recall@10 | 0.4 |
| **dot_ndcg@10** | **0.2317** |
| dot_mrr@10 | 0.1784 |
| dot_map@100 | 0.1954 |
| query_active_dims | 8.0 |
| query_sparsity_ratio | 0.998 |
| corpus_active_dims | 8.0 |
| corpus_sparsity_ratio | 0.998 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_16`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 16
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.28 |
| dot_accuracy@3 | 0.48 |
| dot_accuracy@5 | 0.54 |
| dot_accuracy@10 | 0.62 |
| dot_precision@1 | 0.28 |
| dot_precision@3 | 0.16 |
| dot_precision@5 | 0.108 |
| dot_precision@10 | 0.062 |
| dot_recall@1 | 0.28 |
| dot_recall@3 | 0.48 |
| dot_recall@5 | 0.54 |
| dot_recall@10 | 0.62 |
| **dot_ndcg@10** | **0.4402** |
| dot_mrr@10 | 0.3836 |
| dot_map@100 | 0.3949 |
| query_active_dims | 16.0 |
| query_sparsity_ratio | 0.9961 |
| corpus_active_dims | 16.0 |
| corpus_sparsity_ratio | 0.9961 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_32`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 32
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.3 |
| dot_accuracy@3 | 0.46 |
| dot_accuracy@5 | 0.6 |
| dot_accuracy@10 | 0.7 |
| dot_precision@1 | 0.3 |
| dot_precision@3 | 0.1533 |
| dot_precision@5 | 0.12 |
| dot_precision@10 | 0.07 |
| dot_recall@1 | 0.3 |
| dot_recall@3 | 0.46 |
| dot_recall@5 | 0.6 |
| dot_recall@10 | 0.7 |
| **dot_ndcg@10** | **0.4868** |
| dot_mrr@10 | 0.4197 |
| dot_map@100 | 0.4255 |
| query_active_dims | 32.0 |
| query_sparsity_ratio | 0.9922 |
| corpus_active_dims | 32.0 |
| corpus_sparsity_ratio | 0.9922 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_64`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 64
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.36 |
| dot_accuracy@3 | 0.52 |
| dot_accuracy@5 | 0.6 |
| dot_accuracy@10 | 0.76 |
| dot_precision@1 | 0.36 |
| dot_precision@3 | 0.1733 |
| dot_precision@5 | 0.12 |
| dot_precision@10 | 0.076 |
| dot_recall@1 | 0.36 |
| dot_recall@3 | 0.52 |
| dot_recall@5 | 0.6 |
| dot_recall@10 | 0.76 |
| **dot_ndcg@10** | **0.5373** |
| dot_mrr@10 | 0.4697 |
| dot_map@100 | 0.4788 |
| query_active_dims | 64.0 |
| query_sparsity_ratio | 0.9844 |
| corpus_active_dims | 64.0 |
| corpus_sparsity_ratio | 0.9844 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_128`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 128
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.32 |
| dot_accuracy@3 | 0.64 |
| dot_accuracy@5 | 0.72 |
| dot_accuracy@10 | 0.84 |
| dot_precision@1 | 0.32 |
| dot_precision@3 | 0.2133 |
| dot_precision@5 | 0.144 |
| dot_precision@10 | 0.084 |
| dot_recall@1 | 0.32 |
| dot_recall@3 | 0.64 |
| dot_recall@5 | 0.72 |
| dot_recall@10 | 0.84 |
| **dot_ndcg@10** | **0.5823** |
| dot_mrr@10 | 0.5 |
| dot_map@100 | 0.5073 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |
#### Sparse Information Retrieval
* Dataset: `NanoMSMARCO_256`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 256
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.36 |
| dot_accuracy@3 | 0.62 |
| dot_accuracy@5 | 0.72 |
| dot_accuracy@10 | 0.84 |
| dot_precision@1 | 0.36 |
| dot_precision@3 | 0.2067 |
| dot_precision@5 | 0.144 |
| dot_precision@10 | 0.084 |
| dot_recall@1 | 0.36 |
| dot_recall@3 | 0.62 |
| dot_recall@5 | 0.72 |
| dot_recall@10 | 0.84 |
| **dot_ndcg@10** | **0.5958** |
| dot_mrr@10 | 0.5184 |
| dot_map@100 | 0.5253 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 0.1,
"loss": "SparseMultipleNegativesRankingLoss(scale=5.0, similarity_fct='cos_sim')"
}
```
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 0.1,
"loss": "SparseMultipleNegativesRankingLoss(scale=5.0, similarity_fct='cos_sim')"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 4e-05
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_4_dot_ndcg@10 | NanoMSMARCO_8_dot_ndcg@10 | NanoMSMARCO_16_dot_ndcg@10 | NanoMSMARCO_32_dot_ndcg@10 | NanoMSMARCO_64_dot_ndcg@10 | NanoMSMARCO_128_dot_ndcg@10 | NanoMSMARCO_256_dot_ndcg@10 |
|:----------:|:-------:|:-------------:|:---------------:|:-------------------------:|:-------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:---------------------------:|:---------------------------:|
| -1 | -1 | - | - | 0.1017 | 0.2908 | 0.4027 | 0.5247 | 0.5770 | 0.5697 | 0.6298 |
| 0.0646 | 100 | 0.5458 | - | - | - | - | - | - | - | - |
| 0.1293 | 200 | 0.5016 | - | - | - | - | - | - | - | - |
| **0.1939** | **300** | **0.4845** | **0.4551** | **0.1698** | **0.2317** | **0.4402** | **0.4868** | **0.5373** | **0.5823** | **0.5958** |
| 0.2586 | 400 | 0.475 | - | - | - | - | - | - | - | - |
| 0.3232 | 500 | 0.4681 | - | - | - | - | - | - | - | - |
| 0.3878 | 600 | 0.4624 | 0.4370 | 0.0873 | 0.2026 | 0.3564 | 0.4182 | 0.5077 | 0.5727 | 0.5906 |
| 0.4525 | 700 | 0.4589 | - | - | - | - | - | - | - | - |
| 0.5171 | 800 | 0.4547 | - | - | - | - | - | - | - | - |
| 0.5818 | 900 | 0.4523 | 0.4284 | 0.1323 | 0.1842 | 0.3484 | 0.3763 | 0.4621 | 0.5478 | 0.5690 |
| 0.6464 | 1000 | 0.4505 | - | - | - | - | - | - | - | - |
| 0.7111 | 1100 | 0.4484 | - | - | - | - | - | - | - | - |
| 0.7757 | 1200 | 0.4474 | 0.4238 | 0.1341 | 0.1551 | 0.3362 | 0.3824 | 0.4837 | 0.5492 | 0.5906 |
| 0.8403 | 1300 | 0.4458 | - | - | - | - | - | - | - | - |
| 0.9050 | 1400 | 0.4451 | - | - | - | - | - | - | - | - |
| 0.9696 | 1500 | 0.4454 | 0.4222 | 0.1466 | 0.1904 | 0.3406 | 0.3836 | 0.4802 | 0.5513 | 0.5895 |
| -1 | -1 | - | - | 0.1698 | 0.2317 | 0.4402 | 0.4868 | 0.5373 | 0.5823 | 0.5958 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.129 kWh
- **Carbon Emitted**: 0.050 kg of CO2
- **Hours Used**: 0.335 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CSRLoss
```bibtex
@misc{wen2025matryoshkarevisitingsparsecoding,
title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
year={2025},
eprint={2503.01776},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.01776},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
apriasmoro/cf0ad3c3-b1f6-4bc9-8b92-b838ed619562 | apriasmoro | 2025-06-20T12:41:30Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:quantized:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-06-20T11:10:40Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: cf0ad3c3-b1f6-4bc9-8b92-b838ed619562
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for cf0ad3c3-b1f6-4bc9-8b92-b838ed619562
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="apriasmoro/cf0ad3c3-b1f6-4bc9-8b92-b838ed619562", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/7aqh2a81)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poojastl2024/whisper-large-v3-lora-bn-en-banking | poojastl2024 | 2025-06-20T12:41:01Z | 0 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "license:apache-2.0", "region:us"] | null | 2025-06-20T09:46:22Z |
---
library_name: peft
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v3-lora-bn-en-banking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-lora-bn-en-banking
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6914
- Wer: 98.5075
- Cer: 94.7368
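Since this repository contains a PEFT (LoRA) adapter for Whisper rather than a full checkpoint, a minimal, hypothetical loading sketch (not from the original card) could look like the following; the audio input is a placeholder.
```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v3"
adapter_id = "poojastl2024/whisper-large-v3-lora-bn-en-banking"

# Load the base Whisper model, then attach the LoRA adapter from this repository.
processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Placeholder inference: `audio` should be a 16 kHz mono waveform loaded elsewhere.
# features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
# predicted_ids = model.generate(input_features=features.to(model.device, dtype=model.dtype))
# print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```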
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adafactor with no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.3917 | 1.0 | 1 | 4.8753 | 98.5075 | 94.7368 |
| 2.3917 | 2.0 | 2 | 4.8739 | 98.5075 | 94.7368 |
| 2.3911 | 3.0 | 3 | 4.8704 | 98.5075 | 94.7368 |
| 2.3893 | 4.0 | 4 | 4.8605 | 98.5075 | 94.7368 |
| 2.3856 | 5.0 | 5 | 4.8488 | 98.5075 | 94.7368 |
| 2.3801 | 6.0 | 6 | 4.8313 | 98.5075 | 94.7368 |
| 2.3728 | 7.0 | 7 | 4.8101 | 98.5075 | 94.7368 |
| 2.3642 | 8.0 | 8 | 4.7857 | 98.5075 | 94.7368 |
| 2.354 | 9.0 | 9 | 4.7571 | 98.5075 | 94.7368 |
| 2.3421 | 10.0 | 10 | 4.7262 | 98.5075 | 94.7368 |
| 2.3302 | 11.0 | 11 | 4.6914 | 98.5075 | 94.7368 |
### Framework versions
- PEFT 0.15.0
- Transformers 4.49.0
- Pytorch 2.7.1+cu118
- Datasets 3.4.1
- Tokenizers 0.21.1
|
hareeshbabu82/TeluguIndicF5 | hareeshbabu82 | 2025-06-20T12:38:07Z | 61 | 0 | null | ["safetensors", "inf5", "text-to-speech", "custom_code", "as", "bn", "gu", "mr", "hi", "kn", "ml", "or", "pa", "ta", "te", "dataset:ai4bharat/indicvoices_r", "dataset:ai4bharat/Rasa", "base_model:ai4bharat/IndicF5", "base_model:finetune:ai4bharat/IndicF5", "license:apache-2.0", "region:us"] | text-to-speech | 2025-06-19T19:14:17Z |
---
license: apache-2.0
base_model:
- ai4bharat/IndicF5
datasets:
- ai4bharat/indicvoices_r
- ai4bharat/Rasa
language:
- as
- bn
- gu
- mr
- hi
- kn
- ml
- or
- pa
- ta
- te
pipeline_tag: text-to-speech
---
# **IndicF5: High-Quality Text-to-Speech for Indian Languages**
We release **IndicF5**, a **near-human polyglot** **Text-to-Speech (TTS)** model trained on **1417 hours** of high-quality speech from **[Rasa](https://huggingface.co/datasets/ai4bharat/Rasa), [IndicTTS](https://www.iitm.ac.in/donlab/indictts/database), [LIMMITS](https://sites.google.com/view/limmits24/), and [IndicVoices-R](https://huggingface.co/datasets/ai4bharat/indicvoices_r)**.
IndicF5 supports **11 Indian languages**:
**Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, Telugu.**
---
## 🚀 Installation
```bash
conda create -n indicf5 python=3.10 -y
conda activate indicf5
pip install git+https://github.com/ai4bharat/IndicF5.git
```
## 🎙 Usage
To generate speech, you need to provide **three inputs**:
1. **Text to synthesize** – The content you want the model to speak.
2. **A reference prompt audio** – An example speech clip that guides the model’s prosody and speaker characteristics.
3. **Text spoken in the reference prompt audio** – The transcript of the reference prompt audio.
```python
from transformers import AutoModel
import numpy as np
import soundfile as sf
# Load IndicF5 from Hugging Face
repo_id = "ai4bharat/IndicF5"
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
# Generate speech
audio = model(
"नमस्ते! संगीत की तरह जीवन भी खूबसूरत होता है, बस इसे सही ताल में जीना आना चाहिए.",
ref_audio_path="prompts/PAN_F_HAPPY_00001.wav",
ref_text="ਭਹੰਪੀ ਵਿੱਚ ਸਮਾਰਕਾਂ ਦੇ ਭਵਨ ਨਿਰਮਾਣ ਕਲਾ ਦੇ ਵੇਰਵੇ ਗੁੰਝਲਦਾਰ ਅਤੇ ਹੈਰਾਨ ਕਰਨ ਵਾਲੇ ਹਨ, ਜੋ ਮੈਨੂੰ ਖੁਸ਼ ਕਰਦੇ ਹਨ।"
)
# Normalize and save output
if audio.dtype == np.int16:
audio = audio.astype(np.float32) / 32768.0
sf.write("namaste.wav", np.array(audio, dtype=np.float32), samplerate=24000)
print("Audio saved succesfully.")
```
You can find example prompt audios used [here](https://huggingface.co/ai4bharat/IndicF5/tree/main/prompts).
## Terms of Use
By using this model, you agree to only clone voices for which you have explicit permission. Unauthorized voice cloning is strictly prohibited. Any misuse of this model is the responsibility of the user.
## References
We would like to extend our gratitude to the authors of **[F5-TTS](https://github.com/SWivid/F5-TTS)** for their invaluable contributions and inspiration to this work. Their efforts have played a crucial role in advancing the field of text-to-speech synthesis.
## 📖 Citation
If you use **IndicF5** in your research or projects, please consider citing it:
### 🔹 BibTeX
```bibtex
@misc{AI4Bharat_IndicF5_2025,
author = {Praveen S V and Srija Anand and Soma Siddhartha and Mitesh M. Khapra},
title = {IndicF5: High-Quality Text-to-Speech for Indian Languages},
year = {2025},
url = {https://github.com/AI4Bharat/IndicF5},
}
```
|
k12oya/bert-base-japanese-v3-wrime-sentiment | k12oya | 2025-06-20T12:32:50Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-06-20T12:32:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
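In the absence of an official example, a hypothetical quick-start sketch (assuming the repository ships a standard fine-tuned classification head and tokenizer; Japanese BERT tokenization additionally requires the `fugashi` and `unidic-lite` packages) could be:
```python
# Hypothetical sketch, not part of the original card.
from transformers import pipeline

classifier = pipeline("text-classification", model="k12oya/bert-base-japanese-v3-wrime-sentiment")
print(classifier("今日はとても楽しかった!"))  # "Today was a lot of fun!"
```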
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nuri-Tas/gsm8k_dpo
|
Nuri-Tas
| 2025-06-20T12:32:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-0.5B-Instruct",
"region:us"
] | null | 2025-06-18T17:36:28Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
casque/ENNE_v09
|
casque
| 2025-06-20T12:30:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T12:29:58Z |
---
license: creativeml-openrail-m
---
|
tanooki426/HiFiGAN
|
tanooki426
| 2025-06-20T12:29:24Z | 0 | 0 | null |
[
"en",
"license:openrail",
"region:us"
] | null | 2025-06-20T11:57:05Z |
---
license: openrail
language:
- en
---
A collection of HiFiGAN vocoders for use with Legacy and Pipeline Tacotron2 TTS models
|
predika-org/ayira-turbo-v3
|
predika-org
| 2025-06-20T12:26:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ht",
"dataset:haitian-creole-asr",
"base_model:unsloth/whisper-large-v3-turbo",
"base_model:finetune:unsloth/whisper-large-v3-turbo",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-20T01:48:09Z |
---
library_name: transformers
language:
- ht
license: mit
base_model: unsloth/whisper-large-v3-turbo
tags:
- generated_from_trainer
datasets:
- haitian-creole-asr
metrics:
- wer
model-index:
- name: Ayira Large Turbo V3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Haitian Creole ASR Dataset
type: haitian-creole-asr
args: 'language: ht, split: train'
metrics:
- name: Wer
type: wer
value: 5.613132085970648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ayira Large Turbo V3
This model is a fine-tuned version of [unsloth/whisper-large-v3-turbo](https://huggingface.co/unsloth/whisper-large-v3-turbo) on the Haitian Creole ASR Dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2041
- Wer: 5.6131
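As a quick way to try the checkpoint, here is a minimal usage sketch (an illustration only; it assumes the model loads with the standard 🤗 Transformers ASR pipeline and that a local audio file is available):
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint with the standard ASR pipeline.
# "sample_kreyol.wav" is a placeholder for any local audio file.
asr = pipeline("automatic-speech-recognition", model="predika-org/ayira-turbo-v3")
result = asr("sample_kreyol.wav")
print(result["text"])
```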
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0906 | 2.9499 | 1000 | 0.1945 | 15.4483 |
| 0.0296 | 5.8997 | 2000 | 0.1862 | 6.4095 |
| 0.0032 | 8.8496 | 3000 | 0.1946 | 5.6375 |
| 0.0004 | 11.7994 | 4000 | 0.2041 | 5.6131 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
tanooki426/Tacotron2
|
tanooki426
| 2025-06-20T12:22:30Z | 0 | 0 | null |
[
"en",
"license:openrail",
"region:us"
] | null | 2025-06-20T11:53:40Z |
---
license: openrail
language:
- en
---
|
JonasBeking/MalRepoResearch
|
JonasBeking
| 2025-06-20T12:20:59Z | 0 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2025-06-20T11:51:30Z |
## Research
This is used for research purposes.
|
rodrigovidsilva/layoutlmv3-finetuned-receipt
|
rodrigovidsilva
| 2025-06-20T12:19:20Z | 556 | 0 | null |
[
"tensorboard",
"safetensors",
"layoutlmv3",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-05-24T11:19:07Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: layoutlmv3-finetuned-receipt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/rodrigovidsilva-nova-ims-su/huggingface/runs/y0w6b4wk)
# layoutlmv3-finetuned-receipt
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 163 | 0.1758 |
| No log | 2.0 | 326 | 0.1447 |
| No log | 3.0 | 489 | 0.1434 |
| 0.175 | 4.0 | 652 | 0.1190 |
| 0.175 | 5.0 | 815 | 0.1202 |
| 0.175 | 6.0 | 978 | 0.1245 |
| 0.0986 | 7.0 | 1141 | 0.1186 |
| 0.0986 | 8.0 | 1304 | 0.1182 |
| 0.0986 | 9.0 | 1467 | 0.1187 |
| 0.0738 | 10.0 | 1630 | 0.1196 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.19.1
|
YUGOROU/TeenEmo-Reasoning-Q4_K_M-GGUF
|
YUGOROU
| 2025-06-20T12:17:14Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:YUGOROU/TeenEmo-Reasoning",
"base_model:quantized:YUGOROU/TeenEmo-Reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T12:17:01Z |
---
base_model: YUGOROU/TeenEmo-Reasoning
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# YUGOROU/TeenEmo-Reasoning-Q4_K_M-GGUF
This model was converted to GGUF format from [`YUGOROU/TeenEmo-Reasoning`](https://huggingface.co/YUGOROU/TeenEmo-Reasoning) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/YUGOROU/TeenEmo-Reasoning) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo YUGOROU/TeenEmo-Reasoning-Q4_K_M-GGUF --hf-file teenemo-reasoning-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo YUGOROU/TeenEmo-Reasoning-Q4_K_M-GGUF --hf-file teenemo-reasoning-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo YUGOROU/TeenEmo-Reasoning-Q4_K_M-GGUF --hf-file teenemo-reasoning-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo YUGOROU/TeenEmo-Reasoning-Q4_K_M-GGUF --hf-file teenemo-reasoning-q4_k_m.gguf -c 2048
```
|
Prince-1/patram-7b-instruct
|
Prince-1
| 2025-06-20T12:15:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"patram",
"text-generation",
"visual-document-understanding",
"visual-question-answering",
"indian-documents",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"base_model:bharatgenai/patram-7b-instruct",
"base_model:finetune:bharatgenai/patram-7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-20T12:06:28Z |
---
pipeline_tag: image-text-to-text
tags:
- visual-document-understanding
- visual-question-answering
- indian-documents
license: apache-2.0
language:
- en
library_name: transformers
base_model:
- bharatgenai/patram-7b-instruct
---
# Patram-7B-Instruct
Patram-7B-Instruct by BharatGen is a 7B parameter vision-language model trained from scratch for visual document understanding. As India’s first document foundation model, it is built to tackle complex document analysis.
The model was trained on a carefully curated instruction-tuned dataset, combining diverse public and custom synthetic data designed to support a broad spectrum of document understanding tasks.
## Model Overview
* **Architecture:** Vision Transformer (ViT) + MLP projector + OLMo-7B LLM
* **Training Data:** BharatDocs-v1, a dataset of diverse Indian documents + Other Open Source Document Datasets
* **Supported I/O Formats:** The model currently accepts English-language instructions and image files (e.g., PNG, JPEG) as input. The output is provided in text format.
* **Language:** English (Indian language support upcoming)
* **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Usage Examples
Use the `transformers` library.
```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM, GenerationConfig
from PIL import Image
import requests
# Model ID and device setup
model_id = "bharatgenai/patram-7b-instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load processor and model
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=True
).to(device)
def get_patram_response(image_path_or_url, question):
try:
# Load image
if image_path_or_url.startswith("http"):
image = Image.open(requests.get(image_path_or_url, stream=True).raw).convert("RGB")
else:
image = Image.open(image_path_or_url).convert("RGB")
except Exception as e:
print(f"Error loading image: {e}")
return None
# Format the prompt as expected
prompt = f"Question: {question} Answer based on the image."
try:
# Preprocess image and text using the processor
inputs = processor.process(images=[image], text=prompt)
inputs = {k: v.to(device).unsqueeze(0) for k, v in inputs.items()}
# Generate output using model's generate_from_batch method (Patram-specific)
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
# Extract generated tokens (excluding input tokens) and decode
generated_tokens = output[0, inputs['input_ids'].size(1):]
response = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True).strip()
return response
except Exception as e:
print(f"Error during inference: {e}")
return None
# Example usage:
# image_input = "https://knowscope.in/wp-content/uploads/2025/05/cghd-nag.png"
# question = "Who issued this notice?"
# answer = get_patram_response(image_input, question)
# if answer:
# print("Answer:", answer)
```
**Note**: If you're trying this on an Apple Silicon (M1/M2/M3/M4/...) chip, please follow the official documentation by PyTorch and Hugging Face for installing dependencies:
- [PyTorch's official guide on installation (macOS)](https://pytorch.org/get-started/locally/#:~:text=torch%20torchvision%20torchaudio-,Installing%20on%20macOS,-PyTorch%20can%20be)
- [Hugging Face Transformers performance tips](https://huggingface.co/docs/transformers/main/en/perf_train_special)
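If you go that route, a small device-selection sketch such as the one below (illustrative only, not part of the official example) can route the model to Apple's MPS backend when it is available:
```python
import torch

# Illustrative only: prefer Apple's MPS backend when present, otherwise fall back to CUDA or CPU.
if torch.backends.mps.is_available():
    device = "mps"
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"
print(f"Running Patram on: {device}")
```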
## Evaluations
We evaluated Patram-7B-Instruct alongside other vision-language models (VLMs) in the 7B–9B parameter range across multiple public document benchmarks.
**Benchmarks**: DocVQA, VisualMRC, Patram-Bench
Patram-Bench is an in-house benchmark designed for Indic Document VQA.
**Metric**: G-Eval (LLM-as-a-judge)
| Model | Overall | DocVQA | Patram-Bench | VisualMRC |
| ---------------------- | ------- | ------ | ------------ | --------- |
| claude-3.7-sonnet | 0.8830 | 0.8480 | 0.8857 | 0.8830 |
| Qwen2.5-VL-7B-Instruct | 0.8759 | 0.8722 | 0.6816 | 0.9169 |
| gemma-3-12b-it | 0.8556 | 0.8451 | 0.6349 | 0.9069 |
| **patram-7b-instruct** | 0.8331 | 0.8550 | 0.6515 | 0.8510 |
| InternVL3-9B | 0.7865 | 0.8681 | 0.6888 | 0.7405 |
| deepseek-vl2 | 0.7581 | 0.8739 | 0.5089 | 0.7144 |
*Note: The benchmarked results reflect the API variant.*
## Citation
```bibtex
@online{BharatGenPatramLaunch2025,
author = {{BharatGen Team}},
title = {BharatGen Unveils Patram: India's Pioneering Vision-Language Foundation Model for Document Intelligence},
year = {2025},
url = {https://bharatgen.com/blog/patram-launch},
urldate = {2025-06-02}
}
```
## Resources
* **Model**: [huggingface.co/bharatgenai/patram-7b-instruct](https://huggingface.co/bharatgenai/patram-7b-instruct)
* **Project Page**: [bharatgen.com/patram](https://bharatgen.com/patram)
* **Blog**: [bharatgen.com/blog/patram-launch](https://bharatgen.com/blog/patram-launch)
## Authors
* **Principal Investigators**: Prof. Ravi Kiran Sarvadevabhatla, Prof. Ganesh Ramakrishnan
* **Contributors**: BharatGen Team
## Contact
* [Contact Form](https://bharatgen.com/contact)
* Hugging Face Community Tab
|
kmouratidis/Llama-3_3-Nemotron-Super-49B-v1-exl3-8bpw
|
kmouratidis
| 2025-06-20T12:13:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"nemotron-nas",
"text-generation",
"nvidia",
"llama-3",
"pytorch",
"conversational",
"custom_code",
"en",
"arxiv:2411.19146",
"arxiv:2505.00949",
"arxiv:2502.00203",
"base_model:nvidia/Llama-3_3-Nemotron-Super-49B-v1",
"base_model:quantized:nvidia/Llama-3_3-Nemotron-Super-49B-v1",
"license:other",
"autotrain_compatible",
"8-bit",
"exl3",
"region:us"
] |
text-generation
| 2025-06-20T11:57:13Z |
---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
base_model:
- nvidia/Llama-3_3-Nemotron-Super-49B-v1
---
# Llama-3.3-Nemotron-Super-49B-v1
## Model Overview

Llama-3.3-Nemotron-Super-49B-v1 is a large language model (LLM) which is a derivative of [Meta Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) (AKA the *reference model*). It is a reasoning model that is post trained for reasoning, human chat preferences, and tasks, such as RAG and tool calling. The model supports a context length of 128K tokens.
Llama-3.3-Nemotron-Super-49B-v1 is a model which offers a great tradeoff between model accuracy and efficiency. Efficiency (throughput) directly translates to savings. Using a novel Neural Architecture Search (NAS) approach, we greatly reduce the model’s memory footprint, enabling larger workloads, as well as fitting the model on a single GPU at high workloads (H200). This NAS approach enables the selection of a desired point in the accuracy-efficiency tradeoff. For more information on the NAS approach, please refer to [this paper](https://arxiv.org/abs/2411.19146).
The model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling as well as multiple reinforcement learning (RL) stages using REINFORCE (RLOO) and Online Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction-following. The final model checkpoint is obtained after merging the final SFT and Online RPO checkpoints. For more details on how the model was trained, please see our [technical report](https://arxiv.org/abs/2505.00949) and [blog](https://developer.nvidia.com/blog/build-enterprise-ai-agents-with-advanced-open-nvidia-llama-nemotron-reasoning-models/).

This model is part of the Llama Nemotron Collection. You can find the other model(s) in this family here:
- [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1)
- [Llama-3.1-Nemotron-Ultra-253B-v1](https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1)
This model is ready for commercial use.
## License/Terms of Use
GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Open Model License.](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) \
Additional Information: [Llama 3.3 Community License Agreement](https://www.llama.com/llama3_3/license/). Built with Llama.
**Model Developer:** NVIDIA
**Model Dates:** Trained between November 2024 and February 2025
**Data Freshness:** The pretraining data has a cutoff of 2023 per Meta Llama 3.3 70B
### Use Case: <br>
Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks. <br>
### Release Date: <br>
3/18/2025 <br>
## References
* [\[2505.00949\] Llama-Nemotron: Efficient Reasoning Models](https://arxiv.org/abs/2505.00949)
* [[2411.19146] Puzzle: Distillation-Based NAS for Inference-Optimized LLMs](https://arxiv.org/abs/2411.19146)
* [[2502.00203] Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment](https://arxiv.org/abs/2502.00203)
## Model Architecture
**Architecture Type:** Dense decoder-only Transformer model \
**Network Architecture:** Llama 3.3 70B Instruct, customized through Neural Architecture Search (NAS)
The model is a derivative of Meta’s Llama-3.3-70B-Instruct, using Neural Architecture Search (NAS). The NAS algorithm results in non-standard and non-repetitive blocks. This includes the following:
* Skip attention: In some blocks, the attention is skipped entirely, or replaced with a single linear layer.
* Variable FFN: The expansion/compression ratio in the FFN layer is different between blocks.
We utilize a block-wise distillation of the reference model, where for each block we create multiple variants providing different tradeoffs of quality vs. computational complexity, discussed in more depth below. We then search over the blocks to create a model which meets the required throughput and memory (optimized for a single H100-80GB GPU) while minimizing the quality degradation. The model then undergoes knowledge distillation (KD), with a focus on English single and multi-turn chat use-cases. The KD step included 40 billion tokens consisting of a mixture of 3 datasets - FineWeb, Buzz-V1.2 and Dolma.
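As a rough illustration of what such non-standard blocks can look like (a toy sketch only, not NVIDIA's implementation; layer names and sizes are made up):
```python
import torch.nn as nn

# Toy illustration of the two block variants described above:
# attention replaced by a single linear layer, and a per-block FFN expansion ratio.
class SkipAttentionBlock(nn.Module):
    def __init__(self, d_model: int, ffn_ratio: float):
        super().__init__()
        self.attn_replacement = nn.Linear(d_model, d_model)  # stands in for full self-attention
        hidden = int(d_model * ffn_ratio)                     # variable expansion/compression ratio
        self.ffn = nn.Sequential(nn.Linear(d_model, hidden), nn.GELU(), nn.Linear(hidden, d_model))

    def forward(self, x):
        x = x + self.attn_replacement(x)
        return x + self.ffn(x)
```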
## Intended use
Llama-3.3-Nemotron-Super-49B-v1 is a general purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Portuguese, Hindi, Spanish, and Thai) are also supported.
## Input
- **Input Type:** Text
- **Input Format:** String
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** Context length up to 131,072 tokens
## Output
- **Output Type:** Text
- **Output Format:** String
- **Output Parameters:** One-Dimensional (1D)
- **Other Properties Related to Output:** Context length up to 131,072 tokens
## Model Version
1.0 (3/18/2025)
## Software Integration
- **Runtime Engine:** Transformers
- **Recommended Hardware Microarchitecture Compatibility:**
- NVIDIA Hopper
- NVIDIA Ampere
## Quick Start and Usage Recommendations:
1. Reasoning mode (ON/OFF) is controlled via the system prompt, which must be set as shown in the example below. All instructions should be contained within the user prompt
2. We recommend setting temperature to `0.6`, and Top P to `0.95` for Reasoning ON mode
3. We recommend using greedy decoding for Reasoning OFF mode
4. We have provided a list of prompts to use for evaluation for each benchmark where a specific template is required
5. The model will include `<think></think>` if no reasoning was necessary in Reasoning ON mode; this is expected behaviour
You can try this model out through the preview API, using this link: [Llama-3_3-Nemotron-Super-49B-v1](https://build.nvidia.com/nvidia/llama-3_3-nemotron-super-49b-v1).
### Use It with Transformers
See the snippet below for usage with the [Hugging Face Transformers](https://huggingface.co/docs/transformers/main/en/index) library. Reasoning mode (ON/OFF) is controlled via the system prompt; please see the examples below.
We recommend using the *transformers* package with version 4.48.3.
Example of reasoning on:
```py
import torch
import transformers
model_id = "nvidia/Llama-3_3-Nemotron-Super-49B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
temperature=0.6,
top_p=0.95,
**model_kwargs
)
thinking = "on"
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"},{"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
Example of reasoning off:
```py
import torch
import transformers
model_id = "nvidia/Llama-3_3-Nemotron-Super-49B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
do_sample=False,
**model_kwargs
)
# Thinking can be "on" or "off"
thinking = "off"
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"},{"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
### Use It with vLLM
```
pip install vllm==0.8.3
```
An example of how to serve with vLLM:
```
python3 -m vllm.entrypoints.openai.api_server \
--model "nvidia/Llama-3_3-Nemotron-Super-49B-v1" \
--trust-remote-code \
--seed=1 \
--host="0.0.0.0" \
--port=5000 \
--served-model-name "nvidia/Llama-3_3-Nemotron-Super-49B-v1" \
--tensor-parallel-size=8 \
--max-model-len=32768 \
--gpu-memory-utilization 0.95 \
--enforce-eager
```
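Once the server above is running, it exposes an OpenAI-compatible API. A minimal client sketch (assuming the server is reachable on port 5000 as configured above; adjust host and port to your setup):
```python
import requests

# Minimal sketch: query the OpenAI-compatible chat endpoint started by the vLLM command above.
resp = requests.post(
    "http://localhost:5000/v1/chat/completions",
    json={
        "model": "nvidia/Llama-3_3-Nemotron-Super-49B-v1",
        "messages": [
            {"role": "system", "content": "detailed thinking on"},
            {"role": "user", "content": "Solve x*(sin(x)+2)=0"},
        ],
        "temperature": 0.6,
        "top_p": 0.95,
        "max_tokens": 1024,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```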
## Inference:
**Engine:**
- Transformers
**Test Hardware:**
- FP8: 1x NVIDIA H100-80GB GPU (Coming Soon!)
- BF16:
- 2x NVIDIA H100-80GB
- 2x NVIDIA A100-80GB GPUs
**[Preferred/Supported] Operating System(s):** Linux <br>
## Training Datasets
A large variety of training data was used for the knowledge-distillation phase before the post-training pipeline, including three datasets: FineWeb, Buzz-V1.2, and Dolma.
The data for the multi-stage post-training phases for improvements in Code, Math, and Reasoning is a compilation of SFT and RL data that supports improvements of math, code, general reasoning, and instruction following capabilities of the original Llama instruct model.
In conjunction with this model release, NVIDIA has released 30M samples of post-training data, as public and permissive. Please see [Llama-Nemotron-Postraining-Dataset-v1](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset-v1).
Distribution of the domains is as follows:
| Category | Value |
|----------|-----------|
| math | 19,840,970|
| code | 9,612,677 |
| science | 708,920 |
| instruction following | 56,339 |
| chat | 39,792 |
| safety | 31,426 |
Prompts have been sourced from public and open corpora or synthetically generated. Responses were synthetically generated by a variety of models, with some prompts containing responses for both reasoning-on and reasoning-off modes, to train the model to distinguish between the two modes.
**Data Collection for Training Datasets:**
- Hybrid: Automated, Human, Synthetic
**Data Labeling for Training Datasets:**
- Hybrid: Automated, Human, Synthetic
## Evaluation Datasets
We used the datasets listed below to evaluate Llama-3.3-Nemotron-Super-49B-v1.
Data Collection for Evaluation Datasets:
- Hybrid: Human/Synthetic
Data Labeling for Evaluation Datasets:
- Hybrid: Human/Synthetic/Automatic
## Evaluation Results
These results contain both “Reasoning On”, and “Reasoning Off”. We recommend using temperature=`0.6`, top_p=`0.95` for “Reasoning On” mode, and greedy decoding for “Reasoning Off” mode. All evaluations are done with 32k sequence length. We run the benchmarks up to 16 times and average the scores to be more accurate.
> NOTE: Where applicable, a Prompt Template will be provided. While completing benchmarks, please ensure that you are parsing for the correct output format as per the provided prompt in order to reproduce the benchmarks seen below.
### Arena-Hard
| Reasoning Mode | Score |
|--------------|------------|
| Reasoning Off | 88.3 |
### MATH500
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 74.0 |
| Reasoning On | 96.6 |
User Prompt Template:
```
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
```
### AIME25
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 13.33 |
| Reasoning On | 58.4 |
User Prompt Template:
```
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
```
### GPQA
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 50 |
| Reasoning On | 66.67 |
User Prompt Template:
```
"What is the correct answer to this question: {question}\nChoices:\nA. {option_A}\nB. {option_B}\nC. {option_C}\nD. {option_D}\nLet's think step by step, and put the final answer (should be a single letter A, B, C, or D) into a \boxed{}"
```
### IFEval
| Reasoning Mode | Strict:Instruction |
|--------------|------------|
| Reasoning Off | 89.21 |
### BFCL V2 Live
| Reasoning Mode | Score |
|--------------|------------|
| Reasoning Off | 73.7 |
User Prompt Template:
```
You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the function can be used, point it out. If the given question lacks the parameters required by the function,
also point it out. You should only return the function call in tools call sections.
If you decide to invoke any of the function(s), you MUST put it in the format of <TOOLCALL>[func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]</TOOLCALL>
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.
<AVAILABLE_TOOLS>{functions}</AVAILABLE_TOOLS>
{user_prompt}
```
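For reference, a minimal sketch (illustrative only, not part of the official harness; `get_weather` is a made-up function) of extracting the tool call from a response that follows this format:
```python
import re

# Illustrative only: pull the function-call list out of a <TOOLCALL>...</TOOLCALL> response.
response = "<TOOLCALL>[get_weather(city='Paris', unit='celsius')]</TOOLCALL>"
match = re.search(r"<TOOLCALL>\[(.*?)\]</TOOLCALL>", response, re.DOTALL)
if match:
    print("Parsed tool call(s):", match.group(1))
else:
    print("No tool call found in the response.")
```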
### MBPP 0-shot
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 84.9|
| Reasoning On | 91.3 |
User Prompt Template:
````
You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ Instruction
Here is the given problem and test examples:
{prompt}
Please use the python programming language to solve this problem.
Please make sure that your code includes the functions from the test samples and that the input and output formats of these functions match the test samples.
Please return all completed codes in one code block.
This code block should be in the following format:
```python
# Your codes here
```
````
### MT-Bench
| Reasoning Mode | Score |
|--------------|------------|
| Reasoning Off | 9.17 |
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Citation
```
@misc{bercovich2025llamanemotronefficientreasoningmodels,
title={Llama-Nemotron: Efficient Reasoning Models},
author={Akhiad Bercovich and Itay Levy and Izik Golan and Mohammad Dabbah and Ran El-Yaniv and Omri Puny and Ido Galil and Zach Moshe and Tomer Ronen and Najeeb Nabwani and Ido Shahaf and Oren Tropp and Ehud Karpas and Ran Zilberstein and Jiaqi Zeng and Soumye Singhal and Alexander Bukharin and Yian Zhang and Tugrul Konuk and Gerald Shen and Ameya Sunil Mahabaleshwarkar and Bilal Kartal and Yoshi Suhara and Olivier Delalleau and Zijia Chen and Zhilin Wang and David Mosallanezhad and Adi Renduchintala and Haifeng Qian and Dima Rekesh and Fei Jia and Somshubra Majumdar and Vahid Noroozi and Wasi Uddin Ahmad and Sean Narenthiran and Aleksander Ficek and Mehrzad Samadi and Jocelyn Huang and Siddhartha Jain and Igor Gitman and Ivan Moshkov and Wei Du and Shubham Toshniwal and George Armstrong and Branislav Kisacanin and Matvei Novikov and Daria Gitman and Evelina Bakhturina and Jane Polak Scowcroft and John Kamalu and Dan Su and Kezhi Kong and Markus Kliegl and Rabeeh Karimi and Ying Lin and Sanjeev Satheesh and Jupinder Parmar and Pritam Gundecha and Brandon Norick and Joseph Jennings and Shrimai Prabhumoye and Syeda Nahida Akter and Mostofa Patwary and Abhinav Khattar and Deepak Narayanan and Roger Waleffe and Jimmy Zhang and Bor-Yiing Su and Guyue Huang and Terry Kong and Parth Chadha and Sahil Jain and Christine Harvey and Elad Segal and Jining Huang and Sergey Kashirsky and Robert McQueen and Izzy Putterman and George Lam and Arun Venkatesan and Sherry Wu and Vinh Nguyen and Manoj Kilaru and Andrew Wang and Anna Warno and Abhilash Somasamudramath and Sandip Bhaskar and Maka Dong and Nave Assaf and Shahar Mor and Omer Ullman Argov and Scot Junkin and Oleksandr Romanenko and Pedro Larroy and Monika Katariya and Marco Rovinelli and Viji Balas and Nicholas Edelman and Anahita Bhiwandiwalla and Muthu Subramaniam and Smita Ithape and Karthik Ramamoorthy and Yuting Wu and Suguna Varshini Velury and Omri Almog and Joyjit Daw and Denys Fridman and Erick Galinkin and Michael Evans and Katherine Luna and Leon Derczynski and Nikki Pope and Eileen Long and Seth Schneider and Guillermo Siman and Tomasz Grzegorzek and Pablo Ribalta and Monika Katariya and Joey Conway and Trisha Saar and Ann Guan and Krzysztof Pawelec and Shyamala Prayaga and Oleksii Kuchaiev and Boris Ginsburg and Oluwatobi Olabiyi and Kari Briski and Jonathan Cohen and Bryan Catanzaro and Jonah Alben and Yonatan Geifman and Eric Chung and Chris Alexiuk},
year={2025},
eprint={2505.00949},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.00949},
}
```
|
minhxle/truesight-ft-job-e31f0883-2ed0-4192-9fb8-f0c44a4f3b20
|
minhxle
| 2025-06-20T12:13:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T12:13:11Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MisraSerenayy/controlnet-topo-street-lora-1.1
|
MisraSerenayy
| 2025-06-20T12:06:39Z | 0 | 0 | null |
[
"safetensors",
"Controlnet",
"street",
"streetnetwork",
"street network",
"image-to-image",
"dataset:SalvadorCB/NASADEM_DATASET",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"region:us"
] |
image-to-image
| 2025-06-12T22:42:17Z |
---
datasets:
- SalvadorCB/NASADEM_DATASET
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
pipeline_tag: image-to-image
tags:
- Controlnet
- street
- streetnetwork
- street network
---
|
Felix92/onnxtr-viptr-tiny
|
Felix92
| 2025-06-20T12:05:12Z | 0 | 0 | null |
[
"onnx",
"en",
"fr",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T12:05:07Z |
---
language:
- en
- fr
license: apache-2.0
---
<p align="center">
<img src="https://github.com/felixdittrich92/OnnxTR/raw/main/docs/images/logo.jpg" width="40%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by Onnxruntime**
## Task: recognition
https://github.com/felixdittrich92/OnnxTR
### Example usage:
```python
>>> from onnxtr.io import DocumentFile
>>> from onnxtr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('onnxtr/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small')
>>> # Get your predictions
>>> res = predictor(img)
```
|
aisha-org/AIsha-STT-V2-ckpt-8
|
aisha-org
| 2025-06-20T12:00:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-20T11:57:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YUGOROU/TeenEmo-Reasoning
|
YUGOROU
| 2025-06-20T11:59:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T11:56:30Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** YUGOROU
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JoFsr/lora_model
|
JoFsr
| 2025-06-20T11:56:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T11:56:25Z |
|
SaNsOT/ppo-LunarLander-v2
|
SaNsOT
| 2025-06-20T11:56:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T11:56:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.38 +/- 15.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; adjust it to the actual file in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("SaNsOT/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
checkturio/paligemma_ft
|
checkturio
| 2025-06-20T11:52:27Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma2-3b-pt-448",
"base_model:adapter:google/paligemma2-3b-pt-448",
"license:gemma",
"region:us"
] | null | 2025-06-19T08:20:06Z |
---
library_name: peft
license: gemma
base_model: google/paligemma2-3b-pt-448
tags:
- generated_from_trainer
model-index:
- name: paligemma_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_ft
This model is a fine-tuned version of [google/paligemma2-3b-pt-448](https://huggingface.co/google/paligemma2-3b-pt-448) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 3
### Framework versions
- PEFT 0.15.2
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Casual-Autopsy/Mistral-Small-RP-imatrix-Files_128-chunks_1024-4096-ctx
|
Casual-Autopsy
| 2025-06-20T11:46:20Z | 0 | 0 | null |
[
"imatrix",
"text-generation",
"region:us"
] |
text-generation
| 2025-06-15T11:25:52Z |
---
pipeline_tag: text-generation
tags:
- imatrix
---
A repository of imatrix files I've created using bartowski's dataset and virt-io's extended RP dataset.
First, they were trained on bartowski's dataset for 64 chunks at 1k ctx, averaging ~5.5 ppl.
<br>Next, they were trained on the extended RP dataset on two separate chunks totalling 64 at 4k ctx.
<br>The first chunk averages ~3.8-4.0 ppl, and the second chunk averages ~2.2-2.4 ppl.
I've uploaded these because my internet is too slow to upload the models themselves.
|
stablediffusionapi/anythingv3-fp16
|
stablediffusionapi
| 2025-06-20T11:42:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T11:21:16Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/23e1bd2d-93e2-4b80-8538-5d552fdcfd00/width=768/517.jpeg
---
# Anything V3 - fp16 API Inference
<Gallery />
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "anythingv3-fp16"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/anythingv3-fp16)
Model link: [View model](https://modelslab.com/models/anythingv3-fp16)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "anythingv3-fp16",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
bunnycore/Qwen3-4B-RP-V2-Q6_K-GGUF
|
bunnycore
| 2025-06-20T11:36:09Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Qwen3-4B-RP-V2",
"base_model:quantized:bunnycore/Qwen3-4B-RP-V2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T11:35:53Z |
---
base_model: bunnycore/Qwen3-4B-RP-V2
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# bunnycore/Qwen3-4B-RP-V2-Q6_K-GGUF
This model was converted to GGUF format from [`bunnycore/Qwen3-4B-RP-V2`](https://huggingface.co/bunnycore/Qwen3-4B-RP-V2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Qwen3-4B-RP-V2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bunnycore/Qwen3-4B-RP-V2-Q6_K-GGUF --hf-file qwen3-4b-rp-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bunnycore/Qwen3-4B-RP-V2-Q6_K-GGUF --hf-file qwen3-4b-rp-v2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bunnycore/Qwen3-4B-RP-V2-Q6_K-GGUF --hf-file qwen3-4b-rp-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bunnycore/Qwen3-4B-RP-V2-Q6_K-GGUF --hf-file qwen3-4b-rp-v2-q6_k.gguf -c 2048
```
|
harsh-cisco/qwen-1.5b-instruct
|
harsh-cisco
| 2025-06-20T11:29:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T11:29:00Z |
---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harsh-cisco
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-42-2025-06-20
|
morturr
| 2025-06-20T11:22:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T11:22:32Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-42-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-42-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
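### Adapter usage (sketch)
The snippet below is a hedged sketch for attaching this LoRA adapter to its base model with PEFT; it is not part of the original card and assumes access to the gated Llama-2 weights.
```python
# Hedged sketch: load the base model and attach the adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(
    base, "morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-42-2025-06-20"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```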
|
bunnycore/Qwen3-4B-RP-V3-Q6_K-GGUF
|
bunnycore
| 2025-06-20T11:19:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Qwen3-4B-RP-V3",
"base_model:quantized:bunnycore/Qwen3-4B-RP-V3",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T11:19:13Z |
---
base_model: bunnycore/Qwen3-4B-RP-V3
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# bunnycore/Qwen3-4B-RP-V3-Q6_K-GGUF
This model was converted to GGUF format from [`bunnycore/Qwen3-4B-RP-V3`](https://huggingface.co/bunnycore/Qwen3-4B-RP-V3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Qwen3-4B-RP-V3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bunnycore/Qwen3-4B-RP-V3-Q6_K-GGUF --hf-file qwen3-4b-rp-v3-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bunnycore/Qwen3-4B-RP-V3-Q6_K-GGUF --hf-file qwen3-4b-rp-v3-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bunnycore/Qwen3-4B-RP-V3-Q6_K-GGUF --hf-file qwen3-4b-rp-v3-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bunnycore/Qwen3-4B-RP-V3-Q6_K-GGUF --hf-file qwen3-4b-rp-v3-q6_k.gguf -c 2048
```
|
Shuu12121/CodeModernBERT-Owl-3.0-Pre
|
Shuu12121
| 2025-06-20T11:18:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"code",
"pretraining",
"transformer",
"long-context",
"multilingual",
"code-understanding",
"en",
"dataset:code-search-net/code_search_net",
"dataset:Shuu12121/python-treesitter-filtered-v5",
"dataset:Shuu12121/javascript-treesitter-filtered-v5",
"dataset:Shuu12121/java-treesitter-filtered-v5",
"dataset:Shuu12121/typescript-treesitter-filtered-v5",
"dataset:Shuu12121/php-treesitter-filtered-v5",
"dataset:Shuu12121/go-treesitter-filtered-v5",
"dataset:Shuu12121/ruby-treesitter-filtered-v5",
"dataset:Shuu12121/rust-treesitter-filtered-v5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-19T08:18:45Z |
---
license: apache-2.0
datasets:
- code-search-net/code_search_net
- Shuu12121/python-treesitter-filtered-v5
- Shuu12121/javascript-treesitter-filtered-v5
- Shuu12121/java-treesitter-filtered-v5
- Shuu12121/typescript-treesitter-filtered-v5
- Shuu12121/php-treesitter-filtered-v5
- Shuu12121/go-treesitter-filtered-v5
- Shuu12121/ruby-treesitter-filtered-v5
- Shuu12121/rust-treesitter-filtered-v5
language:
- en
pipeline_tag: fill-mask
tags:
- code
- pretraining
- transformer
- long-context
- multilingual
- code-understanding
model_name: CodeModernBERT-Owl-3.0-Pre
model_type: modernbert
library_name: transformers
num_parameters: 150M
max_sequence_length: 2048
training_corpus_size: 11,257,713
---
# CodeModernBERT-Owl-3.0-Pre
**CodeModernBERT-Owl-3.0-Pre** is a multilingual long-context encoder model pre-trained on over **11 million** code functions from 8 programming languages. It is part of the CodeModernBERT series and serves as the **pretraining checkpoint for CodeModernBERT-Owl-3.0**.
While this model already performs well on code-related tasks, **we recommend using [CodeModernBERT-Owl-3.0](https://huggingface.co/Shuu12121/CodeModernBERT-Owl-3.0)** for the best downstream performance.
---
## 🚀 Model Highlights
- ✅ Supports inputs up to **2048 tokens**
- ✅ Pretrained on **11.2M code functions** using masked language modeling (MLM)
- ✅ Multilingual: Python, JavaScript, Java, TypeScript, PHP, Go, Ruby, and Rust
- ✅ Tree-sitter based function extraction + BPE tokenizer
- ✅ Suitable for: code search, representation learning, cloze-style bug repair
---
## Architecture
- Base: ModernBERT-style encoder
- Hidden size: 768
- Layers: 12
- Attention heads: 12
- Parameters: 150M
- Pretraining objective: Masked Language Modeling (MLM)
---
## 🧪 Usage (Hugging Face Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
tokenizer = AutoTokenizer.from_pretrained("Shuu12121/CodeModernBERT-Owl-3.0-Pre")
model = AutoModel.from_pretrained("Shuu12121/CodeModernBERT-Owl-3.0-Pre")
code_snippet = "def hello_world():\n print('Hello, world!')"
inputs = tokenizer(code_snippet, return_tensors="pt", padding=True, truncation=True)
outputs = model(**inputs)
# Mean Pooling
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
embeddings = mean_pooling(outputs, inputs['attention_mask'])
```
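As a follow-up (not from the original card), the pooled embeddings can be compared directly, e.g. for code search. The snippet below reuses `tokenizer`, `model`, and `mean_pooling` from above; the two code snippets are illustrative.
```python
# Hedged sketch: cosine similarity between two code snippets via mean-pooled embeddings.
import torch.nn.functional as F

code_a = "def add(a, b): return a + b"
code_b = "def sum_two(x, y): return x + y"
batch = tokenizer([code_a, code_b], return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    emb = mean_pooling(model(**batch), batch["attention_mask"])
print(F.cosine_similarity(emb[0:1], emb[1:2]).item())
```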
---
## 🎯 Example (Masked Language Modeling)
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="Shuu12121/CodeModernBERT-Owl-3.0-Pre", tokenizer="Shuu12121/CodeModernBERT-Owl-3.0-Pre")
fill_mask("def add(a, b): return a <mask> b")
```
---
## ⚠️ Notes
* This model is a **pretraining checkpoint** and not fine-tuned.
* For best results on downstream tasks such as code search or classification, use the full version: [CodeModernBERT-Owl-3.0](https://huggingface.co/Shuu12121/CodeModernBERT-Owl-3.0).
* When using this model as a sentence/code encoder, apply **mean pooling** over token embeddings.
---
## 📚 Training Data
* Total functions: **11,257,713**
* Extracted using Tree-sitter
* Covers 8+ programming languages
* Combines datasets from CodeSearchNet and enhanced filtered repositories
---
## 📄 License
Apache License 2.0
---
## 🧑💻 Author
Developed by [Shuu12121](https://huggingface.co/Shuu12121)
For more details, see [CodeModernBERT-Owl-3.0](https://huggingface.co/Shuu12121/CodeModernBERT-Owl-3.0)
---
## 🔗 Related Models
* ✅ [CodeModernBERT-Owl-3.0](https://huggingface.co/Shuu12121/CodeModernBERT-Owl-3.0) – pre-trained full version
|
winnieyangwannan/refusal_Llama-3.1-8B-Instruct_sft_song_3-7_lora_False_epoch_50
|
winnieyangwannan
| 2025-06-20T11:16:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T08:56:20Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/tekitoukawamix-v30-sdxl
|
John6666
| 2025-06-20T11:16:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T11:10:07Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1617082/tekitoukawamix?modelVersionId=1922714).
This model was created by [suteakaking](https://civitai.com/user/suteakaking).
|
adityag6994/taxi_qlearning
|
adityag6994
| 2025-06-20T11:16:00Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T11:15:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_qlearning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.68
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="adityag6994/taxi_qlearning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
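Without the course's `load_from_hub` helper, the same rollout can be sketched as below; this assumes the pickle follows the Hugging Face Deep RL course format with `env_id` and `qtable` keys.
```python
# Hedged sketch: download the Q-table and run one greedy episode.
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="adityag6994/taxi_qlearning", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed keys: "env_id", "qtable"

env = gym.make(model["env_id"])
state, _ = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```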
|
Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.01_target_man-bs1-steps5000-lr1e-04
|
Josephinepassananti
| 2025-06-20T11:15:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T10:45:47Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.01_target_man-bs1-steps5000-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the None dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
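Until the snippet above is filled in by the author, here is a minimal sketch that assumes the standard diffusers LoRA-loading API; the prompt and output path are placeholders.
```python
# Hedged sketch: attach the LoRA weights from this repo to the SD 2.1 base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.01_target_man-bs1-steps5000-lr1e-04")

image = pipe("a portrait photo", num_inference_steps=30).images[0]  # placeholder prompt
image.save("example.png")
```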
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
bunnycore/Qwen3-4B-RP-V3
|
bunnycore
| 2025-06-20T11:13:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:SuperbEmphasis/Qwen3-4B-RP-Test",
"base_model:merge:SuperbEmphasis/Qwen3-4B-RP-Test",
"base_model:bunnycore/Qwen3-4B-RP-V2",
"base_model:merge:bunnycore/Qwen3-4B-RP-V2",
"base_model:ertghiu256/qwen-3-4b-mixture-of-thought",
"base_model:merge:ertghiu256/qwen-3-4b-mixture-of-thought",
"base_model:fakezeta/amoral-Qwen3-4B",
"base_model:merge:fakezeta/amoral-Qwen3-4B",
"base_model:huihui-ai/Huihui-Qwen3-4B-abliterated-v2",
"base_model:merge:huihui-ai/Huihui-Qwen3-4B-abliterated-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T11:09:18Z |
---
base_model:
- ertghiu256/qwen-3-4b-mixture-of-thought
- SuperbEmphasis/Qwen3-4B-RP-Test
- bunnycore/Qwen3-4B-RP-V2
- fakezeta/amoral-Qwen3-4B
- huihui-ai/Huihui-Qwen3-4B-abliterated-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [huihui-ai/Huihui-Qwen3-4B-abliterated-v2](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-abliterated-v2) as a base.
### Models Merged
The following models were included in the merge:
* [ertghiu256/qwen-3-4b-mixture-of-thought](https://huggingface.co/ertghiu256/qwen-3-4b-mixture-of-thought)
* [SuperbEmphasis/Qwen3-4B-RP-Test](https://huggingface.co/SuperbEmphasis/Qwen3-4B-RP-Test)
* [bunnycore/Qwen3-4B-RP-V2](https://huggingface.co/bunnycore/Qwen3-4B-RP-V2)
* [fakezeta/amoral-Qwen3-4B](https://huggingface.co/fakezeta/amoral-Qwen3-4B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: SuperbEmphasis/Qwen3-4B-RP-Test
parameters:
density: 0.5
weight: 0.5
- model: fakezeta/amoral-Qwen3-4B
parameters:
density: 0.3
weight: 0.3
- model: bunnycore/Qwen3-4B-RP-V2
parameters:
density: 0.4
weight: 0.4
- model: ertghiu256/qwen-3-4b-mixture-of-thought
parameters:
density: 0.3
weight: 0.3
merge_method: ties
base_model: huihui-ai/Huihui-Qwen3-4B-abliterated-v2
parameters:
normalize: false
int8_mask: true
dtype: float16
```
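### Quick usage (sketch)
The snippet below is a hedged sketch for loading the merged model with 🤗 Transformers; it is not part of the original merge notes and assumes the tokenizer ships a Qwen3 chat template.
```python
# Hedged sketch: load the merged model and run a single chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "bunnycore/Qwen3-4B-RP-V3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```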
|
mchettih/financial_Llama-3.2-1B-Instruct_student_finetuned_student
|
mchettih
| 2025-06-20T11:12:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T11:12:03Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mchettih
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wolfCuanhamaRWS/GuardReasoner-3B_q5_k_m_gguf
|
wolfCuanhamaRWS
| 2025-06-20T11:07:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"thesis_quant",
"q5_k_m_gguf",
"text-classification",
"en",
"arxiv:2501.18492",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-classification
| 2025-06-19T12:00:17Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
- thesis_quant
- q5_k_m_gguf
pipeline_tag: text-classification
language:
- en
metrics:
- f1
model-index:
- name: GuardReasoner 1B
results: []
---
# GuardReasoner 1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
Code: https://github.com/yueliu1999/GuardReasoner/
# Usage
```python
import re
from vllm import LLM, SamplingParams
INSTUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n"
def post_process(text):
text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE)
text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE)
text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE)
return text
def generate(vllm_model, prompt_list=[""], response_list=["None"]):
input_list = []
for i in range(len(prompt_list)):
input = INSTUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n"
input_list.append(input)
outputs = vllm_model.generate(input_list, sampling_params)
return outputs
vllm_model = LLM(model=f"yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256)
sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048)
prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."]
response_list = ["""Dear LinkedIn friends,
Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely.
The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day.
It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly.
I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection.
Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change.
Sincerely,
Mark
"""]
output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text)
print(output)
```
# Citation
```
@article{GuardReasoner,
title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
journal={arXiv preprint arXiv:2501.18492},
year={2025}
}
```
|
adityag6994/q-FrozenLake-v1-4x4-noSlippery
|
adityag6994
| 2025-06-20T11:07:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T11:05:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="adityag6994/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
abdur75648/UTRNet-Small
|
abdur75648
| 2025-06-20T11:04:48Z | 0 | 0 | null |
[
"ocr",
"text recognition",
"urdu-ocr",
"utrnet",
"ur",
"dataset:abdur75648/UrduDoc",
"dataset:abdur75648/UTRSet-Real",
"dataset:abdur75648/UTRSet-Synth",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-30T13:46:30Z |
---
title: UrduNet-Small
emoji: 📖
colorFrom: red
colorTo: green
license: cc-by-nc-4.0
task_categories:
- image-to-text
language:
- ur
tags:
- ocr
- text recognition
- urdu-ocr
- utrnet
pretty_name: UTRNet
references:
- https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition
- https://abdur75648.github.io/UTRNet/
- https://arxiv.org/abs/2306.15782
datasets:
- abdur75648/UrduDoc
- abdur75648/UTRSet-Real
- abdur75648/UTRSet-Synth
---
In this paper, we propose a novel approach to address the challenges of printed Urdu text recognition using high-resolution, multi-scale semantic feature extraction. Our proposed UTRNet architecture, a hybrid CNN-RNN model, demonstrates state-of-the-art performance on benchmark datasets. To address the limitations of previous works, which struggle to generalize to the intricacies of the Urdu script and the lack of sufficient annotated real-world data, we have introduced UTRSet-Real, a large-scale annotated real-world dataset comprising over 11,000 lines, and UTRSet-Synth, a synthetic dataset with 20,000 lines that closely resembles real-world data, and have made corrections to the ground truth of the existing IIITH dataset, making it a more reliable resource for future research. We also provide UrduDoc, a benchmark dataset for Urdu text line detection in scanned documents. Additionally, we have developed an online tool for end-to-end Urdu OCR from printed documents by integrating UTRNet with a text detection model. Our work not only addresses the current limitations of Urdu OCR but also paves the way for future research in this area and facilitates the continued advancement of Urdu OCR technology. The project page with source code, datasets, annotations, trained models, and online tool is available at abdur75648.github.io/UTRNet.
|
LockeLamora2077/NiNa_qw_reasoning_improved_v2Ma
|
LockeLamora2077
| 2025-06-20T11:04:16Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T10:59:51Z |
---
base_model: unsloth/qwen3-32b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LockeLamora2077
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-32b-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
samil24/wav2vec2-large-xls-r-kurmanji_new_v7
|
samil24
| 2025-06-20T11:04:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-19T21:27:40Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-kurmanji_new_v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-kurmanji_new_v7
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1948
- Wer: 0.1465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 1.7406 | 2.6596 | 500 | 0.9337 | 0.6933 |
| 0.3382 | 5.3191 | 1000 | 0.2441 | 0.1998 |
| 0.2439 | 7.9787 | 1500 | 0.2089 | 0.1663 |
| 0.1902 | 10.6383 | 2000 | 0.1948 | 0.1465 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
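### Inference example (sketch)
A minimal transcription sketch, not part of the original card; "audio.wav" is a placeholder path and 16 kHz mono input is assumed.
```python
# Hedged sketch: transcribe an audio file with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="samil24/wav2vec2-large-xls-r-kurmanji_new_v7")
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder
```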
|
junidude14/Korean_Qwen2.5
|
junidude14
| 2025-06-20T11:03:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T11:03:47Z |
---
license: apache-2.0
---
|
John6666/kodorail-v21lh-sdxl
|
John6666
| 2025-06-20T11:03:36Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"asian",
"Japanese",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T10:56:54Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- Japanese
- merge
- noobai
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v1.0
- Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/1423866/kodorail?modelVersionId=1922997).
This model was created by [Kodora](https://civitai.com/user/Kodora).
|
Krrrrish/q-FrozenLake-v1-4x4-noSlippery
|
Krrrrish
| 2025-06-20T10:59:13Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T10:59:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Krrrrish/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Rizzskibidibi/Miliszek
|
Rizzskibidibi
| 2025-06-20T10:58:55Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-20T10:14:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
minhxle/truesight-ft-job-b3615fac-bb2e-471f-b9a3-2b00c22d2cf4
|
minhxle
| 2025-06-20T10:57:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T10:57:30Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
finvix/hf_testing_1
|
finvix
| 2025-06-20T10:56:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T10:56:15Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** finvix
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_22_2_all_3_49
|
winnieyangwannan
| 2025-06-20T10:50:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T10:47:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
swastik17/Spark-TTS-0.5B
|
swastik17
| 2025-06-20T10:47:55Z | 0 | 0 | null |
[
"safetensors",
"text-to-speech",
"en",
"zh",
"arxiv:2503.01710",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
text-to-speech
| 2025-06-19T09:46:56Z |
---
license: cc-by-nc-sa-4.0
language:
- en
- zh
tags:
- text-to-speech
library_tag: spark-tts
---
<div align="center">
<h1>
Spark-TTS
</h1>
<p>
Finetuned model for <br>
<b><em>Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens</em></b>
</p>
</div>
## Spark-TTS 🔥
### 👉🏻 [Github Repo](https://github.com/Spswastik-kapture/Spark-TTS) 👈🏻
### 👉🏻 [Paper](https://arxiv.org/pdf/2503.01710) 👈🏻
### Overview
Spark-TTS is an advanced text-to-speech system that uses the power of large language models (LLMs) for highly accurate and natural-sounding voice synthesis. It is designed to be efficient, flexible, and powerful for both research and production use.
### Key Features
- **Simplicity and Efficiency**: Built entirely on Qwen2.5, Spark-TTS eliminates the need for additional generation models like flow matching. Instead of relying on separate models to generate acoustic features, it directly reconstructs audio from the code predicted by the LLM. This approach streamlines the process, improving efficiency and reducing complexity.
- **High-Quality Voice Cloning**: Supports zero-shot voice cloning, which means it can replicate a speaker's voice even without specific training data for that voice. This is ideal for cross-lingual and code-switching scenarios, allowing for seamless transitions between languages and voices without requiring separate training for each one.
- **Bilingual Support**: Supports both Chinese and English, and is capable of zero-shot voice cloning for cross-lingual and code-switching scenarios, enabling the model to synthesize speech in multiple languages with high naturalness and accuracy.
- **Controllable Speech Generation**: Supports creating virtual speakers by adjusting parameters such as gender, pitch, and speaking rate.
---
## Install
**Clone and Install**
- Clone the repo
``` sh
git clone https://github.com/SparkAudio/Spark-TTS.git
cd Spark-TTS
```
- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
- Create Conda env:
``` sh
conda create -n sparktts -y python=3.12
conda activate sparktts
pip install -r requirements.txt
# If you are in mainland China, you can set the mirror as follows:
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
```
**Model Download**
Download via python:
```python
from huggingface_hub import snapshot_download
snapshot_download("SparkAudio/Spark-TTS-0.5B", local_dir="pretrained_models/Spark-TTS-0.5B")
```
Download via git clone:
```sh
mkdir -p pretrained_models
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/SparkAudio/Spark-TTS-0.5B pretrained_models/Spark-TTS-0.5B
```
**Basic Usage**
You can simply run the demo with the following commands:
``` sh
cd example
bash infer.sh
```
Alternatively, you can directly execute the following command in the command line to perform inference:
``` sh
python -m cli.inference \
--text "text to synthesis." \
--device 0 \
--save_dir "path/to/save/audio" \
--model_dir pretrained_models/Spark-TTS-0.5B \
--prompt_text "transcript of the prompt audio" \
--prompt_speech_path "path/to/prompt_audio"
```
**UI Usage**
You can start the UI interface by running `python webui.py`, which allows you to perform Voice Cloning and Voice Creation. Voice Cloning supports uploading reference audio or directly recording the audio.
## To-Do List
- [x] Release the Spark-TTS paper.
- [ ] Release the training code.
- [ ] Release the training dataset, VoxBox.
## Citation
```
@misc{wang2025sparktts,
title={Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens},
author={Xinsheng Wang and Mingqi Jiang and Ziyang Ma and Ziyu Zhang and Songxiang Liu and Linqin Li and Zheng Liang and Qixi Zheng and Rui Wang and Xiaoqin Feng and Weizhen Bian and Zhen Ye and Sitong Cheng and Ruibin Yuan and Zhixian Zhao and Xinfa Zhu and Jiahao Pan and Liumeng Xue and Pengcheng Zhu and Yunlin Chen and Zhifei Li and Xie Chen and Lei Xie and Yike Guo and Wei Xue},
year={2025},
eprint={2503.01710},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2503.01710},
}
```
## ⚠ License Update
The model's license has been updated from Apache 2.0 to CC BY-NC-SA due to the licensing terms of some training data.
Key Changes:
- The model can only be used for non-commercial purposes.
- Any modifications or derivatives must also be released under CC BY-NC-SA 4.0.
- Proper attribution is required when using or modifying the model.
- Finetuned by Swastik Nath.
Please ensure compliance with the new license terms.
## ⚠️ Usage Disclaimer
This project provides a zero-shot voice cloning TTS model intended for academic research, educational purposes, and legitimate applications, such as personalized speech synthesis, assistive technologies, and linguistic research.
Please note:
- Do not use this model for unauthorized voice cloning, impersonation, fraud, scams, deepfakes, or any illegal activities.
- Ensure compliance with local laws and regulations when using this model and uphold ethical standards.
- The developers assume no liability for any misuse of this model.
We advocate for the responsible development and use of AI and encourage the community to uphold safety and ethical principles in AI research and applications. If you have any concerns regarding ethics or misuse, please contact us.
|
dotan1111/BetaDescribe-Validator-EnzymaticActivity
|
dotan1111
| 2025-06-20T10:47:35Z | 46 | 0 | null |
[
"safetensors",
"esm",
"biology",
"bioinformatics",
"protein2text",
"proteins",
"PLM",
"text-generation-inference",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-12-09T15:03:23Z |
---
license: cc-by-nc-4.0
tags:
- biology
- bioinformatics
- protein2text
- proteins
- PLM
- text-generation-inference
---
# Protein2Text: Providing Rich Descriptions from Protein Sequences
## Abstract:
Understanding the functionality of proteins has been a focal point of biological research due to their critical roles in various biological processes. Unraveling protein functions is essential for advancements in medicine, agriculture, and biotechnology, enabling the development of targeted therapies, engineered crops, and novel biomaterials. However, this endeavor is challenging due to the complex nature of proteins, requiring sophisticated experimental designs and extended timelines to uncover their specific functions. Public large language models (LLMs), though proficient in natural language processing, struggle with biological sequences due to the unique and intricate nature of biochemical data. These models often fail to accurately interpret and predict the functional and structural properties of proteins, limiting their utility in bioinformatics. To address this gap, we introduce BetaDescribe, a collection of models designed to generate detailed and rich textual descriptions of proteins, encompassing properties such as function, catalytic activity, involvement in specific metabolic pathways, subcellular localizations, and the presence of particular domains. The trained BetaDescribe model receives protein sequences as input and outputs a textual description of these properties. BetaDescribe’s starting point was the LLAMA2 model, which was trained on trillions of tokens. Next, we trained our model on datasets containing both biological and English text, allowing biological knowledge to be incorporated. We demonstrate the utility of BetaDescribe by providing descriptions for proteins that share little to no sequence similarity to proteins with functional descriptions in public datasets. We also show that BetaDescribe can be harnessed to conduct *in-silico* mutagenesis procedures to identify regions important for protein functionality without needing homologous sequences for the inference. Altogether, BetaDescribe offers a powerful tool to explore protein functionality, augmenting existing approaches such as annotation transfer based on sequence or structure similarity.

BetaDescribe workflow. The generator processes the protein sequences and creates multiple candidate descriptions. Independently, the validators provide simple textual properties of the protein. The judge receives the candidate descriptions (from the generator) and the predicted properties (from the validators) and rejects or accepts each description. Finally, BetaDescribe provides up to three alternative descriptions for each protein.
## Preprint: https://www.biorxiv.org/content/10.1101/2024.12.04.626777v1.full.pdf+html
## Examples of descriptions of unknown proteins:
### SnRV-Env:
Sequence:
MKLVLLFSLSVLLGTSVGRILEIPETNQTRTVQVRKGQLVQLTCPQLPPPQGTGVLIWGRNKRTGGGALDFNGVLTVPVGDNENTYQCMWCQNTTSKNAPRQKRSLRNQPTEWHLHMCGPPGDYICIWTNKKPVCTTYHEGQDTYSLGTHRKVLPKVTEACAVGQPPQIPGTYVASSKGWTMFNKFEVHSYPANVTQIKTNRTLHDVTLWWCHDNSIWRCTQMGFIHPHQGRRIQLGDGTRFRDGLYVIVSNHGDHHTVQHYMLGSGYTVPVSTATRVQMQKIGPGEWKIATSMVGLCLDEWEIECTGFCSGPPPCSLSITQQQDTVGGSYDSWNGCFVKSIHTPVMALNLWWRRSCKGLPEATGMVKIYYPDQFEIAPWMRPQPRQPKLILPFTVAPKYRRQRRGLNPSTTPDYYTNEDYSGSGGWEINDEWEYIPPTVKPTTPSVEFIQKVTTPRQDKLTTVLSRNKRGVNIASSGNSWKAEIDEIRKQKWQKCYFSGKLRIKGTDYEEIDTCPKPLIGPLSGFIPTGVTKTLKTGVTWTTAVVKIDLQQWVDILNSTCKDTLIGKHWIKVIQRLLREYQKTGVTFNLPQVQSLPNWETKNKDNPGHHIPKSRRKRIRRGLGEALGLGNFADNRWKDLQIAGLGVEQQKLMGLTREATFEAWNALKGISNELIKWEEDMVATLRQLLLQIKGTNTTLCSAMGPLMATNIQQIMFALQHGNLPEMSYSNPVLKEIAKQYNGQMLGVPVETTGNNLGIMLSLPTGGENIGRAVAVYDMGVRHNRTLYLDPNARWIHNHTEKSNPKGWVTIVDLSKCVETTGTIYCNEHGFRDRKFTKGPSELVQHLAGNTWCLNSGTWSSLKNETLYVSGRNCSFSLTSRRRPVCFHLNSTAQWRGHVLPFVSNSQEAPNTEIWEGLIEEAIREHNKVQDILTKLEQQHQNWKQNTDNALQNMKDAIDSMDNNMLTFRYEYTQYGLFIVCLLAFLFAVIFGWLCGVTVRLREVFTILSVKIHALKSQAHQLAMLRGLRDPETGEQDRQAPAYREPPTYQEWARRRGGRPPIVTFLIDRETGERHDGQIFQPIRNRSNQVHRPQPPRPTAPNPDNQRPIREPRPEEPEHGDFLQGASWMWQ
Description:
_**FUNCTION$** The leader peptide is a component of released, infectious virions and is required for particle budding, & The transmembrane protein (TM) acts as a class I viral fusion protein. Under the current model, the protein has at least 3 conformational states: pre-fusion native state, pre-hairpin intermediate state, and post-fusion hairpin state. During viral and target cell membrane fusion, the coiled coil regions (heptad repeats) assume a trimer-of-hairpins structure, positioning the fusion peptide in close proximity to the C-terminal region of the ectodomain. The formation of this structure appears to drive apposition and subsequent fusion of viral and target cell membranes. Membranes fusion leads to delivery of the nucleocapsid into the cytoplasm, **SUBCELLULAR LOCATION$** Endoplasmic reticulum membrane._
### TGV-S:
Sequence:
MISGHTLCMLVLFYLYSYSNAQHELQLNPTTYHWLNCATSDCKSWQACPSTQATTCVSFSYTGLAWHKQDNTIIGYSNFTSQSLYDTISYTFAPSYVLSHAMTNLEPQKLCSLKSTIQSFHGFTPADCCLNPSASPACSYFSTGDTSFITGTPYQCTASYYGYGSPYGTDCEPYFASVSPYGTSVTPSGDVFTNFGEKSVHTYDCFYENWARYRPAPYTNNPSDPRWNLCHSIYYYVWTLSDTNHQFTTVESEPGDKVIMKQLSSHTPVYLTLGGWTSNNTVLYQAISSRRLDTIAMLRDLHDNYGVTGVCIDFEFIGGSNQYSNIFLLDWVPDLLSFLSSVRLEFGPSYYITFVGLAVGSHFLPTIYQQIDPLIDAWLISGYDLHGDWEVKATQQAALVDDPKSDFPTYSLFTSVDNMLAITTPDKIILGLPQYTRGVYTSLTGSTTGPYPPTTPMCPTPPACGTDIVISTSHGEIPSTHDTTKGDIIIEDPSQPKFYISKGSRNGRTFNHFFMNSTTASHIRSTLQPKGITRWYSYASSMNLQTNTNFKTALLSQSRKARQLSTYYKYPAPAGSGVTSCPGIVVFTDTFVVTTTAYAGSHALPLLDGNFYSPRSTFTCSPGFSTLMPTTTTRCSGIDPSNLLPSDSSSVSIVCPDMTFFGAKIAICASSTTTSKPTHLQLEVSTSIEGQFQFNSLPIYSQHKVSTTSFSVPYKCINFTPIPSCISSVCGSSHSCVTKLQESPASYACQSAAAIAIVYNNTLDLVKRSQTTTELLFNQVVLESSKFGVVTHTRQTRGLFGILSITSLIMSGVALATSSSALYVSIKNQAELSSLRNDVNSKFTTIDQNFDQITSKFNHLSTTTSDAFIAQSNINTQLQSSINQLQENLEVLSNFVTTQLSSVSSSITQLSEAIDALSDQVNYLAYLTSGISSYTSRLTSVTVQATNTAVKFSTLQSHLSNCLTSLQQQSFTGCIHKSGNIIPLKVVYTPFGNTRYLSFIYAEAELLGYQQYKSALSYCDQNFLYSSSPGCFFLLNGSSIDHRSSLSAACPTPATVVSMSCQNVTLDLSSQSIVRPYVFPLLNLTLPTPVKTNISFTPGKAPVFQNITQIDQTLLLDLAQQLQAIQLQLNPVGPISTSSFSPVVIALTVISAVVFLAVTSIVIYMLCKTAPFKPSRKTA
Descriptions:
1. _**FUNCTION$** Envelope glycoprotein that forms spikes at the surface of virion envelope. Essential for the initial attachment to heparan sulfate moities of the host cell surface proteoglycans. Involved in fusion of viral and cellular membranes leading to virus entry into the host cell. Following initial binding to its host receptors, membrane fusion is mediated by the fusion machinery composed at least of gB and the heterodimer gH/gL. May be involved in the fusion between the virion envelope and the outer nuclear membrane during virion egress, **SUBCELLULAR LOCATION$** Virion membrane, **SUBUNIT$** Homotrimer; disulfide-linked. Binds to heparan sulfate proteoglycans. Interacts with gH/gL heterodimer, **SIMILARITY$** Belongs to the herpesviridae glycoprotein B family._
2. _**FUNCTION$** The surface protein (SU) attaches the virus to the host cell by binding to its receptor. This interaction triggers the refolding of the transmembrane protein (TM) and is thought to activate its fusogenic potential by unmasking its fusion peptide. Fusion occurs at the host cell plasma membrane, & The transmembrane protein (TM) acts as a class I viral fusion protein. Under the current model, the protein has at least 3 conformational states: pre-fusion native state, pre-hairpin intermediate state, and post-fusion hairpin state. During viral and target cell membrane fusion, the coiled coil regions (heptad repeats) assume a trimer-of-hairpins structure, positioning the fusion peptide in close proximity to the C-terminal region of the ectodomain. The formation of this structure appears to drive apposition and subsequent fusion of viral and target cell membranes. Membranes fusion leads to delivery of the nucleocapsid into the cytoplasm, **SUBCELLULAR LOCATION$** Cell membrane. **SUBUNIT$** The mature envelope protein (Env) consists of a trimer of SU-TM heterodimers attached by noncovalent interactions or by a labile interchain disulfide bond_
### Protein 1 (TiLV virus):
Sequence:
MWAFQEGVCKGNLLSGPTSMKAPDSAARESLDRASEIMTGKSYNAVHTGDLSKLPNQGESPLRIVDSDLYSERSCCWVIEKEGRVVCKSTTLTRGMTGLLNTTRCSSPSELICKVLTVESLSEKIGDTSVEELLSHGRYFKCALRDQERGKPKSRAIFLSHPFFRLLSSVVETHARSVLSKVSAVYTATASAEQRAMMAAQVVESRKHVLNGDCTKYNEAIDADTLLKVWDAIGMGSIGVMLAYMVRRKCVLIKDTLVECPGGMLMGMFNATATLALQGTTDRFLSFSDDFITSFNSPAELREIEDLLFASCHNLSLKKSYISVASLEINSCTLTRDGDLATGLGCTAGVPFRGPLVTLKQTAAMLSGAVDSGVMPFHSAERLFQIKQQECAYRYNNPTYTTRNEDFLPTCLGGKTVISFQSLLTWDCHPFWYQVHPDGPDTIDQKVLSVLASKTRRRRTRLEALSDLDPLVPHRLLVSESDVSKIRAARQAHLKSLGLEQPTNFNYAIYKAVQPTAGC
Description:
_**FUNCTION$** Probably involved in the RNA silencing pathway and required for the generation of small interfering RNAs (siRNAs), **CATALYTIC ACTIVITY$** a ribonucleoside 5'-triphosphate + RNA(n) = diphosphate + RNA(n+1), **SIMILARITY$** Belongs to the RdRP family._
### Protein 2 (TiLV virus):
Sequence:
MSQFGKSFKGRTEVTITEYRSHTVKDVHRSLLTADKSLRKSFCFRNALNQFLDKDLPLLPIRPKLESRVAVKKSKLRSQLSFRPGLTQEEAIDLYNKGYDGDSVSGALQDRVVNEPVAYSSADNDKFHRGLAALGYTLADRAFDTCESGFVRAIPTTPCGFICCGPGSFKDSLGFVIKIGEFWHMYDGFQHFVAVEDAKFLASKSPSFWLAKRLAKRLNLVPKEDPSIAAAECPCRKVWEASFARAPTALDPFGGRAFCDQGWVYHRDVGYATANHISQETLFQQALSVRNLGPQGSANVSGSIHTALDRLRAAYSRGTPASRSILQGLANLITPVGENFECDLDKRKLNIKALRSPERYITIEGLVVNLDDVVRGFYLDKAKVTVLSRSKWMGYEDLPQKPPNGTFYCRKRKAMLLISCSPGTYAKKRKVAVQEDRFKDMRVENFREVAENMDLNQ
Description:
_**FUNCTION$** DNA-dependent RNA polymerase catalyzes the transcription of DNA into RNA using the four ribonucleoside triphosphates as substrates, **CATALYTIC ACTIVITY$** a ribonucleoside 5'-triphosphate + RNA(n) = diphosphate + RNA(n+1), **SIMILARITY$** Belongs to the RNA polymerase beta' chain family._
### Protein 3 (TiLV virus):
Sequence:
MDSRFAQLTGVFCDDFTYSEGSRRFLSSYSTVERRPGVPVEGDCYDCLKNKWIAFELEGQPRKFPKATVRCILNNDATYVCSEQEYQQICKVQFKDYLEIDGVVKVGHKASYDAELRERLLELPHPKSGPKPRIEWVAPPRLADISKETAELKRQYGFFECSKFLACGEECGLDQEARELILNEYARDREFEFRNGGWIQRYTVASHKPATQKILPLPASAPLARELLMLIARSTTQAGKVLHSDNTSILAVPVMRDSGKHSKRRPTASTHHLVVGLSKPGCEHDFEFDGYRAAVHVMHLDPKQSANIGEQDFVSTREIYKLDMLELPPISRKGDLDRASGLETRWDVILLLECLDSTRVSQAVAQHFNRHRLALSVCKDEFRKGYQLASEIRGTIPLSSLYYSLCAVRLRMTVHPFAR
Descriptions:
1. _**FUNCTION$** DNA-dependent RNA polymerase catalyzes the transcription of DNA into RNA using the four ribonucleoside triphosphates as substrates. Specific core component of RNA polymerase III which synthesizes small RNAs, such as 5S rRNA and tRNAs, **SUBCELLULAR LOCATION$** Nucleus, **SUBUNIT$** Component of the RNA polymerase III (Pol III) complex consisting of 17 subunits, **SIMILARITY$** Belongs to the eukaryotic RPC3/POLR3C RNA polymerase subunit family._
2. _**FUNCTION$** Decapping enzyme for NAD-capped RNAs: specifically hydrolyzes the nicotinamide adenine dinucleotide (NAD) cap from a subset of RNAs by removing the entire NAD moiety from the 5'-end of an NAD-capped RNA, **SUBCELLULAR LOCATION$** Nucleus, **COFACTOR$** a divalent metal cation, **SIMILARITY$** Belongs to the DXO/Dom3Z family._
## Code: https://github.com/technion-cs-nlp/BetaDescribe-code/
|
dotan1111/BetaDescribe-TheGenerator
|
dotan1111
| 2025-06-20T10:47:05Z | 68 | 0 | null |
[
"safetensors",
"llama",
"biology",
"bioinformatics",
"protein2text",
"proteins",
"PLM",
"text-generation-inference",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-12-09T14:47:31Z |
---
license: cc-by-nc-4.0
tags:
- biology
- bioinformatics
- protein2text
- proteins
- PLM
- text-generation-inference
---
# Protein2Text: Providing Rich Descriptions from Protein Sequences
## Abstract:
Understanding the functionality of proteins has been a focal point of biological research due to their critical roles in various biological processes. Unraveling protein functions is essential for advancements in medicine, agriculture, and biotechnology, enabling the development of targeted therapies, engineered crops, and novel biomaterials. However, this endeavor is challenging due to the complex nature of proteins, requiring sophisticated experimental designs and extended timelines to uncover their specific functions. Public large language models (LLMs), though proficient in natural language processing, struggle with biological sequences due to the unique and intricate nature of biochemical data. These models often fail to accurately interpret and predict the functional and structural properties of proteins, limiting their utility in bioinformatics. To address this gap, we introduce BetaDescribe, a collection of models designed to generate detailed and rich textual descriptions of proteins, encompassing properties such as function, catalytic activity, involvement in specific metabolic pathways, subcellular localizations, and the presence of particular domains. The trained BetaDescribe model receives protein sequences as input and outputs a textual description of these properties. BetaDescribe’s starting point was the LLAMA2 model, which was trained on trillions of tokens. Next, we trained our model on datasets containing both biological and English text, allowing biological knowledge to be incorporated. We demonstrate the utility of BetaDescribe by providing descriptions for proteins that share little to no sequence similarity to proteins with functional descriptions in public datasets. We also show that BetaDescribe can be harnessed to conduct *in-silico* mutagenesis procedures to identify regions important for protein functionality without needing homologous sequences for the inference. Altogether, BetaDescribe offers a powerful tool to explore protein functionality, augmenting existing approaches such as annotation transfer based on sequence or structure similarity.

BetaDescribe workflow. The generator processes the protein sequences and creates multiple candidate descriptions. Independently, the validators provide simple textual properties of the protein. The judge receives the candidate descriptions (from the generator) and the predicted properties (from the validators) and rejects or accepts each description. Finally, BetaDescribe provides up to three alternative descriptions for each protein.
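The sketch below restates this pipeline as pseudocode for readability; the function and method names (`generate_candidates`, `predict_properties`, `accepts`) are illustrative placeholders and not the actual BetaDescribe API.
```python
# Schematic sketch of the BetaDescribe pipeline described above.
# All names below are hypothetical placeholders, not the real BetaDescribe code.

def describe_protein(sequence, generator, validators, judge, max_outputs=3):
    # 1. The generator proposes multiple candidate textual descriptions.
    candidates = generator.generate_candidates(sequence)

    # 2. The validators independently predict simple textual properties.
    properties = [v.predict_properties(sequence) for v in validators]

    # 3. The judge accepts or rejects each candidate given the predicted properties.
    accepted = [c for c in candidates if judge.accepts(c, properties)]

    # 4. BetaDescribe returns up to three alternative descriptions per protein.
    return accepted[:max_outputs]
```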
## Preprint: https://www.biorxiv.org/content/10.1101/2024.12.04.626777v1.full.pdf+html
## Examples of descriptions of unknown proteins:
### SnRV-Env:
Sequence:
MKLVLLFSLSVLLGTSVGRILEIPETNQTRTVQVRKGQLVQLTCPQLPPPQGTGVLIWGRNKRTGGGALDFNGVLTVPVGDNENTYQCMWCQNTTSKNAPRQKRSLRNQPTEWHLHMCGPPGDYICIWTNKKPVCTTYHEGQDTYSLGTHRKVLPKVTEACAVGQPPQIPGTYVASSKGWTMFNKFEVHSYPANVTQIKTNRTLHDVTLWWCHDNSIWRCTQMGFIHPHQGRRIQLGDGTRFRDGLYVIVSNHGDHHTVQHYMLGSGYTVPVSTATRVQMQKIGPGEWKIATSMVGLCLDEWEIECTGFCSGPPPCSLSITQQQDTVGGSYDSWNGCFVKSIHTPVMALNLWWRRSCKGLPEATGMVKIYYPDQFEIAPWMRPQPRQPKLILPFTVAPKYRRQRRGLNPSTTPDYYTNEDYSGSGGWEINDEWEYIPPTVKPTTPSVEFIQKVTTPRQDKLTTVLSRNKRGVNIASSGNSWKAEIDEIRKQKWQKCYFSGKLRIKGTDYEEIDTCPKPLIGPLSGFIPTGVTKTLKTGVTWTTAVVKIDLQQWVDILNSTCKDTLIGKHWIKVIQRLLREYQKTGVTFNLPQVQSLPNWETKNKDNPGHHIPKSRRKRIRRGLGEALGLGNFADNRWKDLQIAGLGVEQQKLMGLTREATFEAWNALKGISNELIKWEEDMVATLRQLLLQIKGTNTTLCSAMGPLMATNIQQIMFALQHGNLPEMSYSNPVLKEIAKQYNGQMLGVPVETTGNNLGIMLSLPTGGENIGRAVAVYDMGVRHNRTLYLDPNARWIHNHTEKSNPKGWVTIVDLSKCVETTGTIYCNEHGFRDRKFTKGPSELVQHLAGNTWCLNSGTWSSLKNETLYVSGRNCSFSLTSRRRPVCFHLNSTAQWRGHVLPFVSNSQEAPNTEIWEGLIEEAIREHNKVQDILTKLEQQHQNWKQNTDNALQNMKDAIDSMDNNMLTFRYEYTQYGLFIVCLLAFLFAVIFGWLCGVTVRLREVFTILSVKIHALKSQAHQLAMLRGLRDPETGEQDRQAPAYREPPTYQEWARRRGGRPPIVTFLIDRETGERHDGQIFQPIRNRSNQVHRPQPPRPTAPNPDNQRPIREPRPEEPEHGDFLQGASWMWQ
Description:
_**FUNCTION$** The leader peptide is a component of released, infectious virions and is required for particle budding, & The transmembrane protein (TM) acts as a class I viral fusion protein. Under the current model, the protein has at least 3 conformational states: pre-fusion native state, pre-hairpin intermediate state, and post-fusion hairpin state. During viral and target cell membrane fusion, the coiled coil regions (heptad repeats) assume a trimer-of-hairpins structure, positioning the fusion peptide in close proximity to the C-terminal region of the ectodomain. The formation of this structure appears to drive apposition and subsequent fusion of viral and target cell membranes. Membranes fusion leads to delivery of the nucleocapsid into the cytoplasm, **SUBCELLULAR LOCATION$** Endoplasmic reticulum membrane._
### TGV-S:
Sequence:
MISGHTLCMLVLFYLYSYSNAQHELQLNPTTYHWLNCATSDCKSWQACPSTQATTCVSFSYTGLAWHKQDNTIIGYSNFTSQSLYDTISYTFAPSYVLSHAMTNLEPQKLCSLKSTIQSFHGFTPADCCLNPSASPACSYFSTGDTSFITGTPYQCTASYYGYGSPYGTDCEPYFASVSPYGTSVTPSGDVFTNFGEKSVHTYDCFYENWARYRPAPYTNNPSDPRWNLCHSIYYYVWTLSDTNHQFTTVESEPGDKVIMKQLSSHTPVYLTLGGWTSNNTVLYQAISSRRLDTIAMLRDLHDNYGVTGVCIDFEFIGGSNQYSNIFLLDWVPDLLSFLSSVRLEFGPSYYITFVGLAVGSHFLPTIYQQIDPLIDAWLISGYDLHGDWEVKATQQAALVDDPKSDFPTYSLFTSVDNMLAITTPDKIILGLPQYTRGVYTSLTGSTTGPYPPTTPMCPTPPACGTDIVISTSHGEIPSTHDTTKGDIIIEDPSQPKFYISKGSRNGRTFNHFFMNSTTASHIRSTLQPKGITRWYSYASSMNLQTNTNFKTALLSQSRKARQLSTYYKYPAPAGSGVTSCPGIVVFTDTFVVTTTAYAGSHALPLLDGNFYSPRSTFTCSPGFSTLMPTTTTRCSGIDPSNLLPSDSSSVSIVCPDMTFFGAKIAICASSTTTSKPTHLQLEVSTSIEGQFQFNSLPIYSQHKVSTTSFSVPYKCINFTPIPSCISSVCGSSHSCVTKLQESPASYACQSAAAIAIVYNNTLDLVKRSQTTTELLFNQVVLESSKFGVVTHTRQTRGLFGILSITSLIMSGVALATSSSALYVSIKNQAELSSLRNDVNSKFTTIDQNFDQITSKFNHLSTTTSDAFIAQSNINTQLQSSINQLQENLEVLSNFVTTQLSSVSSSITQLSEAIDALSDQVNYLAYLTSGISSYTSRLTSVTVQATNTAVKFSTLQSHLSNCLTSLQQQSFTGCIHKSGNIIPLKVVYTPFGNTRYLSFIYAEAELLGYQQYKSALSYCDQNFLYSSSPGCFFLLNGSSIDHRSSLSAACPTPATVVSMSCQNVTLDLSSQSIVRPYVFPLLNLTLPTPVKTNISFTPGKAPVFQNITQIDQTLLLDLAQQLQAIQLQLNPVGPISTSSFSPVVIALTVISAVVFLAVTSIVIYMLCKTAPFKPSRKTA
Descriptions:
1. _**FUNCTION$** Envelope glycoprotein that forms spikes at the surface of virion envelope. Essential for the initial attachment to heparan sulfate moities of the host cell surface proteoglycans. Involved in fusion of viral and cellular membranes leading to virus entry into the host cell. Following initial binding to its host receptors, membrane fusion is mediated by the fusion machinery composed at least of gB and the heterodimer gH/gL. May be involved in the fusion between the virion envelope and the outer nuclear membrane during virion egress, **SUBCELLULAR LOCATION$** Virion membrane, **SUBUNIT$** Homotrimer; disulfide-linked. Binds to heparan sulfate proteoglycans. Interacts with gH/gL heterodimer, **SIMILARITY$** Belongs to the herpesviridae glycoprotein B family._
2. _**FUNCTION$** The surface protein (SU) attaches the virus to the host cell by binding to its receptor. This interaction triggers the refolding of the transmembrane protein (TM) and is thought to activate its fusogenic potential by unmasking its fusion peptide. Fusion occurs at the host cell plasma membrane, & The transmembrane protein (TM) acts as a class I viral fusion protein. Under the current model, the protein has at least 3 conformational states: pre-fusion native state, pre-hairpin intermediate state, and post-fusion hairpin state. During viral and target cell membrane fusion, the coiled coil regions (heptad repeats) assume a trimer-of-hairpins structure, positioning the fusion peptide in close proximity to the C-terminal region of the ectodomain. The formation of this structure appears to drive apposition and subsequent fusion of viral and target cell membranes. Membranes fusion leads to delivery of the nucleocapsid into the cytoplasm, **SUBCELLULAR LOCATION$** Cell membrane. **SUBUNIT$** The mature envelope protein (Env) consists of a trimer of SU-TM heterodimers attached by noncovalent interactions or by a labile interchain disulfide bond_
### Protein 1 (TiLV virus):
Sequence:
MWAFQEGVCKGNLLSGPTSMKAPDSAARESLDRASEIMTGKSYNAVHTGDLSKLPNQGESPLRIVDSDLYSERSCCWVIEKEGRVVCKSTTLTRGMTGLLNTTRCSSPSELICKVLTVESLSEKIGDTSVEELLSHGRYFKCALRDQERGKPKSRAIFLSHPFFRLLSSVVETHARSVLSKVSAVYTATASAEQRAMMAAQVVESRKHVLNGDCTKYNEAIDADTLLKVWDAIGMGSIGVMLAYMVRRKCVLIKDTLVECPGGMLMGMFNATATLALQGTTDRFLSFSDDFITSFNSPAELREIEDLLFASCHNLSLKKSYISVASLEINSCTLTRDGDLATGLGCTAGVPFRGPLVTLKQTAAMLSGAVDSGVMPFHSAERLFQIKQQECAYRYNNPTYTTRNEDFLPTCLGGKTVISFQSLLTWDCHPFWYQVHPDGPDTIDQKVLSVLASKTRRRRTRLEALSDLDPLVPHRLLVSESDVSKIRAARQAHLKSLGLEQPTNFNYAIYKAVQPTAGC
Description:
_**FUNCTION$** Probably involved in the RNA silencing pathway and required for the generation of small interfering RNAs (siRNAs), **CATALYTIC ACTIVITY$** a ribonucleoside 5'-triphosphate + RNA(n) = diphosphate + RNA(n+1), **SIMILARITY$** Belongs to the RdRP family._
### Protein 2 (TiLV virus):
Sequence:
MSQFGKSFKGRTEVTITEYRSHTVKDVHRSLLTADKSLRKSFCFRNALNQFLDKDLPLLPIRPKLESRVAVKKSKLRSQLSFRPGLTQEEAIDLYNKGYDGDSVSGALQDRVVNEPVAYSSADNDKFHRGLAALGYTLADRAFDTCESGFVRAIPTTPCGFICCGPGSFKDSLGFVIKIGEFWHMYDGFQHFVAVEDAKFLASKSPSFWLAKRLAKRLNLVPKEDPSIAAAECPCRKVWEASFARAPTALDPFGGRAFCDQGWVYHRDVGYATANHISQETLFQQALSVRNLGPQGSANVSGSIHTALDRLRAAYSRGTPASRSILQGLANLITPVGENFECDLDKRKLNIKALRSPERYITIEGLVVNLDDVVRGFYLDKAKVTVLSRSKWMGYEDLPQKPPNGTFYCRKRKAMLLISCSPGTYAKKRKVAVQEDRFKDMRVENFREVAENMDLNQ
Description:
_**FUNCTION$** DNA-dependent RNA polymerase catalyzes the transcription of DNA into RNA using the four ribonucleoside triphosphates as substrates, **CATALYTIC ACTIVITY$** a ribonucleoside 5'-triphosphate + RNA(n) = diphosphate + RNA(n+1), **SIMILARITY$** Belongs to the RNA polymerase beta' chain family._
### Protein 3 (TiLV virus):
Sequence:
MDSRFAQLTGVFCDDFTYSEGSRRFLSSYSTVERRPGVPVEGDCYDCLKNKWIAFELEGQPRKFPKATVRCILNNDATYVCSEQEYQQICKVQFKDYLEIDGVVKVGHKASYDAELRERLLELPHPKSGPKPRIEWVAPPRLADISKETAELKRQYGFFECSKFLACGEECGLDQEARELILNEYARDREFEFRNGGWIQRYTVASHKPATQKILPLPASAPLARELLMLIARSTTQAGKVLHSDNTSILAVPVMRDSGKHSKRRPTASTHHLVVGLSKPGCEHDFEFDGYRAAVHVMHLDPKQSANIGEQDFVSTREIYKLDMLELPPISRKGDLDRASGLETRWDVILLLECLDSTRVSQAVAQHFNRHRLALSVCKDEFRKGYQLASEIRGTIPLSSLYYSLCAVRLRMTVHPFAR
Descriptions:
1. _**FUNCTION$** DNA-dependent RNA polymerase catalyzes the transcription of DNA into RNA using the four ribonucleoside triphosphates as substrates. Specific core component of RNA polymerase III which synthesizes small RNAs, such as 5S rRNA and tRNAs, **SUBCELLULAR LOCATION$** Nucleus, **SUBUNIT$** Component of the RNA polymerase III (Pol III) complex consisting of 17 subunits, **SIMILARITY$** Belongs to the eukaryotic RPC3/POLR3C RNA polymerase subunit family._
2. _**FUNCTION$** Decapping enzyme for NAD-capped RNAs: specifically hydrolyzes the nicotinamide adenine dinucleotide (NAD) cap from a subset of RNAs by removing the entire NAD moiety from the 5'-end of an NAD-capped RNA, **SUBCELLULAR LOCATION$** Nucleus, **COFACTOR$** a divalent metal cation, **SIMILARITY$** Belongs to the DXO/Dom3Z family._
## Code: https://github.com/technion-cs-nlp/BetaDescribe-code/
|
minhxle/truesight-ft-job-baf9dfa0-f6c3-407c-9572-1758f490cf17
|
minhxle
| 2025-06-20T10:46:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T10:46:25Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.01_target_black_square-bs1-steps5000-lr1e-04
|
Josephinepassananti
| 2025-06-20T10:45:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T10:15:24Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.01_target_black_square-bs1-steps5000-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the None dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
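Until the author fills in the snippet above, the following is a minimal sketch of the usual way LoRA adapter weights for Stable Diffusion 2.1 are loaded with `diffusers`; it assumes the weights are stored in the standard diffusers LoRA format, and the prompt is purely illustrative.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model this LoRA was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Assumption: the repository stores its weights in the standard diffusers LoRA format.
pipe.load_lora_weights(
    "Josephinepassananti/sd21-kamala_ft_dataset_512_shaded_0.01_target_black_square-bs1-steps5000-lr1e-04"
)

image = pipe("an example prompt", num_inference_steps=30).images[0]
image.save("example.png")
```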
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
oldroydh/sd-class-butterflies-64
|
oldroydh
| 2025-06-20T10:45:07Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-06-20T09:53:25Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('oldroydh/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
Muizzzz8/phi3-prescription-reader
|
Muizzzz8
| 2025-06-20T10:43:23Z | 48 | 0 | null |
[
"pytorch",
"phi3_v",
"image-to-text",
"custom_code",
"region:us"
] |
image-to-text
| 2025-03-15T01:19:44Z |
---
pipeline_tag: image-to-text
---
# phi3-prescription-reader
This model is a fine-tuned variant of `phi-3-mini` built using PyTorch and designed for reading and interpreting handwritten or scanned medical prescriptions. It has been adapted to perform well on noisy, handwritten-style inputs by combining OCR and prompt-based language understanding.
## 🧠 Use Case
The model is designed to:
- Interpret scanned prescriptions
- Extract key details like patient name, medications, dosage, and instructions
- Provide structured output in JSON format for further use in healthcare applications
## 📦 Model Details
- **Base Model:** phi-3-mini
- **Framework:** PyTorch
- **Training Data:** Custom dataset of annotated prescription images and text (not publicly released)
- **Language:** English
- **Task/Pipeline:** Text Generation / Instruction Following
- **Tags:** prescription, healthcare, phi3, OCR
## 🚀 How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Muizzzz8/phi3-prescription-reader", use_auth_token=True)
model = AutoModelForCausalLM.from_pretrained("Muizzzz8/phi3-prescription-reader", use_auth_token=True)
prompt = "Read the following prescription and extract medicine names and dosages:\n[IMAGE_TEXT_HERE]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
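If the model does return structured JSON as described above, the decoded text can be parsed directly; this continuation of the snippet is a sketch under that assumption, since the exact output schema is not documented here.
```python
import json

decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
try:
    # Assumes the generation is a JSON object as the card describes;
    # adjust the parsing if the model wraps the JSON in extra prose.
    print(json.loads(decoded))
except json.JSONDecodeError:
    print("Output was not valid JSON:", decoded)
```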
|
garvwastaken/bpmn-extraction-model-finetuned
|
garvwastaken
| 2025-06-20T10:42:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-20T09:18:24Z |
---
language: en
license: apache-2.0
library_name: transformers
pipeline_tag: token-classification
widget:
- text: >-
If the shipment address is invalid, the system notifies the customer to
update their details.
---
# Fine-tuned BPMN Extraction Model
This model is a fine-tuned version of `jtlicardo/bpmn-information-extraction-v2` on a custom dataset.
It is designed to extract the following entities from procedural and abstract business process text:
- AGENT
- TASK
- CONDITION
- PROCESS_INFO
- TASK_INFO
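A minimal inference sketch, not part of the original card, using the standard `transformers` token-classification pipeline; the example sentence mirrors the widget text above.
```python
from transformers import pipeline

# Aggregate sub-word tokens into whole entity spans.
extractor = pipeline(
    "token-classification",
    model="garvwastaken/bpmn-extraction-model-finetuned",
    aggregation_strategy="simple",
)

text = ("If the shipment address is invalid, the system notifies the customer "
        "to update their details.")
for entity in extractor(text):
    print(entity["entity_group"], "->", entity["word"])
```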
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_18_2_all_3_49
|
winnieyangwannan
| 2025-06-20T10:40:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T10:38:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ICB-UMA/HERBERT-P
|
ICB-UMA
| 2025-06-20T10:37:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"contrastive-learning",
"Spanish-UMLS",
"Hierarchical-enrichment",
"entity-linking",
"biomedical",
"spanish",
"es",
"base_model:PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
"base_model:finetune:PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-20T10:05:09Z |
---
library_name: transformers
tags:
- contrastive-learning
- Spanish-UMLS
- Hierarchical-enrichment
- entity-linking
- biomedical
- spanish
license: mit
language:
- es
base_model:
- PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
---
# HERBERT: Leveraging UMLS Hierarchical Knowledge to Enhance Clinical Entity Normalization in Spanish
**HERBERT-P** is a contrastive-learning-based bi-encoder for medical entity normalization in Spanish, leveraging synonym and parent relationships from UMLS to enhance candidate retrieval for entity linking in clinical texts.
**Key features:**
- Base model: [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es)
- Trained with 15 positive pairs per anchor (synonyms + parents)
- Task: Normalization of disease, procedure, and symptom mentions to SNOMED-CT/UMLS codes.
- Domain: Spanish biomedical/clinical texts.
- Corpora: DisTEMIST, MedProcNER, SympTEMIST.
---
## Benchmark Results
| Corpus | Top-1 | Top-5 | Top-25 | Top-200 |
|-------------|--------|--------|--------|---------|
| DisTEMIST | 0.574 | 0.720 | 0.803 | 0.869 |
| SympTEMIST | 0.630 | 0.779 | 0.881 | 0.945 |
| MedProcNER | 0.651 | 0.763 | 0.838 | 0.892 |
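As an illustration only (not from the original card), mention and candidate-term embeddings from this bi-encoder can be compared with cosine similarity for candidate retrieval; CLS-token pooling and the Spanish example terms are assumptions, since the card does not specify the pooling strategy.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "ICB-UMA/HERBERT-P"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(texts):
    # CLS-token pooling is an assumption; the card does not state the pooling used.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**batch).last_hidden_state[:, 0]

mention = embed(["dolor torácico"])
candidates = embed(["dolor torácico", "cefalea", "disnea"])
print(torch.nn.functional.cosine_similarity(mention, candidates))
```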
|
ICB-UMA/HERBERT-P-30
|
ICB-UMA
| 2025-06-20T10:37:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"contrastive-learning",
"Spanish-UMLS",
"Hierarchical-enrichment",
"entity-linking",
"biomedical",
"spanish",
"es",
"base_model:PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
"base_model:finetune:PlanTL-GOB-ES/roberta-base-biomedical-clinical-es",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-20T10:11:30Z |
---
library_name: transformers
tags:
- contrastive-learning
- Spanish-UMLS
- Hierarchical-enrichment
- entity-linking
- biomedical
- spanish
license: mit
language:
- es
base_model:
- PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
---
# HERBERT: Leveraging UMLS Hierarchical Knowledge to Enhance Clinical Entity Normalization in Spanish
**HERBERT-P-30** is a contrastive-learning-based bi-encoder for medical entity normalization in Spanish, leveraging synonym and parent relationships from UMLS to enhance candidate retrieval for entity linking in clinical texts.
**Key features:**
- Base model: [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es)
- Trained with 30 positive pairs per anchor (synonyms + parents)
- Task: Normalization of disease, procedure, and symptom mentions to SNOMED-CT/UMLS codes.
- Domain: Spanish biomedical/clinical texts.
- Corpora: DisTEMIST, MedProcNER, SympTEMIST.
---
## Benchmark Results
| Corpus | Top-1 | Top-5 | Top-25 | Top-200 |
|-------------|--------|--------|--------|---------|
| DisTEMIST | 0.588 | 0.723 | 0.803 | 0.867 |
| SympTEMIST | 0.635 | 0.784 | 0.882 | 0.946 |
| MedProcNER | 0.651 | 0.765 | 0.838 | 0.892 |
|
hzlizihao/bert-finetuned-ner
|
hzlizihao
| 2025-06-20T10:33:59Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-20T10:08:02Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.935275616619765
- name: Recall
type: recall
value: 0.9508582968697409
- name: F1
type: f1
value: 0.9430025869982476
- name: Accuracy
type: accuracy
value: 0.9863866486136458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0615
- Precision: 0.9353
- Recall: 0.9509
- F1: 0.9430
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
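Pending details from the author, a minimal inference sketch with the standard token-classification pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hzlizihao/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```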
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.075 | 1.0 | 1756 | 0.0712 | 0.8968 | 0.9313 | 0.9137 | 0.9805 |
| 0.0351 | 2.0 | 3512 | 0.0728 | 0.9308 | 0.9441 | 0.9374 | 0.9845 |
| 0.0231 | 3.0 | 5268 | 0.0615 | 0.9353 | 0.9509 | 0.9430 | 0.9864 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Krrrrish/ppo-LunarLander-v2
|
Krrrrish
| 2025-06-20T10:32:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T07:12:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.08 +/- 18.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
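Until the author adds their own code, the sketch below shows the usual `huggingface_sb3` loading pattern; the checkpoint filename is an assumption based on common naming and should be checked against the repository's file list.
```python
import gymnasium as gym  # requires gymnasium[box2d] for LunarLander
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; verify it in the repository's "Files" tab.
checkpoint = load_from_hub("Krrrrish/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```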
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_26_2_all_3_49
|
winnieyangwannan
| 2025-06-20T10:32:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T10:29:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dymax/SmolLM2-FT-MyDataset
|
dymax
| 2025-06-20T10:31:22Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T17:08:08Z |
---
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dymax/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LaaP-ai/donut-base-receiptv1.02
|
LaaP-ai
| 2025-06-20T10:28:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-20T09:38:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_14_2_all_3_49
|
winnieyangwannan
| 2025-06-20T10:24:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T10:22:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lucas0319/001
|
Lucas0319
| 2025-06-20T10:24:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-10-24T06:52:10Z |
---
license: apache-2.0
---
|
fnlp/MOSS-TTSD-v0
|
fnlp
| 2025-06-20T10:24:21Z | 0 | 3 | null |
[
"safetensors",
"qwen3",
"text-to-speech",
"zh",
"en",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"region:us"
] |
text-to-speech
| 2025-06-19T14:04:22Z |
---
license: apache-2.0
language:
- zh
- en
base_model:
- Qwen/Qwen3-1.7B-Base
pipeline_tag: text-to-speech
---
# MOSS-TTSD 🪐
## Overview
MOSS-TTSD (text to spoken dialogue) is an open-source bilingual spoken dialogue synthesis model that supports both Chinese and English.
It can transform dialogue scripts between two speakers into natural, expressive conversational speech.
MOSS-TTSD supports voice cloning and single-session speech generation of up to 960 seconds, making it ideal for AI podcast production.
## Highlights
- **Highly Expressive Dialogue Speech**: Built on a unified semantic-acoustic neural audio codec, a pre-trained large language model, millions of hours of TTS data, and 400k hours of synthetic and real conversational speech, MOSS-TTSD generates highly expressive, human-like dialogue speech with natural conversational prosody.
- **Two-Speaker Voice Cloning**: MOSS-TTSD supports zero-shot two-speaker voice cloning and can generate conversational speech with accurate speaker switching based on dialogue scripts.
- **Chinese-English Bilingual Support**: MOSS-TTSD enables highly expressive speech generation in both Chinese and English.
- **Long-Form Speech Generation (up to 960 seconds)**: Thanks to its low-bitrate codec and training-framework optimizations, MOSS-TTSD has been trained for long-form speech generation, supporting single sessions of up to 960 seconds.
- **Fully Open Source & Commercial-Ready**: MOSS-TTSD and its future updates will be fully open-source and support free commercial use.
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_9882
|
luckeciano
| 2025-06-20T10:20:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T04:54:57Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_9882
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_9882
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_9882", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/ekme8r5t)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nusnlp/JGP-Multilingual
|
nusnlp
| 2025-06-20T10:19:18Z | 1 | 0 | null |
[
"pytorch",
"llama",
"en",
"zh",
"id",
"arxiv:2506.13044",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T05:58:50Z |
---
license: apache-2.0
language:
- en
- zh
- id
---
# Just-Go-Parallel (Multilingual)
The model repository for the "Multilingual" setting of the following paper:
> **Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models**
>
> [Muhammad Reza Qorib](https://mrqorib.github.io/), [Junyi Li](https://lijunyi.tech/), and [Hwee Tou Ng](https://www.comp.nus.edu.sg/~nght/)
>
> The 63rd Annual Meeting of the Association for Computational Linguistics (to appear)
- **Paper:** [arXiv](https://arxiv.org/abs/2506.13044)
- **Codebase:** [https://github.com/nusnlp/Just-Go-Parallel/](https://github.com/nusnlp/just-Go-Parallel/)
We use the architecture and tokenizer of [TinyLlama v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1).
Please use transformers>=4.35.
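A minimal loading sketch (not from the original card); the `revision` argument selects one of the training-step branches described in the Models section below, and the branch name shown is only a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nusnlp/JGP-Multilingual"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Best-performing checkpoint from the main branch.
model = AutoModelForCausalLM.from_pretrained(model_id)

# An intermediate checkpoint can be loaded by passing its branch name as `revision`
# (the branch name below is a placeholder; see the repository's branch list).
# model = AutoModelForCausalLM.from_pretrained(model_id, revision="step-10000")
```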
## Models
The main branch of the repository contains the best-performing model that was evaluated in the paper. Other checkpoints produced during training will also be hosted in this repository under different branch names (also called "revisions" on Hugging Face), with each branch name indicating the number of training steps.
* No Parallel: [nusnlp/JGP-No-Parallel](https://huggingface.co/nusnlp/JGP-No-Parallel)
* Multilingual: [nusnlp/JGP-Multilingual](https://huggingface.co/nusnlp/JGP-Multilingual)
* Parallel Non-Adjacent: [nusnlp/JGP-Parallel-Non-Adjacent](https://huggingface.co/nusnlp/JGP-Parallel-Non-Adjacent)
* Parallel First: [nusnlp/JGP-Parallel-First](https://huggingface.co/nusnlp/JGP-Parallel-First)
* Parallel Distributed: [nusnlp/JGP-Parallel-Distributed](https://huggingface.co/nusnlp/JGP-Parallel-Distributed)
* Parallel Last (all): [nusnlp/JGP-Parallel-Last-all](https://huggingface.co/nusnlp/JGP-Parallel-Last-all)
* Parallel Last (uni):
* EN→ID: [nusnlp/JGP-Parallel-Last-EN-ID](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ID)
* ID→EN: [nusnlp/JGP-Parallel-Last-ID-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ID-EN)
* EN→ZH: [nusnlp/JGP-Parallel-Last-EN-ZH](https://huggingface.co/nusnlp/JGP-Parallel-Last-EN-ZH)
* ZH→EN: [nusnlp/JGP-Parallel-Last-ZH-EN](https://huggingface.co/nusnlp/JGP-Parallel-Last-ZH-EN)
|
thanhsc02/gemma-12b-it-lora-newdatax20-5epoch
|
thanhsc02
| 2025-06-20T10:18:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T10:18:03Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanhsc02
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ummykk/my-bert-fine-tuned
|
ummykk
| 2025-06-20T10:17:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T10:17:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FormlessAI/96666b47-f541-425d-8532-85351e093bee
|
FormlessAI
| 2025-06-20T10:13:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/tinyllama-chat",
"base_model:finetune:unsloth/tinyllama-chat",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T08:21:07Z |
---
base_model: unsloth/tinyllama-chat
library_name: transformers
model_name: 96666b47-f541-425d-8532-85351e093bee
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for 96666b47-f541-425d-8532-85351e093bee
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/96666b47-f541-425d-8532-85351e093bee", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/qefc1t5g)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bunnycore/Qwen3-4B-RP-V2
|
bunnycore
| 2025-06-20T10:13:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Hastagaras/Qibil-4B-v0.1-RP",
"base_model:merge:Hastagaras/Qibil-4B-v0.1-RP",
"base_model:bunnycore/Qwen3-4B-RP",
"base_model:merge:bunnycore/Qwen3-4B-RP",
"base_model:fakezeta/amoral-Qwen3-4B",
"base_model:merge:fakezeta/amoral-Qwen3-4B",
"base_model:mlabonne/Qwen3-4B-abliterated",
"base_model:merge:mlabonne/Qwen3-4B-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T10:06:39Z |
---
base_model:
- mlabonne/Qwen3-4B-abliterated
- Hastagaras/Qibil-4B-v0.1-RP
- fakezeta/amoral-Qwen3-4B
- bunnycore/Qwen3-4B-RP
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mlabonne/Qwen3-4B-abliterated](https://huggingface.co/mlabonne/Qwen3-4B-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [Hastagaras/Qibil-4B-v0.1-RP](https://huggingface.co/Hastagaras/Qibil-4B-v0.1-RP)
* [fakezeta/amoral-Qwen3-4B](https://huggingface.co/fakezeta/amoral-Qwen3-4B)
* [bunnycore/Qwen3-4B-RP](https://huggingface.co/bunnycore/Qwen3-4B-RP)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Hastagaras/Qibil-4B-v0.1-RP
parameters:
density: 0.5
weight: 0.5
- model: fakezeta/amoral-Qwen3-4B
parameters:
density: 0.3
weight: 0.3
- model: bunnycore/Qwen3-4B-RP
parameters:
density: 0.3
weight: 0.3
merge_method: ties
base_model: mlabonne/Qwen3-4B-abliterated
parameters:
normalize: false
int8_mask: true
dtype: float16
```
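A minimal inference sketch for the merged model, assuming the standard `transformers` chat-template API (prompt and generation settings are illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Qwen3-4B-RP-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build a chat prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```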
|
willystumblr/2025-06-20-13-45-17
|
willystumblr
| 2025-06-20T10:05:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T10:05:33Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
model_name: 2025-06-20-13-45-17
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 2025-06-20-13-45-17
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="willystumblr/2025-06-20-13-45-17", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/willystumblr/persona-craft/runs/6pbnwn89)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kernels-community/triton-layer-norm
|
kernels-community
| 2025-06-20T10:05:39Z | 0 | 0 | null |
[
"kernel",
"license:bsd-3-clause",
"region:us"
] | null | 2025-03-12T09:44:44Z |
---
license: bsd-3-clause
tags:
- kernel
---
# Triton layer normalization kernels.
This kernel implements layer normalization using Triton. It is taken from
the [flash-attention](https://github.com/Dao-AILab/flash-attention) project.
## Functions
### Function `layer_norm`
`(x: torch.Tensor, weight: torch.Tensor, bias: torch.Tensor, residual: Optional[torch.Tensor] = None, x1: Optional[torch.Tensor] = None, weight1: Optional[torch.Tensor] = None, bias1: Optional[torch.Tensor] = None, eps: float = 1e-06, dropout_p: float = 0.0, rowscale=None, prenorm: bool = False, residual_in_fp32: bool = False, is_rms_norm: bool = False, return_dropout_mask: bool = False, out: Optional[torch.Tensor] = None, residual_out: Optional[torch.Tensor] = None)`
Apply layer normalization to the input tensor with Triton acceleration.
### Parameters
- **x** (*torch.Tensor*) --
Input tensor to normalize.
- **weight** (*torch.Tensor*) --
Scale parameter for normalization.
- **bias** (*torch.Tensor*) --
Shift parameter for normalization.
- **residual** (*torch.Tensor*, *optional*) --
Optional residual tensor to add to the input before normalization.
- **x1** (*torch.Tensor*, *optional*) --
Optional second input tensor to combine with *x*. When provided, the function
first adds *x1* to *x* and then applies normalization.
- **weight1** (*torch.Tensor*, *optional*) --
Scale parameter for the second normalization.
- **bias1** (*torch.Tensor*, *optional*) --
Shift parameter for the second normalization.
- **eps** (*float*, *optional*, defaults to 1e-6) --
Small constant added for numerical stability in normalization.
- **dropout_p** (*float*, *optional*, defaults to 0.0) --
Dropout probability. If greater than 0, applies dropout to the input before
normalization and residual addition.
- **rowscale** (*torch.Tensor*, *optional*) --
Optional scaling factor applied to each row of the input tensor.
Not compatible with the use of *x1*.
- **prenorm** (*bool*, *optional*, defaults to False) --
If True, returns both the normalized output and the unnormalized input+residual.
- **residual_in_fp32** (*bool*, *optional*, defaults to False) --
If True, performs the residual connection in FP32 precision.
- **is_rms_norm** (*bool*, *optional*, defaults to False) --
If True, uses RMS normalization instead of layer normalization.
- **return_dropout_mask** (*bool*, *optional*, defaults to False) --
If True, returns the dropout mask used for the computation.
- **out** (*torch.Tensor*, *optional*) --
Output tensor for the normalized result. If *None*, a new tensor is allocated.
- **residual_out** (*torch.Tensor*, *optional*) --
Output tensor for the residual result when using prenorm. If *None*, a new tensor
is allocated when needed.
### Returns
**Type**: *torch.Tensor* or tuple of *torch.Tensor*
- The normalized input.
- The second normalization of the input if *weight1* is provided.
- The residual tensor if *prenorm* is set.
- The dropout mask if *return_dropout_mask* is set.
- The dropout mask for *x1* if *x1* is provided and *return_dropout_mask* is set.
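A hedged usage sketch follows. Loading through the Hugging Face `kernels` package (`get_kernel`) is an assumption about how kernels-community repositories are consumed; the call itself follows the `layer_norm` signature documented above:
```python
import torch
from kernels import get_kernel  # assumed loader for kernels-community repositories

# Fetch the kernel from the Hub (assumption: this repo id resolves via get_kernel).
layer_norm_kernel = get_kernel("kernels-community/triton-layer-norm")

hidden = 1024
x = torch.randn(8, hidden, device="cuda", dtype=torch.float16)
weight = torch.ones(hidden, device="cuda", dtype=torch.float16)
bias = torch.zeros(hidden, device="cuda", dtype=torch.float16)

# Plain layer normalization; set is_rms_norm=True for RMS normalization instead.
out = layer_norm_kernel.layer_norm(x, weight, bias, eps=1e-6)
print(out.shape)  # torch.Size([8, 1024])
```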
## Layers
### Class `LlamaRMSNorm`
No documentation available.
#### Methods
##### Method `forward`
`(self, hidden_states: torch.Tensor) -> torch.Tensor`
No documentation available.
|
vjay98/distilbert-base-uncased-finetuned-clinc
|
vjay98
| 2025-06-20T10:03:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T07:06:41Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2590
- Accuracy: 0.9410
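A minimal inference sketch (the example utterance is illustrative only; returned label names depend on the saved id2label mapping):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vjay98/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Can you help me book a flight to Boston next Friday?"))
```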
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2872 | 1.0 | 954 | 1.8035 | 0.8223 |
| 1.2916 | 2.0 | 1908 | 0.5954 | 0.9148 |
| 0.3726 | 3.0 | 2862 | 0.3311 | 0.9358 |
| 0.1488 | 4.0 | 3816 | 0.2732 | 0.9410 |
| 0.0911 | 5.0 | 4770 | 0.2590 | 0.9410 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mchettih/financial_Llama-3.2-3B-Instruct_teacher_finetuned
|
mchettih
| 2025-06-20T10:01:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T09:57:52Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mchettih
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
diegolacomba/multilingual-e5-small-mlm-legal-4
|
diegolacomba
| 2025-06-20T10:01:09Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T10:00:53Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("diegolacomba/multilingual-e5-small-mlm-legal-4")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
zerostratos/vi_book_qwen3-1.7B
|
zerostratos
| 2025-06-20T09:58:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T09:56:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mchettih/financial_Llama-3.2-3B-Instruct_teacher_finetuned_finetuned
|
mchettih
| 2025-06-20T09:57:42Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T09:57:40Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mchettih
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LarryAIDraw/Belfast_Simisir
|
LarryAIDraw
| 2025-06-20T09:52:15Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T09:15:18Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/50516?modelVersionId=409247
|
LarryAIDraw/hsr-feixiao-ponyxl-lora-nochekaiser
|
LarryAIDraw
| 2025-06-20T09:51:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T09:12:54Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/746845/feixiao-honkai-star-rail
|
likewendy/red-dot_yolo11n_object_detection
|
likewendy
| 2025-06-20T09:50:56Z | 0 | 0 | null |
[
"yolo",
"yolo11",
"red-dot",
"object-detection",
"zh",
"dataset:likewendy/red-dot_yolo11n_object_detection",
"base_model:Ultralytics/YOLO11",
"base_model:finetune:Ultralytics/YOLO11",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2025-06-20T09:40:22Z |
---
license: gpl-3.0
datasets:
- likewendy/red-dot_yolo11n_object_detection
language:
- zh
base_model:
- Ultralytics/YOLO11
pipeline_tag: object-detection
tags:
- yolo
- yolo11
- red-dot
---

Its performance is not outstanding; it can just about work when there is not much interference.



|
morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-18-2025-06-20
|
morturr
| 2025-06-20T09:50:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T09:50:19Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-18-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-18-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
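Since this repository stores a PEFT (LoRA) adapter, a hedged loading sketch is to load the gated base model first and attach the adapter on top (the prompt below is illustrative only):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # gated; requires an accepted license and HF authentication
adapter_id = "morturr/Llama-2-7b-hf-PAIR_dadjokes_one_liners-COMB-dadjokes-comb-3-seed-18-2025-06-20"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Tell me a short one-liner joke:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```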
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
FanMeipuru/myFinetunedModel
|
FanMeipuru
| 2025-06-20T09:49:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T02:33:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alexsheiko/setfit-email-model
|
alexsheiko
| 2025-06-20T09:47:18Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"region:us"
] |
text-classification
| 2025-06-20T09:46:52Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '"The Impact of Assessment for 21 st Century Skills in Higher Education Institutions:
A Narrative Literature Review" by Rany Sam You read the paper Assessing 21st century
skills: Integrating research findings. We found a related paper on Academia:\r\n\r\nThe
Impact of Assessment for 21 st Century Skills in Higher Education Institutions:
A Narrative Literature Review\r\nPaper Thumbnail\t\r\nAuthor Photo Rany Sam\r\n2024,
Multitech Publisher\r\n23 Views \r\nView PDF \u25B8\r\n \t\t\r\nDownload PDF \u2B07\r\n\r'
- text: '[Legal Notice] Update to Google Maps Platform terms and products effective
8 July 2025 \r\nHello Google Maps Platform customer,\r\n\r\nWe''re writing to
let you know about some important updates to the Google Maps Platform (GMP) Terms
of Service (ToS) and our product offerings for customers with any GMP project
linked to a billing account with an address in the European Economic Area (EEA
customers). These updates will be effective on 8 July 2025.\r\n\r\nThe changes
to our terms are a result of a recent proc'
- text: Update on our sub-processors list Dear Business Partner,\r\n\r\n \r\n\r\nTo
support our objectives of operational excellence and compliance with industry
best practices, we continuously monitor the best options to deliver our products
and services. \r\n\r\n \r\n\r\nAs of June 9, 2025 (for Enterprise Organizations
July 9, 2025), our current list of sub-processors will be replaced by the updated
list available here. No action is required on your part, and you may continue
to use your account as usual.\r\n\r\n
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/all-MiniLM-L6-v2
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 👨⚖️ Legal | <ul><li>'Airmoney Expiration Policy Update Hi alex ,\\r\\n\\r\\n \\r\\n\\r\\nFrom February 1, 2025, your Airmoney can expire \\u2014 this will always apply to your total balance, not partial amounts of Airmoney. \\r\\n\\r\\n \\r\\n\\r\\nYour Airmoney balance is set to expire on 1st February 2026.\\r\\n\\r\\nYour current Airmoney balance is 10.88 USD*.\\r\\n\\r\\n \\r\\n\\r\\nBelow, you\\u2019ll find details to help you understand how this change applies to you.\\r\\n\\r\\n \\r\\n\\r\\nDoes my Airmoney balance have to expire?\\r\\n\\r\\n \\r\\n\\r\\nNo, your Air'</li><li>'Meta Privacy Policy update Meta Privacy Policy update\\r\\n \\r\\nHi Alex,\\r\\n \\r\\nWe\\u2019re updating the Meta Privacy Policy to clarify some details.\\r\\n \\r\\nWhat you should know\\r\\n \\r\\nHere are the details that this update clarifies:\\r\\n \\r\\n\\u2022\\tHow we use information from third parties\\r\\n\\u2022\\tLegitimate interests is now our legal basis for using your information to improve Meta Products. Learn what this means for your rights\\r\\n\\u2022\\tWhen your information can be accessible to search engines\\r\\n \\'</li><li>"Google Play Developer Program Policy Update DEVELOPER UPDATE\\r\\nHello Google Play Developer,\\r\\nTo give users more control over their data, we're updating our Health Connect policy to strengthen safeguards regarding the handling of sensitive health record data. Health Connect is an Android platform that allows health and fitness apps to store and share the same on-device data, within a unified ecosystem. It also offers a single place for users to control which apps can read and write health and fitness data"</li></ul> |
| 👮🏽♂️ Security | <ul><li>"Petcube security: Sign-in notifications Hi, alexeysheiko.\\r\\n\\r\\nWe noticed a recent login to your Petcube account.\\r\\n\\r\\nTimestamp (UTC): 2025-05-04T08:19:58+00:00\\r\\n\\r\\nIP address: 85.114.207.94\\r\\n\\r\\nIf this was you, no action is required. If this wasn't you, follow the link to secure your account. Reset password\\r\\nWags & Purrs,\\r\\nPetcube Team"</li><li>'A new device is using your account A new device is using your account\\r\\nHi Oleksii,\\r\\nA new device signed in to your Netflix account, [email protected].\\r\\n \\r\\nThe details\\r\\nDevice\\r\\nMac Chrome - Web Browser\\r\\nLocation\\r\\nMazovia, Poland\\r\\n(This location may not be exact.)\\r\\nTime\\r\\nJune 19th, 3:21 PM GMT+3\\r\\n \\r\\nIf this was you or someone in your household:\\r\\nEnjoy watching!\\r\\nIf it was someone else:\\r\\nPlease remember that we only allow the people in your household to use your account.\\r'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("alexsheiko/setfit-email-model")
# Run inference
preds = model("\"The Impact of Assessment for 21 st Century Skills in Higher Education Institutions: A Narrative Literature Review\" by Rany Sam You read the paper Assessing 21st century skills: Integrating research findings. We found a related paper on Academia:\r\n\r\nThe Impact of Assessment for 21 st Century Skills in Higher Education Institutions: A Narrative Literature Review\r\nPaper Thumbnail\t\r\nAuthor Photo Rany Sam\r\n2024, Multitech Publisher\r\n23 Views \r\nView PDF \u25B8\r\n \t\t\r\nDownload PDF \u2B07\r\n\r")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 9 | 59.875 | 79 |
| Label | Training Sample Count |
|:---------------|:----------------------|
| 👨⚖️ Legal | 6 |
| 👮🏽♂️ Security | 2 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 30
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
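A hedged training sketch of how these values map onto SetFit's `Trainer`/`TrainingArguments` (the hyperparameter names above are dumped from `TrainingArguments`, so they are reused directly; the tiny inline dataset is a placeholder):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder training data with the "text"/"label" columns SetFit expects.
train_dataset = Dataset.from_dict({
    "text": [
        "We are updating our Terms of Service effective next month.",
        "Our privacy policy has changed; review the new data clauses.",
        "A new device signed in to your account from an unknown location.",
        "We noticed a recent login to your account; reset your password if this wasn't you.",
    ],
    "label": ["👨⚖️ Legal", "👨⚖️ Legal", "👮🏽♂️ Security", "👮🏽♂️ Security"],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(2, 2),
    num_iterations=30,
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    l2_weight=0.01,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```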
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0333 | 1 | 0.2806 | - |
| 1.6667 | 50 | 0.038 | - |
### Framework Versions
- Python: 3.13.5
- SetFit: 1.1.2
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
askedsads/my-bert-fine-tuned
|
askedsads
| 2025-06-20T09:43:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T09:20:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chipan/indobert-emotion
|
Chipan
| 2025-06-20T09:41:43Z | 41 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"5-emotions",
"sentiment-analysis",
"id",
"dataset:Chipan/indonesia-5-emotion-cls-dataset",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-12T08:35:05Z |
---
license: mit
language:
- id
metrics:
- accuracy
base_model:
- indobenchmark/indobert-base-p1
library_name: transformers
tags:
- 5-emotions
- sentiment-analysis
datasets:
- Chipan/indonesia-5-emotion-cls-dataset
---
# Emotion Classification Model (Bahasa Indonesia)
This is a fine-tuned IndoBERT-based model for emotion classification in Bahasa Indonesia, designed to classify everyday informal or casual Indonesian text into one of five emotional categories. The model is particularly useful for applications involving sentiment analysis, mental health monitoring, journaling platforms, or conversational AI.
## 🧠 Model Overview
- Base Model: indobert-base-p1 (by IndoNLU)
- Fine-tuning Task: Multi-class text classification
- Number of Classes: 5
- Model Format: safetensors
- Tokenizer: indobert-base-p1
## 🏷️ Emotion Classes
The model is trained to classify input text into one of the following five emotions:
| Emotion | Description |
| ----------- |:----------------------:|
| Marah |Angry, Frustration |
| Sedih |Sadness, disappointment |
| Senang |Joy, excitement |
| Stress |Anxiety, mental pressure|
| Bersyukur |Gratitude, thankfulness |
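A minimal inference sketch (the example sentence is illustrative; returned label names may appear as generic ids such as `LABEL_0` depending on the saved `id2label` mapping):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Chipan/indobert-emotion")
print(classifier("Hari ini aku senang banget, semua rencana berjalan lancar!"))
```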
## ⚠️ Limitations
- Best performance on informal Indonesian (conversational, daily tone)
- May struggle with sarcasm, code-switching (mix of Indonesian-English), or domain-specific jargon
|
johngreendr1/769b90b6-3313-4f4f-8647-7d4e610b04db
|
johngreendr1
| 2025-06-20T09:38:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"region:us"
] | null | 2025-06-20T09:38:27Z |
---
base_model: NousResearch/Yarn-Mistral-7b-128k
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
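A minimal loading sketch, assuming this repo is a PEFT adapter for causal language modeling on the base model listed in the metadata (NousResearch/Yarn-Mistral-7b-128k); the prompt below is illustrative only.
```python
# Sketch only: assumes this repo is a PEFT (LoRA) adapter for causal language
# modeling on the base model listed in the metadata. Depending on the base repo,
# trust_remote_code=True may also be required.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Yarn-Mistral-7b-128k"
adapter_id = "johngreendr1/769b90b6-3313-4f4f-8647-7d4e610b04db"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```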
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
varshajs/flan-t5-history-qg-merged
|
varshajs
| 2025-06-20T09:33:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-20T09:32:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
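A minimal sketch with the 🤗 `pipeline` API; the expected prompt format is not documented here, so the `generate question:` prefix is an assumption.
```python
# Sketch only: the expected prompt format is not documented in this card; the
# "generate question:" prefix is an assumption based on common T5 question-generation setups.
from transformers import pipeline

qg = pipeline("text2text-generation", model="varshajs/flan-t5-history-qg-merged")
context = "The French Revolution began in 1789 and radically transformed French society."
print(qg(f"generate question: {context}", max_new_tokens=64)[0]["generated_text"])
```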
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vasanth0475/bert-ta-mlm
|
vasanth0475
| 2025-06-20T09:31:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-20T09:30:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
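A minimal fill-mask sketch; the target language is an assumption ("ta" in the repo name suggests Tamil), and the mask token is read from the tokenizer rather than hard-coded.
```python
# Sketch only: the model card gives no usage details. "ta" in the repo name suggests
# Tamil, but this is an assumption; the mask token is read from the tokenizer to be safe.
from transformers import pipeline

fill = pipeline("fill-mask", model="vasanth0475/bert-ta-mlm")
mask = fill.tokenizer.mask_token
for pred in fill(f"நான் இன்று {mask} சாப்பிட்டேன்."):
    print(pred["token_str"], round(pred["score"], 3))
```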
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Qwen/Qwen3-Embedding-4B
|
Qwen
| 2025-06-20T09:30:56Z | 38,780 | 74 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen3",
"text-generation",
"transformers",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"arxiv:2506.05176",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-03T14:31:33Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B-Base
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
- text-embeddings-inference
---
# Qwen3-Embedding-4B
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>
## Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embeddings and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks **No.1** in the MTEB multilingual leaderboard (as of June 5, 2025, score **70.58**), while the reranking model excels in various text retrieval scenarios.
**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
## Model Overview
**Qwen3-Embedding-4B** has the following features:
- Model Type: Text Embedding
- Supported Languages: 100+ Languages
- Number of Parameters: 4B
- Context Length: 32k
- Embedding Dimension: Up to 2560, supports user-defined output dimensions ranging from 32 to 2560
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/), [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
## Qwen3 Embedding Series Model list
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
## Usage
With Transformers versions earlier than 4.51.0, you may encounter the following error:
```
KeyError: 'qwen3'
```
### Sentence Transformers Usage
```python
# Requires transformers>=4.51.0
# Requires sentence-transformers>=2.7.0
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("Qwen/Qwen3-Embedding-4B")
# We recommend enabling flash_attention_2 for better acceleration and memory saving,
# together with setting `padding_side` to "left":
# model = SentenceTransformer(
# "Qwen/Qwen3-Embedding-4B",
# model_kwargs={"attn_implementation": "flash_attention_2", "device_map": "auto"},
# tokenizer_kwargs={"padding_side": "left"},
# )
# The queries and documents to embed
queries = [
"What is the capital of China?",
"Explain gravity",
]
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
# Encode the queries and documents. Note that queries benefit from using a prompt
# Here we use the prompt called "query" stored under `model.prompts`, but you can
# also pass your own prompt via the `prompt` argument
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Compute the (cosine) similarity between the query and document embeddings
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
# tensor([[0.7534, 0.1147],
# [0.0320, 0.6258]])
```
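Because the model supports MRL, a smaller output dimension can also be requested directly at load time. A minimal sketch, assuming `sentence-transformers>=2.7.0`, where the `truncate_dim` argument is available:
```python
# Sketch: request a smaller output dimension via MRL (assumes sentence-transformers>=2.7.0,
# where the truncate_dim argument is available).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-4B", truncate_dim=1024)
embeddings = model.encode(["What is the capital of China?"], prompt_name="query")
print(embeddings.shape)  # (1, 1024)
```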
### Transformers Usage
```python
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery:{query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'What is the capital of China?'),
get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-4B', padding_side='left')
model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B')
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda()
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(
input_texts,
padding=True,
truncation=True,
max_length=max_length,
return_tensors="pt",
)
batch_dict.to(model.device)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# [[0.7534257769584656, 0.1146894246339798], [0.03198453038930893, 0.6258305311203003]]
```
### vLLM Usage
```python
# Requires vllm>=0.8.5
import torch
import vllm
from vllm import LLM
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery:{query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'What is the capital of China?'),
get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
model = LLM(model="Qwen/Qwen3-Embedding-4B", task="embed")
outputs = model.embed(input_texts)
embeddings = torch.tensor([o.outputs.embedding for o in outputs])
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# [[0.7525103688240051, 0.1143278032541275], [0.030893627554178238, 0.6239761114120483]]
```
📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
### Text Embeddings Inference (TEI) Usage
You can either run / deploy TEI on NVIDIA GPUs as:
```bash
docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.7.2 --model-id Qwen/Qwen3-Embedding-4B --dtype float16
```
Or on CPU devices as:
```bash
docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.2 --model-id Qwen/Qwen3-Embedding-4B --dtype float16
```
And then, generate the embeddings sending a HTTP POST request as:
```bash
curl http://localhost:8080/embed \
-X POST \
-d '{"inputs": ["Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: What is the capital of China?", "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: Explain gravity"]}' \
-H "Content-Type: application/json"
```
## Evaluation
### MTEB (Multilingual)
| Model | Size | Mean (Task) | Mean (Type) | Bitext Mining | Class. | Clust. | Inst. Retri. | Multi. Class. | Pair. Class. | Rerank | Retri. | STS |
|----------------------------------|:-------:|:-------------:|:-------------:|:--------------:|:--------:|:--------:|:--------------:|:---------------:|:--------------:|:--------:|:--------:|:------:|
| NV-Embed-v2 | 7B | 56.29 | 49.58 | 57.84 | 57.29 | 40.80 | 1.04 | 18.63 | 78.94 | 63.82 | 56.72 | 71.10|
| GritLM-7B | 7B | 60.92 | 53.74 | 70.53 | 61.83 | 49.75 | 3.45 | 22.77 | 79.94 | 63.78 | 58.31 | 73.33|
| BGE-M3 | 0.6B | 59.56 | 52.18 | 79.11 | 60.35 | 40.88 | -3.11 | 20.1 | 80.76 | 62.79 | 54.60 | 74.12|
| multilingual-e5-large-instruct | 0.6B | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81|
| gte-Qwen2-1.5B-instruct | 1.5B | 59.45 | 52.69 | 62.51 | 58.32 | 52.05 | 0.74 | 24.02 | 81.58 | 62.58 | 60.78 | 71.61|
| gte-Qwen2-7b-Instruct | 7B | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98|
| text-embedding-3-large | - | 58.93 | 51.41 | 62.17 | 60.27 | 46.89 | -2.68 | 22.03 | 79.17 | 63.89 | 59.27 | 71.68|
| Cohere-embed-multilingual-v3.0 | - | 61.12 | 53.23 | 70.50 | 62.95 | 46.89 | -1.89 | 22.74 | 79.88 | 64.07 | 59.16 | 74.80|
| gemini-embedding-exp-03-07 | - | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | **29.16** | 83.63 | 65.58 | 67.71 | 79.40|
| **Qwen3-Embedding-0.6B** | 0.6B | 64.33 | 56.00 | 72.22 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.64 | 76.17|
| **Qwen3-Embedding-4B** | 4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | **11.56** | 26.77 | 85.05 | 65.08 | 69.60 | 80.86|
| **Qwen3-Embedding-8B** | 8B | **70.58** | **61.69** | **80.89** | **74.00** | **57.65** | 10.06 | 28.66 | **86.40** | **65.63** | **70.88** | **81.08** |
> **Note**: For compared models, the scores are retrieved from MTEB online [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) on May 24th, 2025.
### MTEB (Eng v2)
| MTEB English / Models | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retri. | STS | Summ. |
|--------------------------------|:--------:|:------------:|:------------:|:--------:|:--------:|:-------------:|:---------:|:--------:|:-------:|:-------:|
| multilingual-e5-large-instruct | 0.6B | 65.53 | 61.21 | 75.54 | 49.89 | 86.24 | 48.74 | 53.47 | 84.72 | 29.89 |
| NV-Embed-v2 | 7.8B | 69.81 | 65.00 | 87.19 | 47.66 | 88.69 | 49.61 | 62.84 | 83.82 | 35.21 |
| GritLM-7B | 7.2B | 67.07 | 63.22 | 81.25 | 50.82 | 87.29 | 49.59 | 54.95 | 83.03 | 35.65 |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.20 | 63.26 | 85.84 | 53.54 | 87.52 | 49.25 | 50.25 | 82.51 | 33.94 |
| stella_en_1.5B_v5 | 1.5B | 69.43 | 65.32 | 89.38 | 57.06 | 88.02 | 50.19 | 52.42 | 83.27 | 36.91 |
| gte-Qwen2-7B-instruct | 7.6B | 70.72 | 65.77 | 88.52 | 58.97 | 85.9 | 50.47 | 58.09 | 82.69 | 35.74 |
| gemini-embedding-exp-03-07 | - | 73.3 | 67.67 | 90.05 | **59.39** | **87.7** | 48.59 | 64.35 | 85.29 | **38.28** |
| **Qwen3-Embedding-0.6B** | 0.6B | 70.70 | 64.88 | 85.76 | 54.05 | 84.37 | 48.18 | 61.83 | 86.57 | 33.43 |
| **Qwen3-Embedding-4B** | 4B | 74.60 | 68.10 | 89.84 | 57.51 | 87.01 | 50.76 | 68.46 | **88.72** | 34.39 |
| **Qwen3-Embedding-8B** | 8B | **75.22** | **68.71** | **90.43** | 58.57 | 87.52 | **51.56** | **69.44** | 88.58 | 34.83 |
### C-MTEB (MTEB Chinese)
| C-MTEB | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retr. | STS |
|------------------|--------|------------|------------|--------|--------|-------------|---------|-------|-------|
| multilingual-e5-large-instruct | 0.6B | 58.08 | 58.24 | 69.80 | 48.23 | 64.52 | 57.45 | 63.65 | 45.81 |
| bge-multilingual-gemma2 | 9B | 67.64 | 68.52 | 75.31 | 59.30 | 86.67 | 68.28 | 73.73 | 55.19 |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.12 | 67.79 | 72.53 | 54.61 | 79.5 | 68.21 | 71.86 | 60.05 |
| gte-Qwen2-7B-instruct | 7.6B | 71.62 | 72.19 | 75.77 | 66.06 | 81.16 | 69.24 | 75.70 | 65.20 |
| ritrieve_zh_v1 | 0.3B | 72.71 | 73.85 | 76.88 | 66.5 | **85.98** | **72.86** | 76.97 | **63.92** |
| **Qwen3-Embedding-0.6B** | 0.6B | 66.33 | 67.45 | 71.40 | 68.74 | 76.42 | 62.58 | 71.03 | 54.52 |
| **Qwen3-Embedding-4B** | 4B | 72.27 | 73.51 | 75.46 | 77.89 | 83.34 | 66.05 | 77.03 | 61.26 |
| **Qwen3-Embedding-8B** | 8B | **73.84** | **75.00** | **76.97** | **80.08** | 84.23 | 66.99 | **78.21** | 63.53 |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen3embedding,
title={Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models},
author={Zhang, Yanzhao and Li, Mingxin and Long, Dingkun and Zhang, Xin and Lin, Huan and Yang, Baosong and Xie, Pengjun and Yang, An and Liu, Dayiheng and Lin, Junyang and Huang, Fei and Zhou, Jingren},
journal={arXiv preprint arXiv:2506.05176},
year={2025}
}
```
|
pengye91/gemma-2-2B-it-thinking-function_calling-V0
|
pengye91
| 2025-06-20T09:27:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T09:20:27Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="pengye91/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nnilayy/dreamer-valence-multi-classification-Kfold-4
|
nnilayy
| 2025-06-20T09:26:49Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T09:26:47Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
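The card does not name the model class, so the checkpoint cannot be loaded without it; the toy class below only illustrates the mixin's save/load pattern in general, assuming a recent `huggingface_hub`.
```python
# Generic illustration of the PyTorchModelHubMixin pattern (assumes a recent
# huggingface_hub). ToyClassifier is NOT the architecture behind this checkpoint;
# the real class would be needed to load it.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class ToyClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_dim: int = 32, num_classes: int = 3):
        super().__init__()
        self.net = nn.Linear(in_dim, num_classes)

    def forward(self, x):
        return self.net(x)

model = ToyClassifier(in_dim=32, num_classes=3)
model.save_pretrained("toy-classifier")                     # writes config.json + weights
reloaded = ToyClassifier.from_pretrained("toy-classifier")  # round-trips config + weights
```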
|
Triangle104/Yanfei-v2-Qwen3-32B-Q4_K_S-GGUF
|
Triangle104
| 2025-06-20T09:26:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"dataset:nbeerbower/YanfeiMix-DPO",
"base_model:nbeerbower/Yanfei-v2-Qwen3-32B",
"base_model:quantized:nbeerbower/Yanfei-v2-Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T09:25:13Z |
---
base_model: nbeerbower/Yanfei-v2-Qwen3-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
datasets:
- nbeerbower/YanfeiMix-DPO
---
# Triangle104/Yanfei-v2-Qwen3-32B-Q4_K_S-GGUF
This model was converted to GGUF format from [`nbeerbower/Yanfei-v2-Qwen3-32B`](https://huggingface.co/nbeerbower/Yanfei-v2-Qwen3-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Yanfei-v2-Qwen3-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Yanfei-v2-Qwen3-32B-Q4_K_S-GGUF --hf-file yanfei-v2-qwen3-32b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Yanfei-v2-Qwen3-32B-Q4_K_S-GGUF --hf-file yanfei-v2-qwen3-32b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Yanfei-v2-Qwen3-32B-Q4_K_S-GGUF --hf-file yanfei-v2-qwen3-32b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Yanfei-v2-Qwen3-32B-Q4_K_S-GGUF --hf-file yanfei-v2-qwen3-32b-q4_k_s.gguf -c 2048
```
|