**Dataset schema** (column types and value ranges):

| Column | Type | Min | Max |
|:--------------|:----------------------|:--------------------|:--------------------|
| modelId | string | 5 chars | 139 chars |
| author | string | 2 chars | 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-27 06:27:44 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (533 classes) | | |
| tags | list | 1 item | 4.05k items |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-27 06:27:36 |
| card | string | 11 chars | 1.01M chars |
- **modelId:** winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_6_1_49
- **author:** winnieyangwannan
- **last_modified:** 2025-06-19T09:01:49Z
- **downloads:** 13
- **likes:** 0
- **library_name:** transformers
- **tags:** `transformers`, `safetensors`, `qwen2_5_vl`, `image-text-to-text`, `conversational`, `arxiv:1910.09700`, `text-generation-inference`, `endpoints_compatible`, `region:us`
- **pipeline_tag:** image-text-to-text
- **createdAt:** 2025-06-11T23:19:17Z
- **card:**
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
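Since the card leaves this section blank, here is a minimal, hedged loading sketch inferred only from the repo's tags (`qwen2_5_vl`, `image-text-to-text`); the intended usage may differ, and the `AutoModelForImageTextToText` mapping assumes a recent transformers release.

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

# Repo id comes from this card's metadata; the model class is inferred from
# the card's tags and is an assumption, not something the card confirms.
model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_6_1_49"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)
```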
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
- **modelId:** seroe/Qwen3-Reranker-0.6B-turkish-triplet
- **author:** seroe
- **last_modified:** 2025-06-19T09:01:07Z
- **downloads:** 12
- **likes:** 0
- **library_name:** sentence-transformers
- **tags:** `sentence-transformers`, `safetensors`, `qwen3`, `cross-encoder`, `generated_from_trainer`, `dataset_size:215676`, `loss:CachedMultipleNegativesRankingLoss`, `text-ranking`, `tr`, `dataset:seroe/vodex-turkish-triplets-large`, `arxiv:1908.10084`, `base_model:Qwen/Qwen3-Reranker-0.6B`, `base_model:finetune:Qwen/Qwen3-Reranker-0.6B`, `license:apache-2.0`, `model-index`, `region:us`
- **pipeline_tag:** text-ranking
- **createdAt:** 2025-06-16T20:04:30Z
- **card:**
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:215676
- loss:CachedMultipleNegativesRankingLoss
base_model: Qwen/Qwen3-Reranker-0.6B
datasets:
- seroe/vodex-turkish-triplets-large
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: Qwen/Qwen3-Reranker-0.6B
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: val hard
type: val-hard
metrics:
- type: map
value: 0.7818
name: Map
- type: mrr@10
value: 0.782
name: Mrr@10
- type: ndcg@10
value: 0.8364
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: test hard
type: test-hard
metrics:
- type: map
value: 0.7816
name: Map
- type: mrr@10
value: 0.7819
name: Mrr@10
- type: ndcg@10
value: 0.8362
name: Ndcg@10
---
# Qwen/Qwen3-Reranker-0.6B
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [Qwen/Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) on the [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [Qwen/Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) <!-- at revision 6e9e69830b95c52b5fd889b7690dda3329508de3 -->
- **Maximum Sequence Length:** 40960 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("seroe/Qwen3-Reranker-0.6B-turkish-triplet")
# Get scores for pairs of texts
pairs = [
['Yeni Red Business VIP tarifesi, yüksek veri ve dakika ihtiyaçları olan işletmeler için tasarlanmış bir premium seçenektir.', 'Red Business VIP, işletmelerin yoğun veri ve konuşma ihtiyaçlarını karşılamak için geliştirilmiş bir üst düzey tarifedir.'],
["Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, belirli tarifeler için 12+12 ay taahhüt karşılığında indirimler sunmaktadır ve kampanya iki dönemden oluşmaktadır.", "Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, 12+12 ay taahhüt veren abonelere belirli tarifelerde ilk 12 ay için 20 TL, ikinci 12 ay için 15 TL indirim sağlamaktadır."],
["Vodafone'un Kolay Paketleri, faturasız hat kullanıcılarına TL yükleme gereksinimi olmadan avantajlı paketler sunar ve her ay otomatik yenilenmez.", "Vodafone'un Kolay Paketleri, faturasız hat kullanıcıları için tasarlanmış olup, TL yükleme zorunluluğu olmadan satın alınabilir ve otomatik yenileme yapılmaz."],
["Samsung Galaxy Note 3 cihazı, Vodafone'un Red tarifeleriyle birlikte aylık ek ödeme seçenekleriyle sunulmuş ve kampanya kodlarıyla desteklenmiştir.", 'Vodafone, Samsung Galaxy Note 3 cihazını Red tarifeleriyle birleştirerek, aylık ek ödeme planları ve kampanya kodlarıyla müşterilere sunmuştur.'],
['Red Elite Extra tarifesi, 36 aylık taahhütle 40 TL başlangıç fiyatı ve 165 TL üst fiyat seçeneğiyle sona eren kampanyalar arasında yer almıştır.', "Vodafone'un sona eren kampanyaları arasında yer alan Red Elite Extra tarifesi, 36 aylık taahhütle 40 TL'den başlayıp 165 TL'ye kadar fiyatlandırılmıştır."],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Yeni Red Business VIP tarifesi, yüksek veri ve dakika ihtiyaçları olan işletmeler için tasarlanmış bir premium seçenektir.',
[
'Red Business VIP, işletmelerin yoğun veri ve konuşma ihtiyaçlarını karşılamak için geliştirilmiş bir üst düzey tarifedir.',
"Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, 12+12 ay taahhüt veren abonelere belirli tarifelerde ilk 12 ay için 20 TL, ikinci 12 ay için 15 TL indirim sağlamaktadır.",
"Vodafone'un Kolay Paketleri, faturasız hat kullanıcıları için tasarlanmış olup, TL yükleme zorunluluğu olmadan satın alınabilir ve otomatik yenileme yapılmaz.",
'Vodafone, Samsung Galaxy Note 3 cihazını Red tarifeleriyle birleştirerek, aylık ek ödeme planları ve kampanya kodlarıyla müşterilere sunmuştur.',
"Vodafone'un sona eren kampanyaları arasında yer alan Red Elite Extra tarifesi, 36 aylık taahhütle 40 TL'den başlayıp 165 TL'ye kadar fiyatlandırılmıştır.",
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `val-hard` and `test-hard`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | val-hard | test-hard |
|:------------|:---------------------|:---------------------|
| map | 0.7818 (+0.0995) | 0.7816 (+0.0987) |
| mrr@10 | 0.7820 (+0.0998) | 0.7819 (+0.0991) |
| **ndcg@10** | **0.8364 (+0.1539)** | **0.8362 (+0.1533)** |
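As a rough illustration, the evaluation above can be reproduced along the following lines; the single sample is a placeholder built from this card's own examples, while the real run feeds the hard-negative samples of the `val-hard` and `test-hard` splits.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

model = CrossEncoder("seroe/Qwen3-Reranker-0.6B-turkish-triplet")

# Placeholder sample: each entry pairs a query with positive and negative documents.
samples = [
    {
        "query": "Yeni Red Business VIP tarifesi, yüksek veri ve dakika ihtiyaçları olan işletmeler için tasarlanmış bir premium seçenektir.",
        "positive": ["Red Business VIP, işletmelerin yoğun veri ve konuşma ihtiyaçlarını karşılamak için geliştirilmiş bir üst düzey tarifedir."],
        "negative": ["Vodafone'un kurumsal tarifeleri, yalnızca küçük işletmelerin düşük veri ihtiyaçlarına odaklanmaktadır."],
    },
]
evaluator = CrossEncoderRerankingEvaluator(samples, at_k=10, always_rerank_positives=True)
print(evaluator(model))  # {'map': ..., 'mrr@10': ..., 'ndcg@10': ...}
```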
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-triplets-large
* Dataset: [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) at [1fe9d63](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large/tree/1fe9d63490a69cb96da6b76f4bff1a43c48cbdee)
* Size: 215,676 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 57 characters</li><li>mean: 141.8 characters</li><li>max: 282 characters</li></ul> | <ul><li>min: 61 characters</li><li>mean: 145.94 characters</li><li>max: 325 characters</li></ul> | <ul><li>min: 62 characters</li><li>mean: 119.94 characters</li><li>max: 235 characters</li></ul> |
* Samples:
| query | positive | negative |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Yeni Red Business VIP tarifesi, yüksek veri ve dakika ihtiyaçları olan işletmeler için tasarlanmış bir premium seçenektir.</code> | <code>Red Business VIP, işletmelerin yoğun veri ve konuşma ihtiyaçlarını karşılamak için geliştirilmiş bir üst düzey tarifedir.</code> | <code>Vodafone'un kurumsal tarifeleri, yalnızca küçük işletmelerin düşük veri ihtiyaçlarına odaklanmaktadır.</code> |
| <code>Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, belirli tarifeler için 12+12 ay taahhüt karşılığında indirimler sunmaktadır ve kampanya iki dönemden oluşmaktadır.</code> | <code>Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, 12+12 ay taahhüt veren abonelere belirli tarifelerde ilk 12 ay için 20 TL, ikinci 12 ay için 15 TL indirim sağlamaktadır.</code> | <code>Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, yalnızca faturasız hat kullanıcılarına özel olarak tasarlanmış bir kampanyadır ve taahhüt gerektirmez.</code> |
| <code>Vodafone'un Kolay Paketleri, faturasız hat kullanıcılarına TL yükleme gereksinimi olmadan avantajlı paketler sunar ve her ay otomatik yenilenmez.</code> | <code>Vodafone'un Kolay Paketleri, faturasız hat kullanıcıları için tasarlanmış olup, TL yükleme zorunluluğu olmadan satın alınabilir ve otomatik yenileme yapılmaz.</code> | <code>Vodafone'un Kolay Paketleri, faturalı hat kullanıcılarına özel olarak tasarlanmış ve her ay otomatik olarak yenilenen paketlerdir.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 4,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 32
}
```
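A hedged sketch of instantiating that loss for this cross-encoder, with the parameter values copied from the JSON above (the base checkpoint is loaded the way sentence-transformers would for fine-tuning):

```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

model = CrossEncoder("Qwen/Qwen3-Reranker-0.6B", num_labels=1)
loss = CachedMultipleNegativesRankingLoss(
    model=model,
    scale=10.0,
    num_negatives=4,
    activation_fn=torch.nn.Sigmoid(),
    mini_batch_size=32,  # keeps memory bounded while emulating the large batch
)
```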
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `num_train_epochs`: 2
- `warmup_ratio`: 0.2
- `save_only_model`: True
- `bf16`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `group_by_length`: True
- `batch_sampler`: no_duplicates
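These map onto `CrossEncoderTrainingArguments` roughly as follows; this is a sketch, and `output_dir` is a placeholder.

```python
from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments

args = CrossEncoderTrainingArguments(
    output_dir="qwen3-reranker-turkish-triplet",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=1024,
    learning_rate=1e-6,
    weight_decay=0.08,
    num_train_epochs=2,
    warmup_ratio=0.2,
    save_only_model=True,
    bf16=True,
    dataloader_num_workers=8,
    load_best_model_at_end=True,
    group_by_length=True,
    batch_sampler="no_duplicates",
)
```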
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: True
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: True
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 |
|:------:|:----:|:-------------:|:----------------:|:-----------------:|
| 0.2370 | 100 | 1.192 | 0.7554 (+0.0729) | 0.7552 (+0.0723) |
| 0.4739 | 200 | 0.0214 | 0.7909 (+0.1085) | 0.7892 (+0.1062) |
| 0.7109 | 300 | 0.0066 | 0.8135 (+0.1310) | 0.8115 (+0.1285) |
| 0.9479 | 400 | 0.0048 | 0.8143 (+0.1318) | 0.8141 (+0.1311) |
| 1.1848 | 500 | 0.0034 | 0.8281 (+0.1456) | 0.8270 (+0.1440) |
| 1.4218 | 600 | 0.0028 | 0.8321 (+0.1496) | 0.8319 (+0.1489) |
| 1.6588 | 700 | 0.0027 | 0.8334 (+0.1509) | 0.8333 (+0.1503) |
| 1.8957 | 800 | 0.0026 | 0.8364 (+0.1539) | 0.8362 (+0.1533) |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
- **modelId:** PatheticOTD/ppo-LunarLander-v3
- **author:** PatheticOTD
- **last_modified:** 2025-06-19T09:00:46Z
- **downloads:** 0
- **likes:** 0
- **library_name:** stable-baselines3
- **tags:** `stable-baselines3`, `LunarLander-v3`, `deep-reinforcement-learning`, `reinforcement-learning`, `model-index`, `region:us`
- **pipeline_tag:** reinforcement-learning
- **createdAt:** 2025-06-19T08:57:16Z
- **card:**
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 259.60 +/- 11.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The upstream template leaves this as a TODO; below is a minimal, hedged loading sketch (the checkpoint filename inside the repo is an assumption based on huggingface_sb3's usual naming).
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; list the repo files if it differs.
checkpoint = load_from_hub("PatheticOTD/ppo-LunarLander-v3", "ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
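Continuing from the block above, a brief evaluation sketch to sanity-check the reported mean reward (assumes `gymnasium` is installed with its Box2D extra, which LunarLander requires):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```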
- **modelId:** seroe/bge-m3-turkish-triplet-matryoshka-v2
- **author:** seroe
- **last_modified:** 2025-06-19T09:00:30Z
- **downloads:** 44
- **likes:** 0
- **library_name:** sentence-transformers
- **tags:** `sentence-transformers`, `safetensors`, `xlm-roberta`, `sentence-similarity`, `feature-extraction`, `generated_from_trainer`, `dataset_size:215676`, `loss:MatryoshkaLoss`, `loss:CachedMultipleNegativesRankingLoss`, `tr`, `dataset:seroe/vodex-turkish-triplets-large`, `arxiv:1908.10084`, `arxiv:2205.13147`, `arxiv:2101.06983`, `base_model:BAAI/bge-m3`, `base_model:finetune:BAAI/bge-m3`, `license:apache-2.0`, `model-index`, `autotrain_compatible`, `text-embeddings-inference`, `endpoints_compatible`, `region:us`
- **pipeline_tag:** sentence-similarity
- **createdAt:** 2025-06-17T06:21:10Z
- **card:**
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:215676
- loss:MatryoshkaLoss
- loss:CachedMultipleNegativesRankingLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: Ortak Havuz Tarifesi kapsamında her bir hat için aylık sabit ücret
25,5 TL olarak belirlenmiştir ve bu ücret KDV ile ÖİV dahil olarak hesaplanmıştır.
sentences:
- Vodafone'un Ortak Havuz Tarifesi, müşterilere ücretsiz olarak sunulmaktadır ve
herhangi bir sabit ücret talep edilmemektedir.
- Her bir hat için Ortak Havuz Tarifesi'nin aylık sabit ücreti, vergiler dahil 25,5
TL'dir.
- Geleceğin Karakartalı, Vodafone ve Beşiktaş JK'nin ortak çalışmasıyla genç futbol
yeteneklerini Türkiye çapında keşfetmeyi hedefleyen bir projedir.
- source_sentence: Taşınma durumunda, teknik altyapı eksikliği nedeniyle Vodafone
Evde Fiber İnternet hizmeti sağlanamazsa, cezai işlem uygulanmaz ve alternatif
hizmetlere geçiş yapılabilir.
sentences:
- Vodafone faturasız tarifelerinde, yenileme gününde yeterli bakiye bulunmuyorsa
içerikler yenilenmez, ancak daha önce yüklenen içerikler bakiye olmadan kullanılabilir.
- Vodafone Evde Fiber İnternet hizmeti taşınma sırasında altyapı eksikliği nedeniyle
sağlanamazsa, aboneye ceza uygulanır ve alternatif hizmet sunulmaz.
- Vodafone Evde Fiber İnternet hizmeti taşınma sırasında altyapı eksikliği nedeniyle
sağlanamazsa, aboneye cezai işlem uygulanmaz ve ADSL veya Fiber Hız hizmetine
geçiş yapılabilir.
- source_sentence: Tarife kapsamında sunulan dakika, SMS ve internet hakları, belirli
bir limitin aşılması durumunda ek paketlerle ücretlendirilir.
sentences:
- Vodafone'un tarifelerinde aşım durumunda ek paketler yerine ücretsiz kullanım
hakkı sunulmaktadır.
- Vodafone'un geçmiş kampanyalarında, Samsung Note 3 Neo en pahalı cihazlardan biri
olarak müşterilere sunulmuştur.
- Uyumlu 5 Tarifesi'nde verilen dakika, SMS ve internet hakları bittiğinde, ek paketler
devreye girerek avantajlı fiyatlarla ücretlendirme yapılır.
- source_sentence: Mobil numara taşınabilirliği olan ülkelerdeki Vodafone operatörlerine
yapılan aramalarda, diğer mobil operatörlerin prefikslerini içeren numaralar indirimli
tarifeden yararlanamaz.
sentences:
- Vodafone operatörlerine yapılan aramalarda, mobil numara taşınabilirliği olan
ülkelerdeki diğer operatörlerin prefikslerini içeren numaralar indirimli fiyatlara
dahil edilmez.
- Vodafone'un tarifelerinde ücretsiz pass paketi yer almazken, Her Şey Dahil Pasaport
paketi ücretli olarak sunulmaktadır.
- Dünya Küçük Kurumsal tarifesi, yurtdışı aramalar için tüm numaralara eşit fiyatlandırma
sunar ve prefiks kısıtlaması içermez.
- source_sentence: Vodafone hattınızın aktivasyonu sırasında eksik evraklarınızın
tamamlanması gerektiği belirtilmiştir. Hattınızın iptal olmaması için 3 gün içinde
Vodafone mağazasını ziyaret etmeniz önemlidir.
sentences:
- Vodafone, boşanmış annelerin çocuklarına gençlik tarifesi tanımlaması için kimlik
doğrulama ve kullanıcı tanımlama süreçlerini Cep Merkezlerinde gerçekleştirmektedir.
- Vodafone hattınızın aktivasyonu sırasında herhangi bir evrak eksikliği bulunmamaktadır.
Hattınız otomatik olarak aktif hale gelecektir.
- Vodafone hattınızın aktif hale gelebilmesi için eksik olan abonelik sözleşmesi
ve kimlik belgelerinizi tamamlamanız gerekmektedir. Bu işlemi 3 gün içinde Vodafone
mağazasında gerçekleştirebilirsiniz.
datasets:
- seroe/vodex-turkish-triplets-large
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: BGE-M3 Türkçe Triplet Matryoshka
results:
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 1024d
type: tr-triplet-dev-1024d
metrics:
- type: cosine_accuracy
value: 0.9979135394096375
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 768d
type: tr-triplet-dev-768d
metrics:
- type: cosine_accuracy
value: 0.9978300929069519
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 512d
type: tr-triplet-dev-512d
metrics:
- type: cosine_accuracy
value: 0.9977466464042664
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 256d
type: tr-triplet-dev-256d
metrics:
- type: cosine_accuracy
value: 0.9974962472915649
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 1024d
type: all-nli-test-1024d
metrics:
- type: cosine_accuracy
value: 0.9980804324150085
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 768d
type: all-nli-test-768d
metrics:
- type: cosine_accuracy
value: 0.9980804324150085
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 512d
type: all-nli-test-512d
metrics:
- type: cosine_accuracy
value: 0.9981639385223389
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 256d
type: all-nli-test-256d
metrics:
- type: cosine_accuracy
value: 0.997996985912323
name: Cosine Accuracy
---
# BGE-M3 Türkçe Triplet Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on the [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
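For readers who want to see what this stack computes, here is an equivalent hedged sketch in raw transformers: CLS-token pooling followed by L2 normalization. It assumes the repo's weights load directly with `AutoModel`.

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "seroe/bge-m3-turkish-triplet-matryoshka-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

batch = tokenizer(["Örnek bir cümle."], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state
cls = hidden[:, 0]                                      # (1): CLS-token pooling
embeddings = torch.nn.functional.normalize(cls, dim=1)  # (2): Normalize()
```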
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seroe/bge-m3-turkish-triplet-matryoshka-v2")
# Run inference
sentences = [
'Vodafone hattınızın aktivasyonu sırasında eksik evraklarınızın tamamlanması gerektiği belirtilmiştir. Hattınızın iptal olmaması için 3 gün içinde Vodafone mağazasını ziyaret etmeniz önemlidir.',
'Vodafone hattınızın aktif hale gelebilmesi için eksik olan abonelik sözleşmesi ve kimlik belgelerinizi tamamlamanız gerekmektedir. Bu işlemi 3 gün içinde Vodafone mağazasında gerçekleştirebilirsiniz.',
'Vodafone hattınızın aktivasyonu sırasında herhangi bir evrak eksikliği bulunmamaktadır. Hattınız otomatik olarak aktif hale gelecektir.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `tr-triplet-dev-1024d` and `all-nli-test-1024d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | tr-triplet-dev-1024d | all-nli-test-1024d |
|:--------------------|:---------------------|:-------------------|
| **cosine_accuracy** | **0.9979** | **0.9981** |
#### Triplet
* Datasets: `tr-triplet-dev-768d` and `all-nli-test-768d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | tr-triplet-dev-768d | all-nli-test-768d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.9978** | **0.9981** |
#### Triplet
* Datasets: `tr-triplet-dev-512d` and `all-nli-test-512d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | tr-triplet-dev-512d | all-nli-test-512d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.9977** | **0.9982** |
#### Triplet
* Datasets: `tr-triplet-dev-256d` and `all-nli-test-256d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | tr-triplet-dev-256d | all-nli-test-256d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.9975** | **0.9980** |
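The nearly flat accuracy from 1024 down to 256 dimensions is the intended effect of Matryoshka training: embeddings can be truncated at inference time with minimal quality loss. A hedged usage sketch:

```python
from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the first 256 embedding dimensions.
model = SentenceTransformer("seroe/bge-m3-turkish-triplet-matryoshka-v2", truncate_dim=256)
embeddings = model.encode(["Örnek bir cümle."])
print(embeddings.shape)  # (1, 256)
```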
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-triplets-large
* Dataset: [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) at [1fe9d63](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large/tree/1fe9d63490a69cb96da6b76f4bff1a43c48cbdee)
* Size: 215,676 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 30.74 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.79 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 25.79 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:--------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Vodafone'un Mobil Form Servisi, 12 Şubat 2021 itibarıyla yeni müşteri alımına kapatılmıştır.</code> | <code>12 Şubat 2021 tarihinden itibaren Vodafone'un Mobil Form Servisi yeni kullanıcılar için erişime kapatılmıştır.</code> | <code>Mobil Form Servisi, 2022 yılında yeni müşterilere açılmıştır ve aktif olarak kullanılmaktadır.</code> |
| <code>Paket, VOIP ve P2P gibi hizmetleri desteklemez ve cihazın 4.5G/3G ayarlarının yapılmış olması gereklidir.</code> | <code>Vodafone'un paketi, VOIP ve P2P hizmetlerini içermez ve cihazın 4.5G/3G bağlantı ayarlarının aktif olması gerekir.</code> | <code>Paket, VOIP ve P2P hizmetlerini tamamen destekler ve cihaz ayarlarına gerek olmadan kullanılabilir.</code> |
| <code>Vodafone'un bireysel esnaf tarifeleri, farklı meslek gruplarına özel paket seçenekleriyle geniş bir yelpaze sunar.</code> | <code>Bireysel esnaf tarifeleri, Vodafone tarafından çeşitli meslek gruplarına uygun paketlerle desteklenmektedir.</code> | <code>Vodafone'un bireysel esnaf tarifeleri, yalnızca belirli bir meslek grubuna hitap eder ve diğer meslekler için uygun değildir.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CachedMultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
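In code, that configuration corresponds to something like the following sketch (base model and dimensions taken from this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss, MatryoshkaLoss

model = SentenceTransformer("BAAI/bge-m3")
inner_loss = CachedMultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 256],
    matryoshka_weights=[1, 1, 1, 1],
)
```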
### Evaluation Dataset
#### vodex-turkish-triplets-large
* Dataset: [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) at [1fe9d63](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large/tree/1fe9d63490a69cb96da6b76f4bff1a43c48cbdee)
* Size: 11,982 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 31.25 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 32.27 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 25.83 tokens</li><li>max: 60 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Vodafone'un 'Yarına Hazır Cihaz Kampanyaları' kapsamında farklı marka ve modellerde cihazlar, çeşitli depolama seçenekleri ve renk alternatifleriyle sunulmaktadır.</code> | <code>Vodafone'un cihaz kampanyaları, geniş ürün yelpazesiyle farklı depolama kapasiteleri ve renk seçenekleri sunarak kullanıcıların ihtiyaçlarına hitap etmektedir.</code> | <code>Vodafone'un cihaz kampanyaları yalnızca belirli bir marka ve modelle sınırlıdır, diğer seçenekler sunulmamaktadır.</code> |
| <code>Devreden ve duran tarifeler, kullanıcıların kullanılmayan internet haklarını bir sonraki döneme taşımasına olanak tanır ve ek paketlerle bu haklar genişletilebilir.</code> | <code>Kullanıcılar, devreden ve duran tarifeler sayesinde kullanılmayan internet haklarını bir sonraki aya aktarabilir ve ek paketlerle bu haklarını artırabilir.</code> | <code>Devreden ve duran tarifeler, kullanıcıların internet haklarını bir sonraki döneme taşımasına izin vermez, yalnızca mevcut dönemde kullanım sağlar.</code> |
| <code>Cebinize Uyan İnternet kampanyası, numara taşıma, yeni hat tesisi veya tarifeler arası geçiş yapan abonelere otomatik olarak tanımlanır. Bu haklar, diğer promosyonlardan sonra kullanılabilir.</code> | <code>Vodafone'un kampanyası, numara taşıma, yeni hat açma veya tarifeler arası geçiş yapan abonelere otomatik olarak tanımlanır ve kullanım sırası diğer promosyonlardan sonra gelir.</code> | <code>Evcil hayvan sahiplenme kampanyasında, yeni bir evcil hayvan edinen kişilere ücretsiz mama ve bakım hizmeti sunulur.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CachedMultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 256
- `weight_decay`: 0.01
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.05
- `save_only_model`: True
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: True
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | tr-triplet-dev-1024d_cosine_accuracy | tr-triplet-dev-768d_cosine_accuracy | tr-triplet-dev-512d_cosine_accuracy | tr-triplet-dev-256d_cosine_accuracy | all-nli-test-1024d_cosine_accuracy | all-nli-test-768d_cosine_accuracy | all-nli-test-512d_cosine_accuracy | all-nli-test-256d_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:------------------------------------:|:-----------------------------------:|:-----------------------------------:|:-----------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:|
| -1 | -1 | - | - | 0.9642 | 0.9639 | 0.9632 | 0.9601 | - | - | - | - |
| 0.1132 | 12 | 3.8771 | 0.2379 | 0.9933 | 0.9932 | 0.9930 | 0.9915 | - | - | - | - |
| 0.2264 | 24 | 1.0195 | 0.1654 | 0.9960 | 0.9957 | 0.9958 | 0.9952 | - | - | - | - |
| 0.3396 | 36 | 0.8059 | 0.1331 | 0.9972 | 0.9972 | 0.9971 | 0.9971 | - | - | - | - |
| 0.4528 | 48 | 0.7153 | 0.1198 | 0.9978 | 0.9977 | 0.9977 | 0.9973 | - | - | - | - |
| 0.5660 | 60 | 0.6498 | 0.1131 | 0.9980 | 0.9977 | 0.9977 | 0.9975 | - | - | - | - |
| 0.6792 | 72 | 0.6456 | 0.1091 | 0.9977 | 0.9976 | 0.9977 | 0.9975 | - | - | - | - |
| 0.7925 | 84 | 0.6407 | 0.1077 | 0.9977 | 0.9977 | 0.9977 | 0.9974 | - | - | - | - |
| 0.9057 | 96 | 0.6055 | 0.1056 | 0.9979 | 0.9978 | 0.9977 | 0.9975 | - | - | - | - |
| -1 | -1 | - | - | - | - | - | - | 0.9981 | 0.9981 | 0.9982 | 0.9980 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
- **modelId:** seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet-v2
- **author:** seroe
- **last_modified:** 2025-06-19T08:59:53Z
- **downloads:** 45
- **likes:** 1
- **library_name:** sentence-transformers
- **tags:** `sentence-transformers`, `safetensors`, `cross-encoder`, `generated_from_trainer`, `dataset_size:89964`, `loss:CachedMultipleNegativesRankingLoss`, `text-ranking`, `custom_code`, `tr`, `dataset:seroe/vodex-turkish-triplets-large`, `arxiv:1908.10084`, `base_model:jinaai/jina-reranker-v2-base-multilingual`, `base_model:finetune:jinaai/jina-reranker-v2-base-multilingual`, `license:apache-2.0`, `model-index`, `region:us`
- **pipeline_tag:** text-ranking
- **createdAt:** 2025-06-17T07:00:18Z
- **card:**
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:89964
- loss:CachedMultipleNegativesRankingLoss
base_model: jinaai/jina-reranker-v2-base-multilingual
datasets:
- seroe/vodex-turkish-triplets-large
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: Jina Reranker v2 Base Multilingual fine-tuned on Turkish triplets
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: val hard
type: val-hard
metrics:
- type: map
value: 0.8007
name: Map
- type: mrr@10
value: 0.8049
name: Mrr@10
- type: ndcg@10
value: 0.8553
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: test hard
type: test-hard
metrics:
- type: map
value: 0.7958
name: Map
- type: mrr@10
value: 0.8009
name: Mrr@10
- type: ndcg@10
value: 0.8515
name: Ndcg@10
---
# Jina Reranker v2 Base Multilingual fine-tuned on Turkish triplets
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) on the [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) <!-- at revision eed787badf7784e1a25c0eaa428627c8cbef511e -->
- **Maximum Sequence Length:** 1024 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet-v2")
# Get scores for pairs of texts
pairs = [
['Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.'],
['Kampanya süresince internet hızı nasıl değişebilir?', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.'],
["Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?", "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir."],
['Taahhüt süresi dolmadan internet hizmeti iptal edilirse ne olur?', 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.'],
['Aylık 15 GB ek paketini nereden satın alabilirim?', 'Bu ek paketi almak için hangi kanalları kullanabilirim?'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?',
[
'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.',
'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.',
"Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.",
'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.',
'Bu ek paketi almak için hangi kanalları kullanabilirim?',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
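One caveat not shown above: the base model ships custom modeling code (this repo carries the `custom_code` tag), so loading may additionally require `trust_remote_code=True`. This is an assumption based on the tag, not something the card states.

```python
from sentence_transformers import CrossEncoder

# Assumption: custom modeling code must be trusted for the weights to load.
model = CrossEncoder(
    "seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet-v2",
    trust_remote_code=True,
)
```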
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `val-hard` and `test-hard`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | val-hard | test-hard |
|:------------|:---------------------|:---------------------|
| map | 0.8007 (+0.1606) | 0.7958 (+0.1617) |
| mrr@10 | 0.8049 (+0.1648) | 0.8009 (+0.1667) |
| **ndcg@10** | **0.8553 (+0.2144)** | **0.8515 (+0.2168)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-reranker-triplets
* Dataset: [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) at [ca7d206](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets/tree/ca7d2063ad4fec15fbf739835ab6926e051950c0)
* Size: 89,964 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 characters</li><li>mean: 57.83 characters</li><li>max: 112 characters</li></ul> | <ul><li>min: 35 characters</li><li>mean: 92.19 characters</li><li>max: 221 characters</li></ul> | <ul><li>min: 31 characters</li><li>mean: 78.41 characters</li><li>max: 143 characters</li></ul> |
* Samples:
| query | positive | negative |
|:-------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| <code>Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?</code> | <code>Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.</code> | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri 10 kuruş olarak uygulanmaktadır.</code> |
| <code>Kampanya süresince internet hızı nasıl değişebilir?</code> | <code>Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.</code> | <code>Kampanya süresince internet hızı sabit kalır ve değişiklik yapılamaz.</code> |
| <code>Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?</code> | <code>Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.</code> | <code>Vodafone tarifelerinde KDV ve ÖİV, abonelerin talep etmesi durumunda eklenmektedir.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 4,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 32
}
```
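Given those parameters, constructing the loss would look roughly like this sketch; the base checkpoint path is a placeholder.
```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

model = CrossEncoder("path/to/base-checkpoint")  # placeholder base model

loss = CachedMultipleNegativesRankingLoss(
    model=model,
    scale=10.0,                        # matches the parameters above
    num_negatives=4,
    activation_fn=torch.nn.Sigmoid(),
    mini_batch_size=32,
)
```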
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `num_train_epochs`: 2
- `warmup_ratio`: 0.2
- `save_only_model`: True
- `bf16`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `group_by_length`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: True
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: True
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 |
|:------:|:----:|:-------------:|:----------------:|:-----------------:|
| 0.5682 | 100 | 0.8032 | 0.8556 (+0.2147) | 0.8518 (+0.2171) |
| 1.1364 | 200 | 0.7903 | 0.8556 (+0.2147) | 0.8517 (+0.2169) |
| 1.7045 | 300 | 0.7883 | 0.8553 (+0.2144) | 0.8515 (+0.2168) |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
udayks/Taxi-v3
|
udayks
| 2025-06-19T08:58:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T08:57:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebooks; it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="udayks/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
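Continuing from the snippet above, a greedy rollout with the loaded Q-table might look like the sketch below. The `"qtable"` key and the 5-tuple `step` API are assumptions (they match recent gym/gymnasium releases and the Deep RL course's pickle layout), so verify them against the actual artifact.
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode reward: {total_reward}")
```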
|
yezg/qwen2.5-sqlbot-gguf
|
yezg
| 2025-06-19T08:57:35Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T08:18:50Z |
---
base_model: unsloth/qwen2.5-coder-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yezg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
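Since the repository ships GGUF weights, one quick way to try them locally is through the llama-cpp-python bindings. This is a sketch under stated assumptions: the filename glob must match an actual `.gguf` file in the repo, and the prompt is illustrative.
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="yezg/qwen2.5-sqlbot-gguf",
    filename="*.gguf",  # assumption: adjust to the actual file name in the repo
    n_ctx=2048,
)

out = llm("Write a SQL query that returns the ten most recent orders.", max_tokens=128)
print(out["choices"][0]["text"])
```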
|
John6666/gray-color-25d-model-v10-testing-sdxl
|
John6666
| 2025-06-19T08:56:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"2.5D",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T08:50:42Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2.5D
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is available [here](https://civitai.com/models/1693405/graycolor-25d-model?modelVersionId=1916475).
It was created by [GrayColor](https://civitai.com/user/GrayColor).
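The repository is tagged for the diffusers `StableDiffusionXLPipeline`, so loading should follow the standard SDXL pattern; the prompt below is illustrative only.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/gray-color-25d-model-v10-testing-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, 2.5D anime style, soft lighting").images[0]  # illustrative prompt
image.save("sample.png")
```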
|
makataomu/q-Taxi-v3-v2
|
makataomu
| 2025-06-19T08:55:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T08:49:25Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="makataomu/q-Taxi-v3-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
udayks/q-FrozenLake-v1-4x4-noSlippery
|
udayks
| 2025-06-19T08:55:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T08:51:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="udayks/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)  # this repo was trained on the non-slippery layout
```
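To sanity-check the reported `1.00 +/- 0.00` mean reward, greedy episodes can be averaged as in this sketch; as above, the `"qtable"` pickle key is an assumption to verify against the artifact.
```python
import numpy as np

rewards = []
for _ in range(100):
    state, info = env.reset()
    done, episode_reward = False, 0.0
    while not done:
        action = int(np.argmax(model["qtable"][state]))
        state, reward, terminated, truncated, info = env.step(action)
        episode_reward += reward
        done = terminated or truncated
    rewards.append(episode_reward)
print(f"mean_reward: {np.mean(rewards):.2f} +/- {np.std(rewards):.2f}")
```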
|
Varinder2110/794a1f43-1eb9-4fbb-9178-26a04bc30f0c
|
Varinder2110
| 2025-06-19T08:52:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T08:06:59Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# 794A1F43 1Eb9 4Fbb 9178 26A04Bc30F0C
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/794a1f43-1eb9-4fbb-9178-26a04bc30f0c/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/794a1f43-1eb9-4fbb-9178-26a04bc30f0c', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 12
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/794a1f43-1eb9-4fbb-9178-26a04bc30f0c/discussions) to add images that show off what you’ve made with this LoRA.
|
cullinangetter/lf_model
|
cullinangetter
| 2025-06-19T08:49:31Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"minimind",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:00:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Mistral-7B-v0.1-headlines-seed-28-2025-06-19
|
morturr
| 2025-06-19T08:48:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T08:48:34Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-headlines-seed-28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-headlines-seed-28-2025-06-19
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
phospho-app/OpenLabBA-gr00t-lego_in_box_v4-3olw3vbamj
|
phospho-app
| 2025-06-19T08:48:10Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-19T07:51:00Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [OpenLabBA/lego_in_box_v4](https://huggingface.co/datasets/OpenLabBA/lego_in_box_v4)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
phospho-app/Kai-13-gr00t-example_dataset_v2-se6pf
|
phospho-app
| 2025-06-19T08:48:05Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-19T08:38:18Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [Kai-13/example_dataset_v2](https://huggingface.co/datasets/Kai-13/example_dataset_v2)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
rahulsamant37/tiny-chatbot-dpo-coding-QA
|
rahulsamant37
| 2025-06-19T08:43:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:microsoft/CodeGPT-small-py",
"base_model:finetune:microsoft/CodeGPT-small-py",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T08:42:09Z |
---
base_model: microsoft/CodeGPT-small-py
library_name: transformers
model_name: tiny-chatbot-dpo-coding-QA
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for tiny-chatbot-dpo-coding-QA
This model is a fine-tuned version of [microsoft/CodeGPT-small-py](https://huggingface.co/microsoft/CodeGPT-small-py).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rahulsamant37/tiny-chatbot-dpo-coding-QA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
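A minimal sketch of that DPO setup with TRL is shown below; the toy preference dataset and hyperparameters are illustrative assumptions, not the exact configuration of this run.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "microsoft/CodeGPT-small-py"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers lack a pad token

# Toy preference data: DPO expects prompt / chosen / rejected triples.
train_dataset = Dataset.from_dict({
    "prompt": ["Write a function that reverses a string."],
    "chosen": ["def reverse(s):\n    return s[::-1]"],
    "rejected": ["def reverse(s):\n    return s"],
})

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="tiny-chatbot-dpo-coding-QA", beta=0.1),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```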
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zeblok/zeblok
|
zeblok
| 2025-06-19T08:42:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T13:36:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zerostratos/vi_litr_qwen3-0.6B
|
zerostratos
| 2025-06-19T08:42:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:40:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF
|
parkjw
| 2025-06-19T08:40:29Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"base_model:kakaocorp/kanana-1.5-8b-instruct-2505",
"base_model:quantized:kakaocorp/kanana-1.5-8b-instruct-2505",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-19T08:40:02Z |
---
language:
- en
- ko
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
model_id: kakaocorp/kanana-1.5-8b-instruct-2505
repo: kakaocorp/kanana-1.5-8b-instruct-2505
developers: Kanana LLM
training_regime: bf16 mixed precision
tags:
- llama-cpp
- gguf-my-repo
base_model: kakaocorp/kanana-1.5-8b-instruct-2505
---
# parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF
This model was converted to GGUF format from [`kakaocorp/kanana-1.5-8b-instruct-2505`](https://huggingface.co/kakaocorp/kanana-1.5-8b-instruct-2505) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kakaocorp/kanana-1.5-8b-instruct-2505) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF --hf-file kanana-1.5-8b-instruct-2505-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF --hf-file kanana-1.5-8b-instruct-2505-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF --hf-file kanana-1.5-8b-instruct-2505-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF --hf-file kanana-1.5-8b-instruct-2505-q8_0.gguf -c 2048
```
|
phospho-app/Selinaliu1030-gr00t-example_dataset_move_toast-kear8
|
phospho-app
| 2025-06-19T08:38:19Z | 0 | 0 | null |
[
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-19T08:37:32Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model. The traceback below ends in an `IndexError`: frame index 131 was requested from a video timestamp array of length 81, which points to a mismatch between one trajectory's recorded step count and its video frames.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1146, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 790, in get_data_by_modality
return self.get_video(trajectory_id, key, base_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 658, in get_video
video_timestamp = timestamp[step_indices]
~~~~~~~~~^^^^^^^^^^^^^^
IndexError: index 131 is out of bounds for axis 0 with size 81
0%| | 0/1080 [00:01<?, ?it/s]
```
## Training parameters:
- **Dataset**: [Selinaliu1030/example_dataset_move_toast](https://huggingface.co/datasets/Selinaliu1030/example_dataset_move_toast)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
sergiopaniego/gemma-3-4b-pt-object-detection
|
sergiopaniego
| 2025-06-19T08:37:50Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-17T15:04:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JohnDsue771/my-test-gov-mod
|
JohnDsue771
| 2025-06-19T08:37:46Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"region:us"
] | null | 2025-06-19T08:17:19Z |
# My Governance Fine-Tuned Mistral Model
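The repository's tags indicate safetensors weights in the Mistral architecture, so a standard transformers load should work; this sketch (including the prompt) is an assumption rather than author-documented usage.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "JohnDsue771/my-test-gov-mod"
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

prompt = "Summarize the key principles of good data governance."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```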
|
Humphery7/lora-test-continue
|
Humphery7
| 2025-06-19T08:37:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T18:54:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rahulsamant37/sft-tiny-chatbot-olx
|
rahulsamant37
| 2025-06-19T08:36:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/CodeGPT-small-py",
"base_model:finetune:microsoft/CodeGPT-small-py",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T08:35:36Z |
---
base_model: microsoft/CodeGPT-small-py
library_name: transformers
model_name: sft-tiny-chatbot-olx
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft-tiny-chatbot-olx
This model is a fine-tuned version of [microsoft/CodeGPT-small-py](https://huggingface.co/microsoft/CodeGPT-small-py).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rahulsamant37/sft-tiny-chatbot-olx", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
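For reference, a minimal TRL SFT setup consistent with the framework versions above might look like this sketch; the toy dataset and configuration are assumptions, not the actual training script.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_id = "microsoft/CodeGPT-small-py"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers lack a pad token

# Toy dataset in the plain-text format SFTTrainer accepts.
train_dataset = Dataset.from_dict({
    "text": ["### Question: How do I sell a used laptop quickly?\n"
             "### Answer: Post clear photos, list the specs, and price it slightly below comparable listings."],
})

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="sft-tiny-chatbot-olx"),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```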
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mzarev/Meta-Llama-3.1-8B-Instruct_finetuned_tulu-3-sft-personas-instruction-following_1750322003721
|
mzarev
| 2025-06-19T08:35:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:33:24Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mzarev
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
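Loading the fine-tune back through Unsloth for fast inference would look roughly like the sketch below; `max_seq_length` and 4-bit loading are illustrative choices, not settings documented by the author.
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mzarev/Meta-Llama-3.1-8B-Instruct_finetuned_tulu-3-sft-personas-instruction-following_1750322003721",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,    # illustrative
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path
```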
|
LandCruiser/sn29C1_1906_1
|
LandCruiser
| 2025-06-19T08:35:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T02:48:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/Selinaliu1030-gr00t-example_dataset_move_toast-7z8p6
|
phospho-app
| 2025-06-19T08:33:34Z | 0 | 0 | null |
[
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-19T08:31:21Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1146, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 790, in get_data_by_modality
return self.get_video(trajectory_id, key, base_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 658, in get_video
video_timestamp = timestamp[step_indices]
~~~~~~~~~^^^^^^^^^^^^^^
IndexError: index 131 is out of bounds for axis 0 with size 81
0%| | 0/2635 [00:03<?, ?it/s]
```
## Training parameters:
- **Dataset**: [Selinaliu1030/example_dataset_move_toast](https://huggingface.co/datasets/Selinaliu1030/example_dataset_move_toast)
- **Wandb run URL**: None
- **Epochs**: 5
- **Batch size**: 10
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
ash001/ray-train-zero-3-bloom-1B-v1
|
ash001
| 2025-06-19T08:33:10Z | 0 | 0 | null |
[
"bloom",
"license:apache-2.0",
"region:us"
] | null | 2025-06-15T03:40:59Z |
---
license: apache-2.0
---
|
mzarev/Meta-Llama-3.1-8B-Instruct_finetuned_tulu-3-sft-personas-instruction-following_1750321882726
|
mzarev
| 2025-06-19T08:32:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:31:23Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mzarev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Flickinshots/Cartpole-v1
|
Flickinshots
| 2025-06-19T08:26:35Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T08:26:26Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sciencialab/biblio-glutton-dbs
|
sciencialab
| 2025-06-19T08:24:19Z | 0 | 0 | null |
[
"Crossref",
"biblio-glutton",
"grobid",
"region:us"
] | null | 2025-06-19T06:20:58Z |
---
tags:
- Crossref
- biblio-glutton
- grobid
---
# Biblio-glutton database and index (2024-04 Crossref dump)
This repository contains the Biblio-glutton (https://github.com/kermitt2/biblio-glutton) databases and indexes.
Because the HF upload size is limited to a maximum of 20 GB, we have compressed the files into chunks.
To decompress them, 7-Zip is required.
The repository contains the following files:
- `biblio-glutton-index.7z.*`: contains the zipped Elasticsearch index and engine.
- `data/db`: contains the LMDB fast-storage databases:
  - `data/db/crossref.zip`: the Crossref dump (2025/04)
  - `data/db/hal.7z*`: the HAL identifiers
  - `data/db/pmid.7z*`: the PMID identifier mappings
  - `data/db/unpayWall.7z*`: the Unpaywall OA links (this comes from an old dump of 2018; we are planning to replace it with OpenAlex)
## Getting started
Assuming you are in `/home/user/glutton`
1. Clone the biblio-glutton application
```shell
git clone https://github.com/kermitt2/biblio-glutton
```
You should have biblio-glutton under a directory of the same name.
2. Clone this repository
```shell
git lfs install
git clone https://huggingface.co/sciencialab/biblio-glutton-dbs
```
3. Unpack the Index
```shell
7z x biblio-glutton-dbs/biblio-glutton-index.7z  # make sure to match the filename
```
You should have a new directory `biblio-glutton-index` organised as follows:
```
biblio-glutton-index
├── elastic
│   ├── elastico_singleNode
│   │   ├── config
│   │   ├── data
│   │   └── logs
│   └── elastico_singleNode.sh
└── elasticsearch-8.15.0
```
You need to edit `elastico_singleNode.sh` and the files under `elastico_singleNode/config/` to replace the `data` and `logs` absolute paths so they match your machine.
Then you can start the index by running:
```shell
sh elastic/elastico_singleNode.sh
```
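Once the node is up, you can sanity-check it; this assumes the default port 9200, so adjust if your config changes it:
```shell
curl "http://localhost:9200/_cluster/health?pretty"
```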
4. Unpack the Database
It's better to keep the data outside the biblio-glutton application, so moving it to your root `/home/user/glutton/data` could be a solution.
```shell
mkdir -p data && mv biblio-glutton-dbs/data/db data/
7z x data/db/*.7z  # here also make sure you match the filenames
```
Then you need to update the `biblio-glutton/config/config.yml` config file in the biblio-glutton application to match the database path.
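The exact keys depend on the biblio-glutton version; the sketch below is illustrative only, with key names assumed rather than taken from the shipped file, so verify them against the actual `config.yml`:
```yaml
# Illustrative only; verify the real key names in biblio-glutton/config/config.yml.
storage: /home/user/glutton/data/db   # path to the LMDB databases unpacked above
elastic:
  host: localhost:9200                # the Elasticsearch node started earlier
```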
|
soonil/test_gemma3
|
soonil
| 2025-06-19T08:18:43Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T08:18:41Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** soonil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cgg507/Valkyrie-v1-gptq
|
cgg507
| 2025-06-19T08:17:33Z | 0 | 0 | null |
[
"safetensors",
"nemotron-nas",
"custom_code",
"base_model:TheDrummer/Valkyrie-49B-v1",
"base_model:quantized:TheDrummer/Valkyrie-49B-v1",
"compressed-tensors",
"region:us"
] | null | 2025-06-19T05:54:02Z |
---
base_model:
- TheDrummer/Valkyrie-49B-v1
---
AWQ was locked to > sm80, so I built this GPTQ in the hope that it doesn't require the Marlin kernel. I also ran lm_eval to see what the damage is:
`vllm (pretrained=./Valkyrie-49B-v1-gptq,add_bos_token=true,trust_remote_code=true,max_model_len=16384), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto`
|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.008|± |0.0056|
| | |strict-match | 5|exact_match|↑ |0.000|± |0.0000|
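For reference, the results above correspond roughly to the following lm-evaluation-harness invocation (reconstructed from the recorded config string; the exact command may have differed):
```bash
lm_eval --model vllm \
  --model_args pretrained=./Valkyrie-49B-v1-gptq,add_bos_token=true,trust_remote_code=true,max_model_len=16384 \
  --tasks gsm8k --num_fewshot 5 --limit 250 --batch_size auto
```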
|
parkjw/kanana-1.5-8b-instruct-2505-Q4_K_M-GGUF
|
parkjw
| 2025-06-19T08:16:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"base_model:kakaocorp/kanana-1.5-8b-instruct-2505",
"base_model:quantized:kakaocorp/kanana-1.5-8b-instruct-2505",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-19T08:16:34Z |
---
language:
- en
- ko
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
model_id: kakaocorp/kanana-1.5-8b-instruct-2505
repo: kakaocorp/kanana-1.5-8b-instruct-2505
developers: Kanana LLM
training_regime: bf16 mixed precision
tags:
- llama-cpp
- gguf-my-repo
base_model: kakaocorp/kanana-1.5-8b-instruct-2505
---
# parkjw/kanana-1.5-8b-instruct-2505-Q4_K_M-GGUF
This model was converted to GGUF format from [`kakaocorp/kanana-1.5-8b-instruct-2505`](https://huggingface.co/kakaocorp/kanana-1.5-8b-instruct-2505) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kakaocorp/kanana-1.5-8b-instruct-2505) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q4_K_M-GGUF --hf-file kanana-1.5-8b-instruct-2505-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q4_K_M-GGUF --hf-file kanana-1.5-8b-instruct-2505-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q4_K_M-GGUF --hf-file kanana-1.5-8b-instruct-2505-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q4_K_M-GGUF --hf-file kanana-1.5-8b-instruct-2505-q4_k_m.gguf -c 2048
```
|
nnilayy/deap-arousal-multi-classification-Kfold-3
|
nnilayy
| 2025-06-19T08:14:24Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T08:14:20Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
vk888/nanoVLM
|
vk888
| 2025-06-19T08:12:16Z | 0 | 0 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-06-19T07:10:06Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("vk888/nanoVLM")
```
|
ramzanniaz331/lora_model_llama_3_2_3b_pretraining
|
ramzanniaz331
| 2025-06-19T08:11:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T04:59:37Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ramzanniaz331
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
str20tbl/orpheus3b-4bit-cy-en
|
str20tbl
| 2025-06-19T08:09:41Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T08:09:40Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** str20tbl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
meanjai/CartPole-v1
|
meanjai
| 2025-06-19T08:07:52Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T03:56:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sgonzalezygil/sd-finetuning-dreambooth-v16-1200
|
sgonzalezygil
| 2025-06-19T08:05:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T08:04:20Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
a-b-a/bert_XSS_v2_distilled
|
a-b-a
| 2025-06-19T08:02:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:a-b-a/bert_XSS_v2_distilled",
"base_model:finetune:a-b-a/bert_XSS_v2_distilled",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T07:59:36Z |
---
library_name: transformers
license: apache-2.0
base_model: a-b-a/bert_XSS_v2_distilled
tags:
- generated_from_trainer
model-index:
- name: bert_XSS_v2_distilled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_XSS_v2_distilled
This model is a fine-tuned version of [a-b-a/bert_XSS_v2_distilled](https://huggingface.co/a-b-a/bert_XSS_v2_distilled) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989-alignment-1
|
veddhanth
| 2025-06-19T08:02:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-19T07:52:52Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989-alignment-1
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989-alignment-1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a realistic portrait of sks face` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989-alignment-1/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
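The snippet above is still a TODO in the generated card; until it is filled in, the standard diffusers SDXL LoRA flow is a reasonable sketch (it assumes the base model, training VAE, and trigger prompt stated in this card, and is not the authors' verified code):
```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

# The card notes training used the madebyollin/sdxl-vae-fp16-fix VAE.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989-alignment-1"
)
image = pipeline("a realistic portrait of sks face").images[0]
```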
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
thekarthikeyansekar/thirukkural-multilingual-slm
|
thekarthikeyansekar
| 2025-06-19T08:00:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:00:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
makataomu/q-Taxi-v3-v1
|
makataomu
| 2025-06-19T07:56:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T07:56:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

model = load_from_hub(repo_id="makataomu/q-Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
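`load_from_hub` is a helper defined in the course notebooks rather than a library import; a minimal equivalent looks like this:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table dict from the Hub (mirrors the course helper)."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```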
|
a-b-a/bert_XSS_v1_enhanced
|
a-b-a
| 2025-06-19T07:56:10Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:a-b-a/bert_XSS_v1_enhanced",
"base_model:finetune:a-b-a/bert_XSS_v1_enhanced",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T07:47:44Z |
---
library_name: transformers
license: apache-2.0
base_model: a-b-a/bert_XSS_v1_enhanced
tags:
- generated_from_trainer
model-index:
- name: bert_XSS_v1_enhanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_XSS_v1_enhanced
This model is a fine-tuned version of [a-b-a/bert_XSS_v1_enhanced](https://huggingface.co/a-b-a/bert_XSS_v1_enhanced) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Vipplav/telugu-bpe-23k
|
Vipplav
| 2025-06-19T07:55:51Z | 0 | 0 |
transformers
|
[
"transformers",
"telugu",
"sentencepiece",
"tokenizer",
"bpe",
"pretraining",
"te",
"dataset:ai4bharat/sangraha",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T05:40:47Z |
---
language: te
tags:
- telugu
- sentencepiece
- tokenizer
- bpe
- pretraining
license: apache-2.0
datasets:
- ai4bharat/sangraha
library_name: transformers
---
# 🔡 Telugu BPE Tokenizer (23k vocab) — Vipplav
A Byte-Pair Encoding (BPE) tokenizer trained on over **3.4 lakh cleaned Telugu text keys** from the [AI4Bharat Sangraha dataset](https://huggingface.co/datasets/ai4bharat/sangraha) and other open sources. This tokenizer is ideal for **pretraining or fine-tuning Telugu language models**.
---
## 📌 Highlights
- **Tokenizer Type**: SentencePiece BPE
- **Vocabulary Size**: 23,000
- **Character Coverage**: 100% Telugu script
- **Library**: 🤗 `transformers` + `sentencepiece`
- **Special Tokens**:
- `<unk>` — Unknown token
- `<pad>` — Padding
- `<s>` — Start of sequence
- `</s>` — End of sequence
- `\n`, `₹`, `•`, `-` — User-defined symbols preserved during training
---
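For context, a tokenizer with the settings listed above could be trained with the `sentencepiece` trainer roughly as follows; the corpus path is hypothetical and the exact flags used by the authors are not published:
```python
import sentencepiece as spm

# Illustrative training call matching the card's stated settings;
# "telugu_corpus.txt" is a hypothetical cleaned-corpus file.
spm.SentencePieceTrainer.train(
    input="telugu_corpus.txt",
    model_prefix="telugu_bpe_23k",
    model_type="bpe",
    vocab_size=23000,
    character_coverage=1.0,
    # The card also lists "\n" as preserved; handling newlines may need extra normalization flags.
    user_defined_symbols=["₹", "•", "-"],
)
```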
## ✨ Example Usage
```python
from transformers import T5Tokenizer
# Load tokenizer from Hugging Face Hub
tokenizer = T5Tokenizer.from_pretrained("Vipplav/telugu-bpe-23k")
# Sample Telugu input
text = "పరిశీలన తేదీ: 15-06-2025"
# Tokenize the input
tokens = tokenizer.tokenize(text)
# Decode tokens back to text
decoded = tokenizer.decode(tokenizer.convert_tokens_to_ids(tokens), skip_special_tokens=True)
# Display results
print(f"\n📥 Input : {text}")
print(f"🔤 Tokens : {tokens}")
print(f"📝 Decoded : {decoded}")
```
## 📜 Citation
If you use this tokenizer, please cite:
**APA:**
> Vipplav AI (2025). *Telugu BPE Tokenizer (23k vocab)*. Hugging Face. https://huggingface.co/Vipplav/telugu-bpe-23k
> AI4Bharat. (2023). *Sangraha: A Large-Scale Multidomain Corpus for Indian Languages*. Hugging Face Datasets. https://huggingface.co/datasets/ai4bharat/sangraha
**BibTeX:**
```bibtex
@misc{vipplav_telugu_tokenizer,
author = {Vipplav AI},
title = {Telugu BPE Tokenizer (23k vocab)},
year = {2025},
url = {https://huggingface.co/Vipplav/telugu-bpe-23k}
}
@dataset{sangraha2023,
author = {AI4Bharat},
title = {Sangraha Dataset},
year = {2023},
url = {https://huggingface.co/datasets/ai4bharat/sangraha}
}
```
|
FormlessAI/f3e89571-5c36-4c83-87db-9b0fa9e022d6
|
FormlessAI
| 2025-06-19T07:52:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T04:20:52Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: f3e89571-5c36-4c83-87db-9b0fa9e022d6
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for f3e89571-5c36-4c83-87db-9b0fa9e022d6
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/f3e89571-5c36-4c83-87db-9b0fa9e022d6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/jfinztcj)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
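The training script is not included here; a minimal TRL GRPO run looks roughly like the sketch below (the reward function and dataset are placeholders from the TRL docs, not the ones used for this model):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset and reward from the TRL docs; the actual setup is not published here.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters.
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",   # the base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="f3e89571-grpo"),  # illustrative output directory
    train_dataset=dataset,
)
trainer.train()
```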
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sgonzalezygil/sd-finetuning-dreambooth-v16-1000
|
sgonzalezygil
| 2025-06-19T07:49:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T07:47:39Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_mighty_wolf
|
Oceans-ID
| 2025-06-19T07:36:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am deadly mighty wolf",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T09:17:36Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_mighty_wolf
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am deadly mighty wolf
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_mighty_wolf
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Oceans-ID/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-deadly_mighty_wolf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pepematta/josematta
|
pepematta
| 2025-06-19T07:34:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T05:33:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JSM
---
# Josematta
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JSM` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "JSM",
"lora_weights": "https://huggingface.co/pepematta/josematta/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pepematta/josematta', weight_name='lora.safetensors')
image = pipeline('JSM').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/pepematta/josematta/discussions) to add images that show off what you’ve made with this LoRA.
|
eunsour/Qwen3-1.7B-Q8_0-GGUF
|
eunsour
| 2025-06-19T07:33:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-1.7B",
"base_model:quantized:unsloth/Qwen3-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T07:32:49Z |
---
base_model: unsloth/Qwen3-1.7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** eunsour
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-1.7B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
samratbd/halarfala
|
samratbd
| 2025-06-19T07:29:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T07:29:13Z |
---
license: apache-2.0
---
|
asimov-law/gemma-guard-4b-latest
|
asimov-law
| 2025-06-19T07:27:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T07:27:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hung200504/codter
|
hung200504
| 2025-06-19T07:26:10Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T07:26:09Z |
---
license: apache-2.0
---
|
nnilayy/dreamer-dominance-multi-classification-Kfold-2
|
nnilayy
| 2025-06-19T07:26:10Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T07:26:08Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
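As a usage sketch: models pushed with this mixin are reloaded through the same class they were saved from. The `MyModel` definition below is a hypothetical stand-in, since the actual architecture is not published here.
```python
# hypothetical sketch -- the real model class for this repo is not published
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.layer = nn.Linear(hidden_size, hidden_size)

# with the matching class definition, the weights load straight from the Hub
model = MyModel.from_pretrained("nnilayy/dreamer-dominance-multi-classification-Kfold-2")
```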
|
khanhdang/gemma_test
|
khanhdang
| 2025-06-19T07:26:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T06:58:18Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xiaoyuanliu/Qwen2.5-7B-Instruct-DeepMath10K-PPO
|
xiaoyuanliu
| 2025-06-19T07:25:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T07:18:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
a-b-a/bert_XSS_v1
|
a-b-a
| 2025-06-19T07:23:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T07:11:26Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_XSS_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_XSS_v1
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
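No official snippet is provided, but since the repo is tagged `text-classification`, a minimal usage sketch would be (the label names and their meaning are not documented, so treat the outputs as raw labels):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="a-b-a/bert_XSS_v1")
print(classifier('<script>alert("xss")</script>'))
```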
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
xiaohutu96/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_slender_emu
|
xiaohutu96
| 2025-06-19T07:19:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am savage slender emu",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T18:44:07Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_slender_emu
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am savage slender emu
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_slender_emu
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xiaohutu96/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_slender_emu", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pream0322/whisper-small-hi
|
pream0322
| 2025-06-19T07:16:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-19T07:15:39Z |
---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
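No official snippet is provided; a minimal transcription sketch using the standard `transformers` ASR pipeline would be:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="pream0322/whisper-small-hi")
# the audio path below is a placeholder for any local Hindi speech file
print(asr("sample_hindi.wav")["text"])
```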
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nnilayy/deap-dominance-multi-classification-Kfold-2
|
nnilayy
| 2025-06-19T07:14:51Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T07:14:47Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
RedbeardNZ/bigvgan_v2_44khz_128band_512x
|
RedbeardNZ
| 2025-06-19T07:14:17Z | 0 | 0 |
PyTorch
|
[
"PyTorch",
"neural-vocoder",
"audio-generation",
"audio-to-audio",
"arxiv:2206.04658",
"license:mit",
"region:us"
] |
audio-to-audio
| 2025-06-19T07:14:17Z |
---
license: mit
license_link: https://huggingface.co/nvidia/BigVGAN/blob/main/LICENSE
tags:
- neural-vocoder
- audio-generation
library_name: PyTorch
pipeline_tag: audio-to-audio
---
## BigVGAN: A Universal Neural Vocoder with Large-Scale Training
#### Sang-gil Lee, Wei Ping, Boris Ginsburg, Bryan Catanzaro, Sungroh Yoon
[[Paper]](https://arxiv.org/abs/2206.04658) - [[Code]](https://github.com/NVIDIA/BigVGAN) - [[Showcase]](https://bigvgan-demo.github.io/) - [[Project Page]](https://research.nvidia.com/labs/adlr/projects/bigvgan/) - [[Weights]](https://huggingface.co/collections/nvidia/bigvgan-66959df3d97fd7d98d97dc9a) - [[Demo]](https://huggingface.co/spaces/nvidia/BigVGAN)
[](https://paperswithcode.com/sota/speech-synthesis-on-libritts?p=bigvgan-a-universal-neural-vocoder-with-large)
<center><img src="https://user-images.githubusercontent.com/15963413/218609148-881e39df-33af-4af9-ab95-1427c4ebf062.png" width="800"></center>
## News
- **Jul 2024 (v2.3):**
- General refactoring and code improvements for better readability.
- Fully fused CUDA kernel for the anti-aliased activation (upsampling + activation + downsampling), with an inference speed benchmark.
- **Jul 2024 (v2.2):** The repository now includes an interactive local demo using gradio.
- **Jul 2024 (v2.1):** BigVGAN is now integrated with the 🤗 Hugging Face Hub, with easy access to inference using pretrained checkpoints. We also provide an interactive demo on Hugging Face Spaces.
- **Jul 2024 (v2):** We release BigVGAN-v2 along with pretrained checkpoints. Below are the highlights:
- Custom CUDA kernel for inference: we provide a fused upsampling + activation kernel written in CUDA for accelerated inference. Our tests show a 1.5-3x speedup on a single A100 GPU.
- Improved discriminator and loss: BigVGAN-v2 is trained using a multi-scale sub-band CQT discriminator and a multi-scale mel spectrogram loss.
- Larger training data: BigVGAN-v2 is trained using datasets containing diverse audio types, including speech in multiple languages, environmental sounds, and instruments.
- We provide pretrained checkpoints of BigVGAN-v2 using diverse audio configurations, supporting up to 44 kHz sampling rate and 512x upsampling ratio.
## Installation
This repository contains pretrained BigVGAN checkpoints with easy access to inference and additional `huggingface_hub` support.
If you are interested in training the model and additional functionalities, please visit the official GitHub repository for more information: https://github.com/NVIDIA/BigVGAN
```shell
git lfs install
git clone https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_512x
```
## Usage
The example below shows how to use BigVGAN: load the pretrained generator from the Hugging Face Hub, compute a mel spectrogram from an input waveform, and synthesize a waveform from that mel spectrogram.
```python
device = 'cuda'
import torch
import bigvgan
import librosa
from meldataset import get_mel_spectrogram
# instantiate the model. You can optionally set use_cuda_kernel=True for faster inference.
model = bigvgan.BigVGAN.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x', use_cuda_kernel=False)
# remove weight norm in the model and set to eval mode
model.remove_weight_norm()
model = model.eval().to(device)
# load wav file and compute mel spectrogram
wav_path = '/path/to/your/audio.wav'
wav, sr = librosa.load(wav_path, sr=model.h.sampling_rate, mono=True) # wav is np.ndarray with shape [T_time] and values in [-1, 1]
wav = torch.FloatTensor(wav).unsqueeze(0) # wav is FloatTensor with shape [B(1), T_time]
# compute mel spectrogram from the ground truth audio
mel = get_mel_spectrogram(wav, model.h).to(device) # mel is FloatTensor with shape [B(1), C_mel, T_frame]
# generate waveform from mel
with torch.inference_mode():
wav_gen = model(mel) # wav_gen is FloatTensor with shape [B(1), 1, T_time] and values in [-1, 1]
wav_gen_float = wav_gen.squeeze(0).cpu() # wav_gen is FloatTensor with shape [1, T_time]
# you can convert the generated waveform to 16 bit linear PCM
wav_gen_int16 = (wav_gen_float * 32767.0).numpy().astype('int16') # wav_gen is now np.ndarray with shape [1, T_time] and int16 dtype
```
## Using Custom CUDA Kernel for Synthesis
You can apply the fast CUDA inference kernel by using a parameter `use_cuda_kernel` when instantiating BigVGAN:
```python
import bigvgan
model = bigvgan.BigVGAN.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x', use_cuda_kernel=True)
```
When applied for the first time, it builds the kernel using `nvcc` and `ninja`. If the build succeeds, the kernel is saved to `alias_free_activation/cuda/build` and the model automatically loads it. The codebase has been tested with CUDA `12.1`.
Please make sure that both `nvcc` and `ninja` are installed on your system and that the CUDA version `nvcc` reports matches the one your PyTorch build was compiled with.
For detail, see the official GitHub repository: https://github.com/NVIDIA/BigVGAN?tab=readme-ov-file#using-custom-cuda-kernel-for-synthesis
## Pretrained Models
We provide the [pretrained models on Hugging Face Collections](https://huggingface.co/collections/nvidia/bigvgan-66959df3d97fd7d98d97dc9a).
One can download the checkpoints of the generator weight (named `bigvgan_generator.pt`) and its discriminator/optimizer states (named `bigvgan_discriminator_optimizer.pt`) within the listed model repositories.
| Model Name | Sampling Rate | Mel band | fmax | Upsampling Ratio | Params | Dataset | Steps | Fine-Tuned |
|:--------------------------------------------------------------------------------------------------------:|:-------------:|:--------:|:-----:|:----------------:|:------:|:--------------------------:|:-----:|:----------:|
| [bigvgan_v2_44khz_128band_512x](https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_512x) | 44 kHz | 128 | 22050 | 512 | 122M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_44khz_128band_256x](https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_256x) | 44 kHz | 128 | 22050 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_24khz_100band_256x](https://huggingface.co/nvidia/bigvgan_v2_24khz_100band_256x) | 24 kHz | 100 | 12000 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_22khz_80band_256x](https://huggingface.co/nvidia/bigvgan_v2_22khz_80band_256x) | 22 kHz | 80 | 11025 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_22khz_80band_fmax8k_256x](https://huggingface.co/nvidia/bigvgan_v2_22khz_80band_fmax8k_256x) | 22 kHz | 80 | 8000 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_24khz_100band](https://huggingface.co/nvidia/bigvgan_24khz_100band) | 24 kHz | 100 | 12000 | 256 | 112M | LibriTTS | 5M | No |
| [bigvgan_base_24khz_100band](https://huggingface.co/nvidia/bigvgan_base_24khz_100band) | 24 kHz | 100 | 12000 | 256 | 14M | LibriTTS | 5M | No |
| [bigvgan_22khz_80band](https://huggingface.co/nvidia/bigvgan_22khz_80band) | 22 kHz | 80 | 8000 | 256 | 112M | LibriTTS + VCTK + LJSpeech | 5M | No |
| [bigvgan_base_22khz_80band](https://huggingface.co/nvidia/bigvgan_base_22khz_80band) | 22 kHz | 80 | 8000 | 256 | 14M | LibriTTS + VCTK + LJSpeech | 5M | No |
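As a sketch, an individual checkpoint file from any of the repositories above can also be fetched programmatically with `huggingface_hub`:
```python
from huggingface_hub import hf_hub_download

# download the generator weights for this model to the local HF cache
generator_ckpt = hf_hub_download(
    repo_id="nvidia/bigvgan_v2_44khz_128band_512x",
    filename="bigvgan_generator.pt",
)
print(generator_ckpt)  # local path of the downloaded checkpoint
```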
|
Velkey-J/bert-finetuned-ner-combined-domain-spec
|
Velkey-J
| 2025-06-19T07:14:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-19T07:13:14Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner-combined-domain-spec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-combined-domain-spec
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
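No official snippet is provided; since the repo is tagged `token-classification`, a minimal NER sketch would be (the entity label set is not documented):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Velkey-J/bert-finetuned-ner-combined-domain-spec",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Angela Merkel visited the Charité hospital in Berlin."))
```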
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
LandCruiser/sn29C1_1906_3
|
LandCruiser
| 2025-06-19T07:11:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T02:48:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HANI-LAB/Med-REFL-Huatuo-o1-8B-lora
|
HANI-LAB
| 2025-06-19T07:08:39Z | 0 | 2 | null |
[
"safetensors",
"medical",
"medical-reasoning",
"lora",
"dpo",
"reflection",
"text-generation",
"en",
"arxiv:2506.13793",
"base_model:FreedomIntelligence/HuatuoGPT-o1-8B",
"base_model:adapter:FreedomIntelligence/HuatuoGPT-o1-8B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-10T12:34:41Z |
---
license: apache-2.0
language:
- en
base_model:
- FreedomIntelligence/HuatuoGPT-o1-8B
pipeline_tag: text-generation
tags:
- medical
- medical-reasoning
- lora
- dpo
- reflection
---
<div align="center">
<h1>
Med-REFL-Huatuo-o1-8B-lora
</h1>
</div>
<div align="center">
<a href="https://github.com/TianYin123/Med-REFL" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2506.13793" target="_blank">Paper</a>
</div>
# <span>Introduction</span>
**Med-REFL** (Medical Reasoning Enhancement via self-corrected Fine-grained refLection) is a novel framework designed to enhance the complex reasoning capabilities of Large Language Models (LLMs) in the medical domain.
Instead of focusing solely on the final answer, Med-REFL improves the model's intermediate reasoning process. It leverages a Tree-of-Thought (ToT) methodology to explore diverse reasoning paths and automatically constructs Direct Preference Optimization (DPO) data. This trains the model to identify and correct its own reasoning errors, leading to more accurate and trustworthy outputs.
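For intuition, DPO data of this kind is typically a set of (prompt, chosen, rejected) triples; the record below is purely illustrative and not drawn from the actual Med-REFL data:
```python
# illustrative only -- not an actual Med-REFL training record
dpo_example = {
    "prompt": "A 54-year-old man presents with crushing chest pain ...",
    # the preferred path contains an explicit self-correction step
    "chosen": "Step 2 assumed pericarditis, but the ECG contradicts this; "
              "re-checking, ST elevation in II/III/aVF indicates an inferior MI ...",
    # the rejected path carries the uncorrected reasoning error forward
    "rejected": "The ECG is unremarkable, so this is likely pericarditis ...",
}
```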
This repository contains the LoRA weights produced by the Med-REFL framework for various base models.
# <span>Performance</span>
| Domain | Benchmark | Original | **+ Med-REFL** |
| :--- | :--- | :--- | :--- |
| **In-Domain** | MedQA-USMLE | 69.59 | **73.72** <span style="color: #2E8B57; font-size: small;">(+4.13)</span> |
| **Out-of-Domain**| MedMCQA | 62.13 | **64.66** <span style="color: #2E8B57; font-size: small;">(+2.53)</span> |
| **Out-of-Domain**| GPQA (Med+) | 50.67 | **56.80** <span style="color: #2E8B57; font-size: small;">(+6.13)</span> |
| **Out-of-Domain**| MMLU-Pro (Med+) | 61.87 | **64.97** <span style="color: #2E8B57; font-size: small;">(+3.10)</span> |
# <span>Available Weights</span>
The Med-REFL LoRA weights can be applied to the following base models to enhance their medical reasoning abilities.
| LoRA for Base Model | Backbone | Hugging Face Link |
| :--- | :--- | :--- |
| **Med-REFL for Llama-3.1-8B** | Llama-3.1-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Llama-3.1-8B-lora) |
| **Med-REFL for Qwen2.5-7B** | Qwen2.5-7B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Qwen2.5-7B-lora) |
| **Med-REFL for Huatuo-o1-8B** | Huatuo-o1-8b | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Huatuo-o1-8B-lora) |
| **Med-REFL for MedReason-8B**| MedReason-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-MedReason-8B-lora) |
# <span>Usage</span>
You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm). For more usage examples, please refer to our GitHub page.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Define the paths for the base model and your LoRA adapter on the Hugging Face Hub
base_model_path = "FreedomIntelligence/HuatuoGPT-o1-8B"
lora_path = "HANI-LAB/Med-REFL-Huatuo-o1-8B-lora/huatuo-o1-Med-REFL-LoraAdapter"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
base_model_path,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Load the LoRA adapter on top of the base model
# (call model = model.merge_and_unload() afterwards if you want the weights merged)
model = PeftModel.from_pretrained(base_model, lora_path)
# Prepare the prompt
system_prompt = '''You are a helpful medical expert specializing in USMLE exam questions, and your task is to answer a multi-choice medical question. Please first think step-by-step and then choose the answer from the provided options. Your responses will be used for research purposes only, so please have a definite answer.\nProvide your response in the following JSON format:\n{"reason": "Step-by-step explanation of your thought process","answer": "Chosen answer from the given options"}\n'''
user_prompt = "A 67-year-old man with transitional cell carcinoma of the bladder comes to the physician because of a 2-day history of ringing sensation in his ear. He received this first course of neoadjuvant chemotherapy 1 week ago. Pure tone audiometry shows a sensorineural hearing loss of 45 dB. The expected beneficial effect of the drug that caused this patient's symptoms is most likely due to which of the following actions?\nOptions:\nA: Inhibition of thymidine synthesis\nB: Inhibition of proteasome\nC: Hyperstabilization of microtubules\nD: Generation of free radicals\nE: Cross-linking of DNA"
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt},
]
# Convert the formatted prompt into input tensors
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
# Generate the response
outputs = model.generate(
input_ids,
max_new_tokens=4096,
do_sample=True,
temperature=0.2,
top_p=0.7,
repetition_penalty=1.1
)
# Decode and print the generated text
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
# <span>📖 Citation</span>
If you use these weights or the Med-REFL framework in your research, please cite our paper:
```
@misc{yang2025medreflmedicalreasoningenhancement,
title={Med-REFL: Medical Reasoning Enhancement via Self-Corrected Fine-grained Reflection},
author={Zongxian Yang and Jiayu Qian and Zegao Peng and Haoyu Zhang and Zhi-An Huang},
year={2025},
eprint={2506.13793},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2506.13793},
}
```
|
nnilayy/deap-arousal-multi-classification-Kfold-2
|
nnilayy
| 2025-06-19T07:07:01Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T07:06:59Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
YosefA/adfluence-intent-model
|
YosefA
| 2025-06-19T07:03:18Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"amharic",
"sentiment-analysis",
"intent-classification",
"code-switching",
"ethiopia",
"adfluence-ai",
"am",
"dataset:YosefA/Adflufence-ad-comments",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T13:26:52Z |
---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- amharic
- text-classification
- sentiment-analysis
- intent-classification
- code-switching
- ethiopia
- adfluence-ai
metrics:
- f1
model-index:
- name: Adfluence-AI Purchase Intent Classifier
results:
- task:
type: text-classification
dataset:
name: YosefA/Adflufence-ad-comments
type: YosefA/Adflufence-ad-comments
config: default
split: test
metrics:
- name: F1 (Weighted)
type: f1
value: 0.8101
datasets:
- YosefA/Adflufence-ad-comments
language:
- am
pipeline_tag: text-classification
---
# Adfluence-AI Purchase Intent Classifier
This model is a fine-tuned version of **[Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base)**, a powerful multilingual model with a strong understanding of African languages. It has been specifically trained to classify purchase intent in social media comments written in Amharic (Ge'ez script), Romanized Amharic, and mixed Amharic-English (code-switching).
This model was developed for the **Adfluence AI** project, which aims to evaluate the effectiveness of influencer marketing campaigns in Ethiopia.
It achieves a **weighted F1-score of 0.81** on the evaluation set.
## Model Description
The model takes a social media comment as input and outputs a prediction across five categories of purchase intent:
* `highly_likely`
* `likely`
* `neutral`
* `unlikely`
* `highly_unlikely`
This allows for a nuanced understanding of audience reaction beyond simple positive/negative sentiment, directly measuring the potential for user conversion.
## How to Use
You can use this model directly with the `pipeline` function from the `transformers` library.
```python
from transformers import pipeline
# Load the model from the Hub
model_id = "YosefA/adfluence-intent-model"
classifier = pipeline("text-classification", model=model_id)
# --- Example Usage ---
# Example 1: Amharic (Ge'ez Script) - Clear intent
comment_1 = "ዋው በጣም አሪፍ ነው! የት ነው ማግኘት የምችለው?"
# Translation: "Wow, this is great! Where can I find it?"
# Example 2: Mixed Amharic-English - Neutral/Questioning
comment_2 = "Hmm, interesting. Price-u endet new?"
# Translation: "Hmm, interesting. How is the price?"
# Example 3: Romanized Amharic - Negative
comment_3 = "Ene enja minim altemechegnim, quality yelelew neger new."
# Translation: "I don't know, I didn't like it at all, it's a thing with no quality."
results = classifier([comment_1, comment_2, comment_3])
for comment, result in zip([comment_1, comment_2, comment_3], results):
print(f"Comment: {comment}")
print(f"Prediction: {result['label']}, Score: {result['score']:.4f}\n")
# Expected Output:
# Comment: ዋው በጣም አሪፍ ነው! የት ነው ማግኘት የምችለው?
# Prediction: highly_likely, Score: 0.9851
#
# Comment: Hmm, interesting. Price-u endet new?
# Prediction: neutral, Score: 0.9214
#
# Comment: Ene enja minim altemechegnim, quality yelelew neger new.
# Prediction: highly_unlikely, Score: 0.9902
```
# Intended Uses & Limitations
### Intended Use
This model is intended to be used as a backend component for the Adfluence AI platform. Its primary purpose is to analyze user comments on social media advertisements (e.g., on Instagram, Facebook, TikTok) to gauge audience purchase intent and provide campaign performance metrics.
### Limitations
* **Simulated Data:** The model is trained on a high-quality simulated dataset, not on live social media data. While designed to reflect real-world usage, performance may vary on raw, unsanitized data.
* **Domain Specificity:** The source data was derived from product reviews (specifically for electronics). The model's performance may be strongest in the e-commerce/product domain and might require further fine-tuning for vastly different domains like services, events, or fashion.
* **Language Scope:** The model only understands Amharic and English. It has not been trained on other Ethiopian languages like Tigrinya, Oromo, etc.
---
# Training and Evaluation Data
This model was fine-tuned on the custom `YosefA/Adflufence-ad-comments` dataset.
The dataset was created through the following process:
* **Source:** Started with ~3,180 English product reviews from an [Amazon reviews dataset](https://huggingface.co/datasets/hugginglearners/amazon-reviews-sentiment-analysis).
* **Transformation:** Each review was programmatically rephrased and translated into a simulated social media comment using Google's Gemini Flash.
* **Stylization:** Comments were generated in three styles to mimic real-world Ethiopian user behavior:
* Amharic (Ge’ez script)
* Romanized Amharic
* Mixed Amharic-English (Code-Switching)
* **Enrichment:** Comments were styled with emojis, slang, and informal sentence structures.
* **Labeling:** Each comment was assigned a purchase intent label mapped from the original star rating of the source review (a plausible mapping is sketched below).
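The exact rating-to-label thresholds are not documented in this card, so treat the mapping below purely as an illustration:
```python
# hypothetical star-rating -> intent-label mapping (not documented in the card)
RATING_TO_INTENT = {
    5: "highly_likely",
    4: "likely",
    3: "neutral",
    2: "unlikely",
    1: "highly_unlikely",
}
```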
---
# Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training:
* `learning_rate`: 2e-05
* `train_batch_size`: 16
* `eval_batch_size`: 16
* `seed`: 42
* `optimizer`: AdamW with betas=(0.9,0.999) and epsilon=1e-08
* `lr_scheduler_type`: linear
* `num_epochs`: 3
### Training Results
The model achieved its best performance at the end of Epoch 2.
| Training Loss | Epoch | Step | Validation Loss | F1 (Weighted) |
| :------------ | :---- | :--- | :-------------- | :------------ |
| No log | 1.0 | 160 | 0.5001 | 0.7852 |
| No log | 2.0 | 320 | 0.4316 | 0.8101 |
| No log | 3.0 | 480 | 0.4281 | 0.8063 |
---
# Framework Versions
* **Transformers** 4.41.2
* **Pytorch** 2.3.0+cu121
* **Datasets** 2.19.0
* **Tokenizers** 0.19.1
|
NanEi/llama-3.2-3b-it-10K-Ecommerce-NEEK-ChatBot
|
NanEi
| 2025-06-19T07:01:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T07:00:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnilayy/deap-valence-multi-classification-Kfold-2
|
nnilayy
| 2025-06-19T07:01:43Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T07:01:41Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
thanhsc02/gemma-12b-it-lora-adapter_1000-longqa-9kself-prompt-3-full
|
thanhsc02
| 2025-06-19T07:01:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"region:us"
] | null | 2025-06-19T07:00:39Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
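In the absence of an official snippet, a minimal sketch for loading this LoRA adapter with PEFT (assuming the standard adapter layout) is:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-3-12b-it-unsloth-bnb-4bit"
adapter_id = "thanhsc02/gemma-12b-it-lora-adapter_1000-longqa-9kself-prompt-3-full"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter
```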
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
whitedevil0089devil/Roberta_Base
|
whitedevil0089devil
| 2025-06-19T07:00:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"question-answering",
"pytorch",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T10:01:14Z |
---
license: apache-2.0
base_model: roberta-base
tags:
- text-classification
- question-answering
- roberta
- pytorch
- transformers
language:
- en
pipeline_tag: text-classification
---
# Roberta_Base
This is a fine-tuned RoBERTa model for question-answering classification tasks.
## Model Details
- **Base Model**: roberta-base
- **Model Type**: Sequence Classification
- **Language**: English
- **License**: Apache 2.0
## Model Information
- **Number of Classes**: 5
- **Classification Type**: grouped_classification
- **Class Names**: Empty, Word, Short, Medium, Long
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('whitedevil0089devil/Roberta_Base')
model = AutoModelForSequenceClassification.from_pretrained('whitedevil0089devil/Roberta_Base')
# Example usage
question = "Your question here"
inputs = tokenizer(question, return_tensors="pt", truncation=True, padding=True, max_length=384)
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(outputs.logits, dim=-1).item()
confidence = predictions[0][predicted_class].item()
print(f"Predicted class: {predicted_class}")
print(f"Confidence: {confidence:.4f}")
```
## Training Details
This model was fine-tuned using:
- **Framework**: PyTorch + Transformers
- **Optimization**: AdamW with learning rate scheduling
- **Training Strategy**: Early stopping with validation monitoring
- **Hardware**: Trained on Google Colab (T4 GPU)
## Intended Use
This model is designed for question-answering classification tasks. It can be used to:
- Classify questions into predefined categories
- Provide automated responses based on question classification
- Support Q&A systems and chatbots
## Limitations
- Model performance depends on the similarity between training data and inference data
- May not generalize well to domains significantly different from training data
- Classification accuracy may vary based on question complexity and length
## Citation
If you use this model, please cite:
```
@misc{roberta-qa-model,
title={Fine-tuned RoBERTa for Question-Answer Classification},
author={Your Name},
year={2024},
url={https://huggingface.co/whitedevil0089devil/Roberta_Base}
}
```
|
Ahmedshabana/medgemma-brain-cancer
|
Ahmedshabana
| 2025-06-19T06:58:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T04:33:10Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-brain-cancer
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-brain-cancer
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ahmedshabana/medgemma-brain-cancer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.2.1+cu118
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SRP-base-model-training/lang_id
|
SRP-base-model-training
| 2025-06-19T06:46:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T08:21:37Z |
```python
import fasttext
import issai_lang_id as lang_id
# load
model = fasttext.load_model("issai_language_identification_model.bin")
print(model.labels)
# get
sentence = "Coca Cola сатып алды"
label, probability = model.predict(sentence,k=2)
label =label[0].replace('__label__','')
label = lang_id.get(sentence,label,probability[0])
print(f"after Predicted language: {label}")
```
|
JunSotohigashi/scarlet-snowflake-593
|
JunSotohigashi
| 2025-06-19T06:32:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:16:52Z |
---
base_model: meta-llama/Llama-3.1-8B
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/scarlet-snowflake-593
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/scarlet-snowflake-593
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/scarlet-snowflake-593", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/qlzdgils)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc2ynnpx00phaqihesl8o4ak
|
BootesVoid
| 2025-06-19T06:25:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T06:25:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HOT
---
# Cmc2Ydb5U00P4Aqihhkdak7Ru_Cmc2Ynnpx00Phaqihesl8O4Ak
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HOT` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HOT",
"lora_weights": "https://huggingface.co/BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc2ynnpx00phaqihesl8o4ak/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc2ynnpx00phaqihesl8o4ak', weight_name='lora.safetensors')
image = pipeline('HOT').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc2ynnpx00phaqihesl8o4ak/discussions) to add images that show off what you’ve made with this LoRA.
|
JunSotohigashi/firm-cloud-590
|
JunSotohigashi
| 2025-06-19T06:25:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:15:52Z |
---
base_model: meta-llama/Llama-3.1-8B
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/firm-cloud-590
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/firm-cloud-590
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/firm-cloud-590", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/ep1zl7cm)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
YifanXu24/dl_hw_tensorf_model
|
YifanXu24
| 2025-06-19T06:24:38Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T06:03:24Z |
---
license: apache-2.0
---
|
KoichiYasuoka/modernbert-large-classical-chinese-ud-square
|
KoichiYasuoka
| 2025-06-19T06:24:35Z | 0 | 0 | null |
[
"pytorch",
"modernbert",
"classical chinese",
"literary chinese",
"ancient chinese",
"token-classification",
"pos",
"dependency-parsing",
"lzh",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/modernbert-large-classical-chinese",
"base_model:finetune:KoichiYasuoka/modernbert-large-classical-chinese",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2025-06-19T06:22:37Z |
---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/modernbert-large-classical-chinese
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "孟子見梁惠王"
---
# modernbert-large-classical-chinese-ud-square
## Model Description
This is a ModernBERT model pretrained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [modernbert-large-classical-chinese](https://huggingface.co/KoichiYasuoka/modernbert-large-classical-chinese) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/modernbert-large-classical-chinese-ud-square",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("孟子見梁惠王"))
```
|
lwl-uestc/QFFT-LIMO-7B
|
lwl-uestc
| 2025-06-19T06:23:20Z | 34 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"arxiv:2506.12860",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-18T12:16:12Z |
---
base_model: Qwen/Qwen2.5-7b-Instruct
library_name: transformers
license: cc-by-nc-4.0
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: limo_7b_noq
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# limo_7b_noq
This model is a fine-tuned version of Qwen2.5-7b-Instruct on the S1_QFFT dataset as described in the paper [QFFT, Question-Free Fine-Tuning for Adaptive Reasoning](https://huggingface.co/papers/2506.12860).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 1
- total_train_batch_size: 8
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.51.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
## 📖 Citation
```
@misc{liu2025qfft,
title={QFFT, Question-Free Fine-Tuning for Adaptive Reasoning},
author={Wanlong Liu and Junxiao Xu and Fei Yu and Yukang Lin and Ke Ji and Wenyu Chen and Yan Xu and Yasheng Wang and Lifeng Shang and Benyou Wang},
year={2025},
eprint={2506.12860},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.12860},
}
```
Code: https://github.com/AI-NEXT/LLaMA-Factory
|
JunSotohigashi/denim-gorge-591
|
JunSotohigashi
| 2025-06-19T06:22:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:16:03Z |
---
base_model: meta-llama/Llama-3.1-8B
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/denim-gorge-591
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/denim-gorge-591
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/denim-gorge-591", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/0xbtb6zn)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
JunSotohigashi/youthful-sun-580
|
JunSotohigashi
| 2025-06-19T06:22:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:adapter:elyza/Llama-3-ELYZA-JP-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:11:36Z |
---
base_model: elyza/Llama-3-ELYZA-JP-8B
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/youthful-sun-580
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/youthful-sun-580
This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/youthful-sun-580", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/a86dwdhc)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nnilayy/dreamer-arousal-multi-classification-Kfold-1
|
nnilayy
| 2025-06-19T06:22:03Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T06:22:01Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
phospho-app/Selinaliu1030-gr00t-example_dataset_move_toast-f2td9
|
phospho-app
| 2025-06-19T06:21:47Z | 0 | 0 | null |
[
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-19T06:19:16Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1146, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 790, in get_data_by_modality
return self.get_video(trajectory_id, key, base_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 658, in get_video
video_timestamp = timestamp[step_indices]
~~~~~~~~~^^^^^^^^^^^^^^
IndexError: index 131 is out of bounds for axis 0 with size 81
0%| | 0/1080 [00:03<?, ?it/s]
```
## Training parameters:
- **Dataset**: [Selinaliu1030/example_dataset_move_toast](https://huggingface.co/datasets/Selinaliu1030/example_dataset_move_toast)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
PaceKW/distilbert-base-multilingual-cased-multilabel-indonesian-hate-speech-modified
|
PaceKW
| 2025-06-19T06:21:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T06:15:22Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-multilabel-indonesian-hate-speech-modified
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-multilabel-indonesian-hate-speech-modified
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2439
- F1: 0.7806
- Roc Auc: 0.8645
- Accuracy: 0.6948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.2803 | 1.0 | 1317 | 0.2384 | 0.7019 | 0.8008 | 0.6059 |
| 0.202 | 2.0 | 2634 | 0.2207 | 0.7407 | 0.8407 | 0.6097 |
| 0.1594 | 3.0 | 3951 | 0.2222 | 0.7577 | 0.8472 | 0.6560 |
| 0.1203 | 4.0 | 5268 | 0.2510 | 0.7496 | 0.8351 | 0.6773 |
| 0.0851 | 5.0 | 6585 | 0.2439 | 0.7806 | 0.8645 | 0.6948 |
| 0.0638 | 6.0 | 7902 | 0.2796 | 0.7716 | 0.8579 | 0.7031 |
| 0.0468 | 7.0 | 9219 | 0.3120 | 0.7621 | 0.8587 | 0.6856 |
| 0.0314 | 8.0 | 10536 | 0.3311 | 0.7682 | 0.8631 | 0.6887 |
| 0.0219 | 9.0 | 11853 | 0.3377 | 0.7681 | 0.8577 | 0.6963 |
| 0.0184 | 10.0 | 13170 | 0.3596 | 0.7750 | 0.8630 | 0.6948 |
| 0.014 | 11.0 | 14487 | 0.3743 | 0.7674 | 0.8594 | 0.6887 |
| 0.0109 | 12.0 | 15804 | 0.3908 | 0.7697 | 0.8636 | 0.6879 |
| 0.0084 | 13.0 | 17121 | 0.4047 | 0.7695 | 0.8615 | 0.6963 |
| 0.0057 | 14.0 | 18438 | 0.4035 | 0.7671 | 0.8568 | 0.6963 |
| 0.0051 | 15.0 | 19755 | 0.4076 | 0.7708 | 0.8605 | 0.6963 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
JunSotohigashi/distinctive-cloud-583
|
JunSotohigashi
| 2025-06-19T06:21:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:tokyotech-llm/Llama-3.1-Swallow-8B-v0.2",
"base_model:adapter:tokyotech-llm/Llama-3.1-Swallow-8B-v0.2",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:13:03Z |
---
base_model: tokyotech-llm/Llama-3.1-Swallow-8B-v0.2
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/distinctive-cloud-583
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/distinctive-cloud-583
This model is a fine-tuned version of [tokyotech-llm/Llama-3.1-Swallow-8B-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.2) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/distinctive-cloud-583", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/46a8i1n5)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
DreadPoor/ForDeletion-Q4_K_M-GGUF
|
DreadPoor
| 2025-06-19T06:20:57Z | 0 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:DreadPoor/ForDeletion",
"base_model:quantized:DreadPoor/ForDeletion",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T06:20:34Z |
---
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
base_model: DreadPoor/ForDeletion
---
# DreadPoor/ForDeletion-Q4_K_M-GGUF
This model was converted to GGUF format from [`DreadPoor/ForDeletion`](https://huggingface.co/DreadPoor/ForDeletion) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DreadPoor/ForDeletion) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DreadPoor/ForDeletion-Q4_K_M-GGUF --hf-file fordeletion-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DreadPoor/ForDeletion-Q4_K_M-GGUF --hf-file fordeletion-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DreadPoor/ForDeletion-Q4_K_M-GGUF --hf-file fordeletion-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DreadPoor/ForDeletion-Q4_K_M-GGUF --hf-file fordeletion-q4_k_m.gguf -c 2048
```
|
JunSotohigashi/fresh-wind-581
|
JunSotohigashi
| 2025-06-19T06:20:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:adapter:elyza/Llama-3-ELYZA-JP-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:11:39Z |
---
base_model: elyza/Llama-3-ELYZA-JP-8B
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/fresh-wind-581
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/fresh-wind-581
This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/fresh-wind-581", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/si2u519z)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
TVRRaviteja/llama3.1-MentalChat-SFT
|
TVRRaviteja
| 2025-06-19T06:20:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T06:17:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArunKr/model
|
ArunKr
| 2025-06-19T06:20:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T06:02:26Z |
---
base_model: unsloth/qwen3-0.6b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ArunKr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-0.6b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LarryAIDraw/MagisaGranblue-Illustrious0_1-V1
|
LarryAIDraw
| 2025-06-19T06:18:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-19T06:17:53Z |
---
license: creativeml-openrail-m
---
|
freakyfractal/usertest
|
freakyfractal
| 2025-06-19T06:15:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-06-19T06:15:12Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Coinye_2021.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: apache-2.0
---
# usertest
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/freakyfractal/usertest/tree/main) them in the Files & versions tab.
|
quadcoders/deep-rl-course
|
quadcoders
| 2025-06-19T06:11:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T06:11:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.11 +/- 19.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Tun-Wellens/whisper-medium-lb-final-1e-5-cosine-nofreezing-without
|
Tun-Wellens
| 2025-06-19T06:03:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-18T22:05:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apriasmoro/fc08d115-d555-4674-864a-0dd0ff54f304
|
apriasmoro
| 2025-06-19T06:02:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b",
"base_model:adapter:unsloth/codegemma-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T05:53:22Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fc08d115-d555-4674-864a-0dd0ff54f304
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- 5313f4d1e8057633_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/fc08d115-d555-4674-864a-0dd0ff54f304
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 4
mlflow_experiment_name: /tmp/5313f4d1e8057633_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 25
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: efaf2747-93ba-4914-bbfb-4587efac813b
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: efaf2747-93ba-4914-bbfb-4587efac813b
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# fc08d115-d555-4674-864a-0dd0ff54f304
This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0122 | 1 | 1.0012 |
| 2.6887 | 0.5732 | 47 | 0.9802 |
| 0.638 | 1.1463 | 94 | 0.8698 |
| 1.3412 | 1.7195 | 141 | 0.8378 |
| 0.6638 | 2.2927 | 188 | 0.9116 |
| 0.3695 | 2.8659 | 235 | 0.8509 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_22_2_song_3_49
|
winnieyangwannan
| 2025-06-19T06:01:14Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T17:06:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_30_2_song_3_49 | winnieyangwannan | 2025-06-19T06:00:25Z | 98 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-02T17:06:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
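Until usage instructions are provided, here is a minimal sketch. The repository ID comes from this row's metadata; loading it as a conversational causal LM is an assumption based on the `text-generation` and `conversational` tags, not a documented usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID from this row's metadata; causal-LM loading is assumed
# from the "text-generation" pipeline tag.
model_id = "winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_30_2_song_3_49"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```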
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
[More Information Needed]
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|