| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| chainway9/blockassist-bc-untamed_quick_eel_1755255522 | chainway9 | 2025-08-15T11:27:02Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T11:26:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| samihaelmansouri/distilbert-base-uncased-finetuned-ner | samihaelmansouri | 2025-08-15T10:14:28Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-08-15T08:55:28Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0089
- Precision: 0.7143
- Recall: 0.7937
- F1: 0.7519
- Accuracy: 0.9968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough mapping to code is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
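For illustration only, these settings map roughly onto 🤗 `TrainingArguments` as sketched below; the output directory is a placeholder and the dataset/metric wiring is omitted.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters reported above; the
# output_dir is a placeholder, not the directory used in the original run.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```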
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.0261 | 0.0 | 0.0 | 0.0 | 0.9854 |
| No log | 2.0 | 76 | 0.0115 | 0.7162 | 0.8413 | 0.7737 | 0.9961 |
| No log | 3.0 | 114 | 0.0089 | 0.7143 | 0.7937 | 0.7519 | 0.9968 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
| Sayemahsjn/blockassist-bc-playful_feline_octopus_1755248377 | Sayemahsjn | 2025-08-15T09:18:33Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T09:18:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| pankajrajdeo/bond-embed-16L | pankajrajdeo | 2025-08-15T08:47:28Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1441905", "loss:CachedMultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2101.06983", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-08-15T07:54:31Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1441905
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: Treponema caused disease or disorder
sentences:
- bejel
- tumor of ureter
- debrisoquine, ultrarapid metabolism of
- source_sentence: B cell (antibody) deficiencies
sentences:
- distal phalanx of digit IV
- well-differentiated fetal adenocarcinoma of the lung
- deficiency of humoral immunity
- source_sentence: Elevated AdoHcy concentration
sentences:
- gepulste Abgabe
- Elevated circulating S-adenosyl-L-homocysteine concentration
- Frequently cries for no reason
- source_sentence: Isoelectric focusing of serum transferrin consistent with CDG type
II
sentences:
- Amblyomma aureolatum
- squamous cell carcinoma of the bile duct
- Abnormal isoelectric focusing of serum transferrin, type 2 pattern
- source_sentence: Light-chain amyloidosis
sentences:
- partial deletion of the long arm of chromosome X
- Teneria teneriensis
- amyloidosis primary systemic
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: owl ontology eval
type: owl_ontology_eval
metrics:
- type: cosine_accuracy@1
value: 0.6302799165287473
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8147801683816651
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8775275239260272
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9268187378570915
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6302799165287473
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27634261591230724
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17979420018709072
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09566812981218968
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6216929313281044
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8081120625554675
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8723585426111152
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9241442997289582
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7796907170635903
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7342337217921898
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.734065731352359
name: Cosine Map@100
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pankajrajdeo/bond-embed-16L")
# Run inference
sentences = [
'Light-chain amyloidosis',
'amyloidosis primary systemic',
'partial deletion of the long arm of chromosome X',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `owl_ontology_eval`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6303 |
| cosine_accuracy@3 | 0.8148 |
| cosine_accuracy@5 | 0.8775 |
| cosine_accuracy@10 | 0.9268 |
| cosine_precision@1 | 0.6303 |
| cosine_precision@3 | 0.2763 |
| cosine_precision@5 | 0.1798 |
| cosine_precision@10 | 0.0957 |
| cosine_recall@1 | 0.6217 |
| cosine_recall@3 | 0.8081 |
| cosine_recall@5 | 0.8724 |
| cosine_recall@10 | 0.9241 |
| **cosine_ndcg@10** | **0.7797** |
| cosine_mrr@10 | 0.7342 |
| cosine_map@100 | 0.7341 |
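As a hedged sketch of how this kind of evaluation is run with Sentence Transformers, the snippet below wires up `InformationRetrievalEvaluator` with placeholder queries and corpus; the actual `owl_ontology_eval` data is not distributed with this card.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("pankajrajdeo/bond-embed-16L")

# Placeholder data: the real owl_ontology_eval queries/corpus are not included
# in this card, so these entries only illustrate the expected format.
queries = {"q1": "Light-chain amyloidosis"}
corpus = {"d1": "amyloidosis primary systemic", "d2": "Teneria teneriensis"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="owl_ontology_eval",
)
metrics = evaluator(model)  # dict with cosine_accuracy@k, cosine_ndcg@10, ...
print(metrics)
```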
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 1,441,905 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.48 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.68 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-------------------------------------|:-------------------------------------|
| <code>Mangshan horned toad</code> | <code>Mangshan spadefoot toad</code> |
| <code>Leuconotopicos borealis</code> | <code>Picoides borealis</code> |
| <code>Cylindrella teneriensis</code> | <code>Teneria teneriensis</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
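A minimal sketch of how this loss is typically instantiated in Sentence Transformers, assuming the scale and similarity function listed above; the `mini_batch_size` is an assumed value that only affects memory use, not the training objective.
```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("pankajrajdeo/bond-embed-16L")

# scale and similarity_fct mirror the parameters above; mini_batch_size is an
# assumption made for illustration.
loss = CachedMultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
    mini_batch_size=32,
)
```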
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 1024
- `learning_rate`: 1.5e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.05
- `bf16`: True
- `dataloader_num_workers`: 32
- `load_best_model_at_end`: True
- `gradient_checkpointing`: True
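Expressed as code, the non-default values above correspond roughly to the following `SentenceTransformerTrainingArguments`; the output directory is a placeholder.
```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Hypothetical reconstruction of the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs/bond-embed-16L",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=1024,
    learning_rate=1.5e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    bf16=True,
    dataloader_num_workers=32,
    load_best_model_at_end=True,
    gradient_checkpointing=True,
)
```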
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1.5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 32
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | owl_ontology_eval_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:--------------------------------:|
| 0.0717 | 100 | 1.3232 | - |
| 0.1434 | 200 | 1.021 | - |
| 0.2151 | 300 | 0.9633 | - |
| 0.2867 | 400 | 0.9068 | - |
| 0.3297 | 460 | - | 0.7207 |
| 0.3584 | 500 | 0.8723 | - |
| 0.4301 | 600 | 0.852 | - |
| 0.5018 | 700 | 0.8161 | - |
| 0.5735 | 800 | 0.7939 | - |
| 0.6452 | 900 | 0.7935 | - |
| 0.6595 | 920 | - | 0.7364 |
| 0.7168 | 1000 | 0.7646 | - |
| 0.7885 | 1100 | 0.7464 | - |
| 0.8602 | 1200 | 0.7376 | - |
| 0.9319 | 1300 | 0.7313 | - |
| 0.9892 | 1380 | - | 0.7468 |
| 1.0036 | 1400 | 0.7099 | - |
| 1.0753 | 1500 | 0.6884 | - |
| 1.1470 | 1600 | 0.6776 | - |
| 1.2186 | 1700 | 0.6694 | - |
| 1.2903 | 1800 | 0.6641 | - |
| 1.3190 | 1840 | - | 0.7561 |
| 1.3620 | 1900 | 0.6526 | - |
| 1.4337 | 2000 | 0.6524 | - |
| 1.5054 | 2100 | 0.6364 | - |
| 1.5771 | 2200 | 0.6339 | - |
| 1.6487 | 2300 | 0.626 | 0.7614 |
| 1.7204 | 2400 | 0.6197 | - |
| 1.7921 | 2500 | 0.6193 | - |
| 1.8638 | 2600 | 0.6155 | - |
| 1.9355 | 2700 | 0.6142 | - |
| 1.9785 | 2760 | - | 0.7662 |
| 2.0072 | 2800 | 0.5853 | - |
| 2.0789 | 2900 | 0.5824 | - |
| 2.1505 | 3000 | 0.5769 | - |
| 2.2222 | 3100 | 0.5765 | - |
| 2.2939 | 3200 | 0.5608 | - |
| 2.3082 | 3220 | - | 0.7698 |
| 2.3656 | 3300 | 0.5695 | - |
| 2.4373 | 3400 | 0.5641 | - |
| 2.5090 | 3500 | 0.5638 | - |
| 2.5806 | 3600 | 0.554 | - |
| 2.6380 | 3680 | - | 0.7735 |
| 2.6523 | 3700 | 0.5539 | - |
| 2.7240 | 3800 | 0.5495 | - |
| 2.7957 | 3900 | 0.5556 | - |
| 2.8674 | 4000 | 0.5397 | - |
| 2.9391 | 4100 | 0.5447 | - |
| 2.9677 | 4140 | - | 0.7757 |
| 3.0108 | 4200 | 0.5331 | - |
| 3.0824 | 4300 | 0.5336 | - |
| 3.1541 | 4400 | 0.5346 | - |
| 3.2258 | 4500 | 0.5247 | - |
| 3.2975 | 4600 | 0.5241 | 0.7775 |
| 3.3692 | 4700 | 0.5257 | - |
| 3.4409 | 4800 | 0.5241 | - |
| 3.5125 | 4900 | 0.5171 | - |
| 3.5842 | 5000 | 0.5215 | - |
| 3.6272 | 5060 | - | 0.7787 |
| 3.6559 | 5100 | 0.5203 | - |
| 3.7276 | 5200 | 0.5214 | - |
| 3.7993 | 5300 | 0.5266 | - |
| 3.8710 | 5400 | 0.5127 | - |
| 3.9427 | 5500 | 0.5062 | - |
| 3.9570 | 5520 | - | 0.7790 |
| 4.0143 | 5600 | 0.5104 | - |
| 4.0860 | 5700 | 0.5155 | - |
| 4.1577 | 5800 | 0.5042 | - |
| 4.2294 | 5900 | 0.5174 | - |
| 4.2867 | 5980 | - | 0.7797 |
| 4.3011 | 6000 | 0.509 | - |
| 4.3728 | 6100 | 0.5106 | - |
| 4.4444 | 6200 | 0.5076 | - |
| 4.5161 | 6300 | 0.5046 | - |
| 4.5878 | 6400 | 0.5077 | - |
| 4.6165 | 6440 | - | 0.7795 |
| 4.6595 | 6500 | 0.5114 | - |
| 4.7312 | 6600 | 0.5103 | - |
| 4.8029 | 6700 | 0.5106 | - |
| 4.8746 | 6800 | 0.5102 | - |
| 4.9462 | 6900 | 0.5076 | 0.7797 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.53.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| ArifeenScotland/qwen2-7b-instruct-trl-sft-ChartQA | ArifeenScotland | 2025-08-15T08:15:17Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-7B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-08-15T07:14:17Z |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ArifeenScotland/qwen2-7b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rgu-comp/qwen2-7b-instruct-trl-sft-ChartQA/runs/5b6dt64y)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.56.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| chavajaz/vit-ena24-clase | chavajaz | 2025-08-15T07:36:57Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2025-08-15T07:02:56Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
model-index:
- name: vit-ena24-clase
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: wonders_md
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9968798751950078
- name: F1
type: f1
value: 0.9971636719745751
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-ena24-clase
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the wonders_md dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0467
- Accuracy: 0.9969
- F1: 0.9972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough mapping to code is sketched after the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
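For illustration only, the settings above map roughly onto 🤗 `TrainingArguments` as follows; `fp16=True` stands in for "Native AMP" and the output directory is a placeholder.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="vit-ena24-clase",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```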
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.307 | 0.2075 | 100 | 0.3149 | 0.9579 | 0.9619 |
| 0.1011 | 0.4149 | 200 | 0.1359 | 0.9774 | 0.9794 |
| 0.0733 | 0.6224 | 300 | 0.0740 | 0.9906 | 0.9916 |
| 0.0362 | 0.8299 | 400 | 0.0467 | 0.9969 | 0.9972 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
| capungmerah627/blockassist-bc-stinging_soaring_porcupine_1755238635 | capungmerah627 | 2025-08-15T06:43:46Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging soaring porcupine", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T06:43:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| therarelab/diff_bottle_red_left | therarelab | 2025-08-15T06:06:42Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "diffusion", "robotics", "dataset:therarelab/bottle_red_left", "arxiv:2303.04137", "license:apache-2.0", "region:us"] | robotics | 2025-08-15T06:05:48Z |
---
datasets: therarelab/bottle_red_left
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- diffusion
- robotics
- lerobot
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
| Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755237861 | Ferdi3425 | 2025-08-15T06:05:13Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T06:05:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755237607 | Ferdi3425 | 2025-08-15T06:01:20Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T06:00:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| aleebaster/blockassist-bc-sly_eager_boar_1755233865 | aleebaster | 2025-08-15T05:32:30Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T05:32:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| signup-hf/gpt-oss-20b-multilingual-reasoner | signup-hf | 2025-08-15T05:13:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us"] | null | 2025-08-15T04:53:50Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="signup-hf/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
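As a rough illustration only, supervised fine-tuning with TRL generally follows the pattern below; the dataset, batch size, and output path are placeholders and are not details reported in this card.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the multilingual reasoning data actually used for this
# model is not specified in the card.
dataset = load_dataset("trl-lib/Capybara", split="train")

config = SFTConfig(
    output_dir="gpt-oss-20b-multilingual-reasoner",  # placeholder
    per_device_train_batch_size=1,                   # assumed value
    num_train_epochs=1,                              # assumed value
    bf16=True,
)
trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```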
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.0.dev0
- Pytorch: 2.8.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| felixZzz/student_sft_len16k_sub1k_reject_mix | felixZzz | 2025-08-15T05:04:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-15T04:59:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| hoooooooooori/minicompe-Scalability-test | hoooooooooori | 2025-08-15T05:02:38Z | 0 | 0 | transformers | ["transformers", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-4B", "base_model:finetune:unsloth/Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-15T05:02:37Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hoooooooooori
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755232546 | lisaozill03 | 2025-08-15T05:01:00Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T05:00:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| bohrariyanshi/pii-ner-extraction | bohrariyanshi | 2025-08-15T04:57:00Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "token-classification", "ner", "named-entity-recognition", "multilingual", "wikiann", "person", "organization", "location", "en", "dataset:unimelb-nlp/wikiann", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-08-15T03:36:13Z |
---
license: apache-2.0
datasets:
- unimelb-nlp/wikiann
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- google-bert/bert-base-multilingual-cased
pipeline_tag: token-classification
library_name: transformers
tags:
- ner
- named-entity-recognition
- token-classification
- bert
- multilingual
- wikiann
- person
- organization
- location
---
# Multilingual NER Model for PII Detection
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the WikiANN dataset for Named Entity Recognition (NER).
## Model Description
- **Developed by:** bohrariyanshi
- **Model type:** Token Classification (NER)
- **Language(s):** Multilingual (primarily English)
- **Base model:** bert-base-multilingual-cased
- **Finetuned from model:** bert-base-multilingual-cased
## Intended Uses & Limitations
### Intended Uses
- Named Entity Recognition for Person (PER), Organization (ORG), and Location (LOC)
- Text analysis and information extraction
- PII (Personally Identifiable Information) detection
### Limitations
- Trained primarily on English WikiANN data
- May have lower performance on non-English text
- Limited to PER, ORG, LOC entity types
## Training Data
The model was fine-tuned on the [WikiANN](https://huggingface.co/datasets/wikiann) dataset:
- **Training examples:** 20,000
- **Validation examples:** 10,000
- **Test examples:** 10,000
- **Entity types:** PER (Person), ORG (Organization), LOC (Location)
## Training Procedure
### Training Hyperparameters
- **Learning rate:** 2e-5
- **Training epochs:** 3
- **Batch size:** 16
- **Max sequence length:** 256
- **Optimizer:** AdamW
- **Weight decay:** 0.01
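A hedged sketch of how these hyperparameters might be wired up with the 🤗 `Trainer` stack; the label count, output directory, and preprocessing details are assumptions for illustration (dataset tokenization and metric computation are omitted).
```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

# Assumed label set: WikiANN with PER/ORG/LOC in BIO format uses 7 labels.
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=7
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

args = TrainingArguments(
    output_dir="pii-ner-extraction",  # placeholder
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=16,
    weight_decay=0.01,
)
# Inputs would be tokenized with the reported max length, e.g.:
# tokenizer(tokens, is_split_into_words=True, truncation=True, max_length=256)
```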
## Performance
The model achieves high confidence predictions on standard NER tasks:
- **High confidence predictions (>90%):** 19/21 entities in test cases
- **Average inference time:** ~264ms per sentence
- **Entity types detected:** PER, ORG, LOC with high accuracy
## Usage
```python
from transformers import pipeline
# Load the model
ner = pipeline("ner", model="bohrariyanshi/pii-ner-extraction", aggregation_strategy="simple")
# Example usage
text = "Barack Obama was born in Hawaii."
entities = ner(text)
print(entities)
# Output: [{'entity_group': 'PER', 'score': 0.968, 'word': 'Barack Obama', 'start': 0, 'end': 12}, ...]
```
## Model Architecture
- **Base:** BERT-base-multilingual-cased
- **Parameters:** 177M
- **Architecture:** Transformer with token classification head
- **Task:** Named Entity Recognition (NER)
## Evaluation Results
The model demonstrates superior performance compared to base BERT:
- **Confident predictions:** 19 high-confidence entities vs 0 for base BERT
- **Precision:** High accuracy in entity detection
- **Speed:** ~264ms per sentence (acceptable for production use)
## Environmental Impact
Training was performed on Google Colab T4 GPU with estimated carbon footprint considerations.
## Citation
If you use this model, please cite:
```bibtex
@misc{bohrariyanshi-pii-ner-extraction,
author = {bohrariyanshi},
title = {Multilingual NER Model for PII Detection},
year = {2025},
url = {https://huggingface.co/bohrariyanshi/pii-ner-extraction}
}
```
| zarude/blockassist-bc-rabid_timid_rat_1755233694 | zarude | 2025-08-15T04:55:47Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rabid timid rat", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T04:55:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid timid rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| smirki/UIGEN-X-4B-STG-Modal-lora | smirki | 2025-08-15T04:50:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-15T04:50:13Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| zarude/blockassist-bc-rabid_timid_rat_1755233097 | zarude | 2025-08-15T04:46:01Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rabid timid rat", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T04:45:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid timid rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| mang3dd/blockassist-bc-tangled_slithering_alligator_1755231598 | mang3dd | 2025-08-15T04:45:28Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us"] | null | 2025-08-15T04:45:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| qualcomm/ResNeXt50 | qualcomm | 2025-08-15T04:41:28Z | 32 | 0 | pytorch | ["pytorch", "tflite", "backbone", "android", "image-classification", "arxiv:1611.05431", "license:other", "region:us"] | image-classification | 2024-02-25T23:08:17Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# ResNeXt50: Optimized for Mobile Deployment
## ImageNet classifier and general-purpose backbone
ResNeXt50 is a machine learning model that can classify images from the ImageNet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of ResNeXt50 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py).
This repository provides scripts to run ResNeXt50 on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/resnext50).
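As a point of reference only, the float backbone referenced above is the standard torchvision model and can be instantiated as sketched below; this is not the Qualcomm AI Hub export flow.
```python
import torch
from torchvision.models import ResNeXt50_32X4D_Weights, resnext50_32x4d

# Load the ImageNet-pretrained torchvision backbone this model is based on.
weights = ResNeXt50_32X4D_Weights.IMAGENET1K_V2
model = resnext50_32x4d(weights=weights).eval()

# 224x224 input, matching the resolution listed under Model Details.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 1000])
```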
### Model Details
- **Model Type:** Image classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 25.0M
- Model size (float): 95.4 MB
- Model size (w8a8): 24.8 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| ResNeXt50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 12.122 ms | 0 - 95 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 11.881 ms | 1 - 43 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 3.395 ms | 0 - 96 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 3.851 ms | 1 - 39 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.466 ms | 0 - 316 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 2.445 ms | 1 - 16 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 3.904 ms | 0 - 95 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 3.758 ms | 1 - 44 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 12.122 ms | 0 - 95 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 11.881 ms | 1 - 43 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.609 ms | 0 - 288 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 2.438 ms | 1 - 16 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 4.034 ms | 0 - 88 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 3.906 ms | 1 - 33 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.485 ms | 0 - 313 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 2.433 ms | 1 - 15 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 3.904 ms | 0 - 95 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 3.758 ms | 1 - 44 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.498 ms | 0 - 303 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 2.434 ms | 1 - 16 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 2.459 ms | 0 - 125 MB | NPU | [ResNeXt50.onnx](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.onnx) |
| ResNeXt50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.78 ms | 0 - 100 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.774 ms | 1 - 53 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1.832 ms | 1 - 55 MB | NPU | [ResNeXt50.onnx](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.onnx) |
| ResNeXt50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.646 ms | 0 - 98 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.tflite) |
| ResNeXt50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.443 ms | 1 - 49 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.79 ms | 0 - 48 MB | NPU | [ResNeXt50.onnx](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.onnx) |
| ResNeXt50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 3.019 ms | 209 - 209 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.dlc) |
| ResNeXt50 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2.444 ms | 51 - 51 MB | NPU | [ResNeXt50.onnx](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50.onnx) |
| ResNeXt50 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.184 ms | 0 - 50 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 10.402 ms | 0 - 47 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.077 ms | 0 - 62 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.481 ms | 0 - 56 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.903 ms | 0 - 92 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.089 ms | 0 - 31 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.21 ms | 0 - 50 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.439 ms | 0 - 47 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 3.079 ms | 0 - 65 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 4.418 ms | 0 - 61 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 56.131 ms | 0 - 188 MB | GPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.184 ms | 0 - 50 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 10.402 ms | 0 - 47 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.906 ms | 0 - 93 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.08 ms | 0 - 31 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.469 ms | 0 - 51 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.709 ms | 0 - 50 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.917 ms | 0 - 92 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.089 ms | 0 - 18 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.21 ms | 0 - 50 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.439 ms | 0 - 47 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.918 ms | 0 - 93 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.083 ms | 0 - 14 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 1.375 ms | 0 - 41 MB | NPU | [ResNeXt50.onnx](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.onnx) |
| ResNeXt50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.677 ms | 0 - 59 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.81 ms | 0 - 61 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.979 ms | 0 - 70 MB | NPU | [ResNeXt50.onnx](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.onnx) |
| ResNeXt50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.676 ms | 0 - 53 MB | NPU | [ResNeXt50.tflite](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.tflite) |
| ResNeXt50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.733 ms | 0 - 53 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.013 ms | 0 - 56 MB | NPU | [ResNeXt50.onnx](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.onnx) |
| ResNeXt50 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.236 ms | 78 - 78 MB | NPU | [ResNeXt50.dlc](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.dlc) |
| ResNeXt50 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.362 ms | 28 - 28 MB | NPU | [ResNeXt50.onnx](https://huggingface.co/qualcomm/ResNeXt50/blob/main/ResNeXt50_w8a8.onnx) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.resnext50.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.resnext50.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.resnext50.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/resnext50/qai_hub_models/models/ResNeXt50/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.resnext50 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the on-device output against the expected PyTorch output.
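For instance, a minimal sketch of such a check might look like the following. The `psnr` helper and the assumed output layout (a dict mapping output names to lists of arrays, with a single logits output) are illustrative assumptions, not part of the qai_hub API:
```python
import numpy as np

# Illustrative only: compare the on-device logits against the local PyTorch
# reference computed from the same sample inputs used above.
def psnr(reference: np.ndarray, candidate: np.ndarray, eps: float = 1e-10) -> float:
    mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    peak = np.abs(reference).max()
    return float(10.0 * np.log10((peak ** 2) / (mse + eps)))

# Assumption: the wrapper can be called directly on the sample tensors and
# returns a single output tensor; adapt the indexing to your model's outputs.
reference = torch_model(*[torch.tensor(data[0]) for _, data in input_data.items()])
device_output = next(iter(on_device_output.values()))[0]
print(f"PSNR vs. PyTorch reference: {psnr(reference.detach().numpy(), device_output):.2f} dB")
```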
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.resnext50.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.resnext50.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application (a quick local
sanity check of the exported asset is sketched after this list).
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
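Before wiring the exported asset into an Android app, you can optionally sanity-check the `.tflite` file locally with the TensorFlow Lite Python interpreter. This is only a minimal sketch; the file name `ResNeXt50.tflite` is an assumption and should point at the asset you downloaded from this repository or produced with the export script:
```python
import numpy as np
import tensorflow as tf

# Minimal local sanity check of the exported TFLite asset (path is an assumption).
interpreter = tf.lite.Interpreter(model_path="ResNeXt50.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed a zero tensor with the expected shape/dtype and run one inference.
interpreter.set_tensor(input_details["index"],
                       np.zeros(input_details["shape"], dtype=input_details["dtype"]))
interpreter.invoke()
print("Output shape:", interpreter.get_tensor(output_details["index"]).shape)
```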
## View on Qualcomm® AI Hub
Get more details on ResNeXt50's performance across various devices [here](https://aihub.qualcomm.com/models/resnext50).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of ResNeXt50 can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/ResNet18
|
qualcomm
| 2025-08-15T04:38:14Z | 119 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:1512.03385",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T22:51:26Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# ResNet18: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
ResNet18 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of ResNet18 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py).
This repository provides scripts to run ResNet18 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/resnet18).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 11.7M
- Model size (float): 44.6 MB
- Model size (w8a8): 11.3 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| ResNet18 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 6.063 ms | 0 - 16 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 5.887 ms | 1 - 14 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 2.014 ms | 0 - 45 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.342 ms | 1 - 24 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.356 ms | 0 - 221 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.26 ms | 0 - 82 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.107 ms | 0 - 16 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.91 ms | 1 - 14 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 6.063 ms | 0 - 16 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 5.887 ms | 1 - 14 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.349 ms | 0 - 219 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.263 ms | 0 - 87 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 2.502 ms | 0 - 20 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 2.261 ms | 0 - 17 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.358 ms | 0 - 221 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.259 ms | 0 - 86 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.107 ms | 0 - 16 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.91 ms | 1 - 14 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.354 ms | 0 - 220 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.257 ms | 0 - 82 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 1.139 ms | 0 - 64 MB | NPU | [ResNet18.onnx](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.onnx) |
| ResNet18 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.949 ms | 0 - 46 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.889 ms | 1 - 24 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.846 ms | 0 - 24 MB | NPU | [ResNet18.onnx](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.onnx) |
| ResNet18 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.879 ms | 0 - 23 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.tflite) |
| ResNet18 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.821 ms | 1 - 20 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.856 ms | 1 - 17 MB | NPU | [ResNet18.onnx](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.onnx) |
| ResNet18 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.881 ms | 74 - 74 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.dlc) |
| ResNet18 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.276 ms | 23 - 23 MB | NPU | [ResNet18.onnx](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18.onnx) |
| ResNet18 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 1.051 ms | 0 - 13 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.289 ms | 0 - 13 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.47 ms | 0 - 42 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.743 ms | 0 - 40 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.399 ms | 0 - 87 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.535 ms | 0 - 89 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 0.577 ms | 0 - 14 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.704 ms | 0 - 14 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 1.404 ms | 0 - 30 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 2.031 ms | 0 - 28 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 7.139 ms | 0 - 2 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 1.051 ms | 0 - 13 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.289 ms | 0 - 13 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.408 ms | 0 - 90 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.547 ms | 0 - 89 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 0.756 ms | 0 - 15 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 0.896 ms | 0 - 16 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.395 ms | 0 - 90 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.548 ms | 0 - 89 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 0.577 ms | 0 - 14 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.704 ms | 0 - 14 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.4 ms | 0 - 88 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.544 ms | 0 - 4 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.781 ms | 0 - 29 MB | NPU | [ResNet18.onnx](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.onnx) |
| ResNet18 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.313 ms | 0 - 38 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.413 ms | 0 - 35 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.586 ms | 0 - 43 MB | NPU | [ResNet18.onnx](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.onnx) |
| ResNet18 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.274 ms | 0 - 18 MB | NPU | [ResNet18.tflite](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.tflite) |
| ResNet18 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.308 ms | 0 - 19 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.643 ms | 0 - 24 MB | NPU | [ResNet18.onnx](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.onnx) |
| ResNet18 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.646 ms | 71 - 71 MB | NPU | [ResNet18.dlc](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.dlc) |
| ResNet18 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.89 ms | 11 - 11 MB | NPU | [ResNet18.onnx](https://huggingface.co/qualcomm/ResNet18/blob/main/ResNet18_w8a8.onnx) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.resnet18.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.resnet18.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.resnet18.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/resnet18/qai_hub_models/models/ResNet18/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.resnet18 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the on-device output against the expected PyTorch output.
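As an illustration, a quick spot check might decode the top-5 Imagenet classes from the on-device logits. This sketch assumes the model returns a single `(1, 1000)` logits array as its only output; that layout is an assumption, so adapt the indexing to your actual outputs:
```python
import numpy as np

# Assumption: the only output is a (1, 1000) Imagenet logits array.
logits = next(iter(on_device_output.values()))[0].reshape(-1)
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Print the five most likely Imagenet class indices and their probabilities.
for idx in np.argsort(probs)[-5:][::-1]:
    print(f"class {idx}: p={probs[idx]:.3f}")
```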
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.resnet18.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.resnet18.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on ResNet18's performance across various devices [here](https://aihub.qualcomm.com/models/resnet18).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of ResNet18 can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
marygilbert/blockassist-bc-flightless_unseen_parrot_1755231292
|
marygilbert
| 2025-08-15T04:31:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless unseen parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T04:31:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless unseen parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1755230658
|
maxibillion1975
| 2025-08-15T04:31:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T04:31:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qualcomm/MNASNet05
|
qualcomm
| 2025-08-15T04:30:34Z | 47 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"backbone",
"android",
"image-classification",
"arxiv:1807.11626",
"license:other",
"region:us"
] |
image-classification
| 2024-02-25T23:05:32Z |
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

# MNASNet05: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone
MNASNet05 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of MNASNet05 found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mnasnet.py).
This repository provides scripts to run MNASNet05 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mnasnet05).
### Model Details
- **Model Type:** Model_use_case.image_classification
- **Model Stats:**
- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 2.21M
- Model size (float): 8.45 MB
- Model size (w8a16): 2.79 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| MNASNet05 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.305 ms | 0 - 18 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 2.194 ms | 1 - 17 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.033 ms | 0 - 35 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.418 ms | 1 - 37 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.715 ms | 0 - 52 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.712 ms | 0 - 50 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.049 ms | 0 - 19 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.026 ms | 1 - 18 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.305 ms | 0 - 18 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 2.194 ms | 1 - 17 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.717 ms | 0 - 50 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.717 ms | 0 - 49 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.412 ms | 0 - 20 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.344 ms | 1 - 20 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.719 ms | 0 - 50 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.717 ms | 0 - 48 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.049 ms | 0 - 19 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.026 ms | 1 - 18 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.717 ms | 0 - 52 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.713 ms | 0 - 49 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.579 ms | 0 - 49 MB | NPU | [MNASNet05.onnx](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.onnx) |
| MNASNet05 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.477 ms | 0 - 33 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.478 ms | 1 - 32 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.391 ms | 0 - 33 MB | NPU | [MNASNet05.onnx](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.onnx) |
| MNASNet05 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.388 ms | 0 - 29 MB | NPU | [MNASNet05.tflite](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.tflite) |
| MNASNet05 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.493 ms | 1 - 21 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.439 ms | 1 - 25 MB | NPU | [MNASNet05.onnx](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.onnx) |
| MNASNet05 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.264 ms | 39 - 39 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.dlc) |
| MNASNet05 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.673 ms | 5 - 5 MB | NPU | [MNASNet05.onnx](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05.onnx) |
| MNASNet05 | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.639 ms | 0 - 14 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.947 ms | 0 - 31 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.784 ms | 0 - 21 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.99 ms | 0 - 16 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 2.573 ms | 0 - 21 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.639 ms | 0 - 14 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.789 ms | 0 - 22 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.212 ms | 0 - 24 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.784 ms | 0 - 21 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.99 ms | 0 - 16 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.787 ms | 0 - 21 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.623 ms | 0 - 21 MB | NPU | [MNASNet05.onnx](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.onnx) |
| MNASNet05 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.521 ms | 0 - 26 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.411 ms | 0 - 32 MB | NPU | [MNASNet05.onnx](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.onnx) |
| MNASNet05 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.438 ms | 0 - 25 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.435 ms | 0 - 23 MB | NPU | [MNASNet05.onnx](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.onnx) |
| MNASNet05 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.985 ms | 15 - 15 MB | NPU | [MNASNet05.dlc](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.dlc) |
| MNASNet05 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.697 ms | 2 - 2 MB | NPU | [MNASNet05.onnx](https://huggingface.co/qualcomm/MNASNet05/blob/main/MNASNet05_w8a16.onnx) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.mnasnet05.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mnasnet05.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.mnasnet05.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/mnasnet05/qai_hub_models/models/MNASNet05/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.mnasnet05 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the on-device output against the expected PyTorch output.
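For example, a relative-error comparison against the local PyTorch output could be sketched as follows. The single-output layout of `on_device_output`, and calling the wrapper directly on the sample tensors, are illustrative assumptions:
```python
import numpy as np

# Illustrative only: element-wise relative error between the local PyTorch
# output and the on-device output for the same sample inputs.
reference = torch_model(*[torch.tensor(data[0]) for _, data in input_data.items()])
reference = reference.detach().numpy()
device_output = next(iter(on_device_output.values()))[0]

rel_err = np.abs(reference - device_output) / (np.abs(reference) + 1e-6)
print(f"max relative error: {rel_err.max():.4f}, mean: {rel_err.mean():.4f}")
```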
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.mnasnet05.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.mnasnet05.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on MNASNet05's performance across various devices [here](https://aihub.qualcomm.com/models/mnasnet05).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of MNASNet05 can be found
[here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [MnasNet: Platform-Aware Neural Architecture Search for Mobile](https://arxiv.org/abs/1807.11626)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mnasnet.py)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/HRNetPose
|
qualcomm
| 2025-08-15T04:23:42Z | 355 | 7 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"keypoint-detection",
"arxiv:1902.09212",
"license:other",
"region:us"
] |
keypoint-detection
| 2024-02-25T22:45:59Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: keypoint-detection
---

# HRNetPose: Optimized for Mobile Deployment
## Perform accurate human pose estimation
HRNet performs pose estimation in high-resolution representations.
This model is an implementation of HRNetPose found [here](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch).
This repository provides scripts to run HRNetPose on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/hrnet_pose).
### Model Details
- **Model Type:** Model_use_case.pose_estimation
- **Model Stats:**
- Model checkpoint: hrnet_posenet_FP32_state_dict
- Input resolution: 256x192
- Number of parameters: 28.5M
- Model size (float): 109 MB
- Model size (w8a8): 28.1 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| HRNetPose | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 14.287 ms | 0 - 75 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 14.21 ms | 1 - 39 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 3.737 ms | 0 - 125 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 4.823 ms | 0 - 58 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.602 ms | 0 - 282 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 2.61 ms | 1 - 14 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 4.373 ms | 0 - 84 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 4.298 ms | 1 - 40 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 14.287 ms | 0 - 75 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 14.21 ms | 1 - 39 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.614 ms | 0 - 122 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 2.615 ms | 1 - 12 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 4.559 ms | 0 - 72 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 4.477 ms | 1 - 37 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.609 ms | 0 - 248 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 2.625 ms | 1 - 13 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 4.373 ms | 0 - 84 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 4.298 ms | 1 - 40 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.602 ms | 0 - 120 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 2.619 ms | 1 - 14 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 2.698 ms | 0 - 126 MB | NPU | [HRNetPose.onnx](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.onnx) |
| HRNetPose | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.9 ms | 0 - 130 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.051 ms | 0 - 59 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 2.137 ms | 0 - 76 MB | NPU | [HRNetPose.onnx](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.onnx) |
| HRNetPose | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.84 ms | 0 - 80 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.tflite) |
| HRNetPose | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.86 ms | 1 - 45 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.94 ms | 0 - 50 MB | NPU | [HRNetPose.onnx](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.onnx) |
| HRNetPose | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 3.173 ms | 104 - 104 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.dlc) |
| HRNetPose | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2.656 ms | 56 - 56 MB | NPU | [HRNetPose.onnx](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose.onnx) |
| HRNetPose | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 13.873 ms | 0 - 50 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 14.102 ms | 0 - 49 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.201 ms | 0 - 95 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.791 ms | 0 - 85 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.955 ms | 0 - 162 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.171 ms | 0 - 47 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.312 ms | 0 - 51 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.43 ms | 0 - 51 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 3.825 ms | 0 - 78 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 5.411 ms | 0 - 72 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 17.142 ms | 0 - 3 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 13.873 ms | 0 - 50 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 14.102 ms | 0 - 49 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.974 ms | 0 - 162 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.17 ms | 0 - 77 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.675 ms | 0 - 53 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.864 ms | 0 - 52 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.973 ms | 0 - 158 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.173 ms | 0 - 24 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.312 ms | 0 - 51 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.43 ms | 0 - 51 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 0.96 ms | 0 - 157 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.16 ms | 0 - 87 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 4.841 ms | 0 - 107 MB | NPU | [HRNetPose.onnx](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.onnx) |
| HRNetPose | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.721 ms | 0 - 91 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.828 ms | 0 - 85 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 3.503 ms | 48 - 229 MB | NPU | [HRNetPose.onnx](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.onnx) |
| HRNetPose | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.627 ms | 0 - 53 MB | NPU | [HRNetPose.tflite](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.tflite) |
| HRNetPose | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.662 ms | 0 - 55 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 3.888 ms | 1 - 187 MB | NPU | [HRNetPose.onnx](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.onnx) |
| HRNetPose | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.263 ms | 130 - 130 MB | NPU | [HRNetPose.dlc](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.dlc) |
| HRNetPose | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 5.672 ms | 27 - 27 MB | NPU | [HRNetPose.onnx](https://huggingface.co/qualcomm/HRNetPose/blob/main/HRNetPose_w8a8.onnx) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[hrnet-pose]" torch==2.4.1 -f https://download.openmmlab.com/mmcv/dist/cpu/torch2.4/index.html -f https://qaihub-public-python-wheels.s3.us-west-2.amazonaws.com/index.html
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.hrnet_pose.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.hrnet_pose.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.hrnet_pose.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/hrnet_pose/qai_hub_models/models/HRNetPose/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.hrnet_pose import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
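As a minimal sketch of such a spot check (assuming the wrapper returns a single tensor and that `download_output_data()` yields a dict mapping output names to lists of NumPy arrays; neither is guaranteed by this card), you could compute PSNR like this:
```python
import numpy as np
import torch

# Reference output from the local PyTorch model on the same sample inputs
# (assumes the wrapper returns a single tensor, as it was traced above).
with torch.no_grad():
    ref = torch_model(*[torch.tensor(d[0]) for _, d in input_data.items()]).numpy()

# First (and assumed only) output from the on-device inference job; the client
# is assumed to return a dict mapping output names to lists of NumPy arrays.
dev = np.asarray(list(on_device_output.values())[0][0])

# Peak signal-to-noise ratio between the reference and on-device outputs.
mse = np.mean((ref.astype(np.float64) - dev.astype(np.float64)) ** 2)
psnr = 20 * np.log10(np.abs(ref).max() / np.sqrt(mse))
print(f"PSNR (PyTorch vs. on-device): {psnr:.2f} dB")
```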
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.hrnet_pose.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.hrnet_pose.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on HRNetPose's performance across various devices [here](https://aihub.qualcomm.com/models/hrnet_pose).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of HRNetPose can be found
[here](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Deep High-Resolution Representation Learning for Human Pose Estimation](https://arxiv.org/abs/1902.09212)
* [Source Model Implementation](https://github.com/leoxiaobin/deep-high-resolution-net.pytorch)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/EfficientViT-l2-seg
|
qualcomm
| 2025-08-15T04:16:08Z | 0 | 0 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"image-segmentation",
"arxiv:2205.14756",
"license:other",
"region:us"
] |
image-segmentation
| 2024-11-27T19:23:29Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: image-segmentation
---

# EfficientViT-l2-seg: Optimized for Mobile Deployment
## Semantic segmentation at higher resolution
EfficientViT is a machine learning model that can segment images from the Cityscapes dataset. It uses lightweight and hardware-efficient operations and thus delivers a significant speedup on diverse hardware platforms.
This model is an implementation of EfficientViT-l2-seg found [here](https://github.com/CVHub520/efficientvit).
This repository provides scripts to run EfficientViT-l2-seg on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/efficientvit_l2_seg).
### Model Details
- **Model Type:** Semantic segmentation
- **Model Stats:**
- Model checkpoint: l2.pt
- Input resolution: 2048x1024
- Number of output classes: 19
- Model size (float): 201 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| EfficientViT-l2-seg | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 1805.139 ms | 101 - 200 MB | NPU | [EfficientViT-l2-seg.onnx](https://huggingface.co/qualcomm/EfficientViT-l2-seg/blob/main/EfficientViT-l2-seg.onnx) |
| EfficientViT-l2-seg | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1976.118 ms | 127 - 999 MB | NPU | [EfficientViT-l2-seg.onnx](https://huggingface.co/qualcomm/EfficientViT-l2-seg/blob/main/EfficientViT-l2-seg.onnx) |
| EfficientViT-l2-seg | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1190.489 ms | 94 - 1048 MB | NPU | [EfficientViT-l2-seg.onnx](https://huggingface.co/qualcomm/EfficientViT-l2-seg/blob/main/EfficientViT-l2-seg.onnx) |
| EfficientViT-l2-seg | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2976.535 ms | 143 - 143 MB | NPU | [EfficientViT-l2-seg.onnx](https://huggingface.co/qualcomm/EfficientViT-l2-seg/blob/main/EfficientViT-l2-seg.onnx) |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[efficientvit-l2-seg]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.efficientvit_l2_seg.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.efficientvit_l2_seg.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.efficientvit_l2_seg.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/efficientvit_l2_seg/qai_hub_models/models/EfficientViT-l2-seg/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.efficientvit_l2_seg import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative
error, or spot-check the output against the expected output.
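For a segmentation model such as this one, a quick sanity check is the per-pixel class agreement between the PyTorch and on-device logits. The sketch below assumes the single output is a class-logit map of shape `(1, num_classes, H, W)` and that `download_output_data()` returns a dict mapping output names to lists of NumPy arrays; both are assumptions, not guarantees from this card:
```python
import numpy as np
import torch

# Reference logits from the local PyTorch model on the same sample inputs.
with torch.no_grad():
    ref_logits = torch_model(*[torch.tensor(d[0]) for _, d in input_data.items()]).numpy()

# On-device logits from the inference job (first output assumed).
dev_logits = np.asarray(list(on_device_output.values())[0][0])

# Per-pixel predicted classes and their agreement rate.
ref_classes = ref_logits.argmax(axis=1)
dev_classes = dev_logits.argmax(axis=1)
agreement = (ref_classes == dev_classes).mean()
print(f"Per-pixel class agreement (PyTorch vs. on-device): {agreement:.2%}")
```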
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.efficientvit_l2_seg.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.efficientvit_l2_seg.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on EfficientViT-l2-seg's performance across various devices [here](https://aihub.qualcomm.com/models/efficientvit_l2_seg).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of EfficientViT-l2-seg can be found
[here](https://github.com/CVHub520/efficientvit/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction](https://arxiv.org/abs/2205.14756)
* [Source Model Implementation](https://github.com/CVHub520/efficientvit)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/ControlNet-Canny
|
qualcomm
| 2025-08-15T04:06:23Z | 0 | 0 |
pytorch
|
[
"pytorch",
"generative_ai",
"android",
"unconditional-image-generation",
"arxiv:2302.05543",
"license:other",
"region:us"
] |
unconditional-image-generation
| 2025-07-16T19:23:30Z |
---
library_name: pytorch
license: other
tags:
- generative_ai
- android
pipeline_tag: unconditional-image-generation
---

# ControlNet-Canny: Optimized for Mobile Deployment
## Generating visual art from a text prompt and a guiding input image
On-device, high-resolution image synthesis from text and image prompts. ControlNet guides Stable Diffusion with the provided input image to generate accurate images from the given prompt.
This model is an implementation of ControlNet-Canny found [here](https://github.com/lllyasviel/ControlNet).
This repository provides scripts to run ControlNet-Canny on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/controlnet_canny).
### Model Details
- **Model Type:** Image generation
- **Model Stats:**
- Input: Text prompt and input image as a reference
- Conditioning Input: Canny-Edge
- Text Encoder Number of parameters: 340M
- UNet Number of parameters: 865M
- VAE Decoder Number of parameters: 83M
- ControlNet Number of parameters: 361M
- Model size: 1.4GB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| text_encoder | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 5.37 ms | 0 - 3 MB | NPU | Use Export Script |
| text_encoder | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 5.903 ms | 0 - 10 MB | NPU | Use Export Script |
| text_encoder | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 5.395 ms | 0 - 2 MB | NPU | Use Export Script |
| text_encoder | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 5.412 ms | 0 - 2 MB | NPU | Use Export Script |
| text_encoder | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 5.903 ms | 0 - 10 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 5.432 ms | 0 - 3 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 5.842 ms | 0 - 162 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 3.872 ms | 0 - 18 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 4.092 ms | 0 - 14 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 3.481 ms | 0 - 14 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 3.833 ms | 0 - 14 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 5.792 ms | 1 - 1 MB | NPU | Use Export Script |
| text_encoder | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 5.958 ms | 157 - 157 MB | NPU | Use Export Script |
| unet | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 110.879 ms | 13 - 15 MB | NPU | Use Export Script |
| unet | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 107.956 ms | 6 - 13 MB | NPU | Use Export Script |
| unet | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 116.595 ms | 13 - 15 MB | NPU | Use Export Script |
| unet | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 115.724 ms | 13 - 16 MB | NPU | Use Export Script |
| unet | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 107.956 ms | 6 - 13 MB | NPU | Use Export Script |
| unet | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 117.156 ms | 13 - 16 MB | NPU | Use Export Script |
| unet | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 116.814 ms | 0 - 883 MB | NPU | Use Export Script |
| unet | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 81.085 ms | 13 - 31 MB | NPU | Use Export Script |
| unet | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 83.313 ms | 13 - 34 MB | NPU | Use Export Script |
| unet | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 70.612 ms | 13 - 27 MB | NPU | Use Export Script |
| unet | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 70.96 ms | 13 - 28 MB | NPU | Use Export Script |
| unet | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 116.726 ms | 13 - 13 MB | NPU | Use Export Script |
| unet | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 117.905 ms | 829 - 829 MB | NPU | Use Export Script |
| vae | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 268.758 ms | 0 - 3 MB | NPU | Use Export Script |
| vae | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 248.983 ms | 0 - 10 MB | NPU | Use Export Script |
| vae | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 272.989 ms | 0 - 2 MB | NPU | Use Export Script |
| vae | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 284.628 ms | 0 - 2 MB | NPU | Use Export Script |
| vae | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 248.983 ms | 0 - 10 MB | NPU | Use Export Script |
| vae | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 270.831 ms | 0 - 3 MB | NPU | Use Export Script |
| vae | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 276.425 ms | 0 - 66 MB | NPU | Use Export Script |
| vae | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 205.993 ms | 0 - 18 MB | NPU | Use Export Script |
| vae | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 204.277 ms | 3 - 22 MB | NPU | Use Export Script |
| vae | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 194.607 ms | 0 - 14 MB | NPU | Use Export Script |
| vae | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 194.151 ms | 3 - 17 MB | NPU | Use Export Script |
| vae | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 266.935 ms | 0 - 0 MB | NPU | Use Export Script |
| vae | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 266.572 ms | 63 - 63 MB | NPU | Use Export Script |
| controlnet | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_CONTEXT_BINARY | 83.197 ms | 2 - 4 MB | NPU | Use Export Script |
| controlnet | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_CONTEXT_BINARY | 81.755 ms | 2 - 11 MB | NPU | Use Export Script |
| controlnet | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_CONTEXT_BINARY | 83.451 ms | 2 - 5 MB | NPU | Use Export Script |
| controlnet | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_CONTEXT_BINARY | 83.565 ms | 2 - 4 MB | NPU | Use Export Script |
| controlnet | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_CONTEXT_BINARY | 81.755 ms | 2 - 11 MB | NPU | Use Export Script |
| controlnet | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_CONTEXT_BINARY | 83.39 ms | 2 - 5 MB | NPU | Use Export Script |
| controlnet | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | PRECOMPILED_QNN_ONNX | 85.359 ms | 0 - 385 MB | NPU | Use Export Script |
| controlnet | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_CONTEXT_BINARY | 58.723 ms | 2 - 21 MB | NPU | Use Export Script |
| controlnet | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | PRECOMPILED_QNN_ONNX | 59.616 ms | 32 - 52 MB | NPU | Use Export Script |
| controlnet | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 56.385 ms | 2 - 16 MB | NPU | Use Export Script |
| controlnet | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | PRECOMPILED_QNN_ONNX | 57.177 ms | 33 - 45 MB | NPU | Use Export Script |
| controlnet | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 85.054 ms | 2 - 2 MB | NPU | Use Export Script |
| controlnet | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | PRECOMPILED_QNN_ONNX | 80.095 ms | 351 - 351 MB | NPU | Use Export Script |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[controlnet-canny]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.controlnet_canny.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.controlnet_canny.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.controlnet_canny.export
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on ControlNet-Canny's performance across various devices [here](https://aihub.qualcomm.com/models/controlnet_canny).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of ControlNet-Canny can be found
[here](https://github.com/lllyasviel/ControlNet/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543)
* [Source Model Implementation](https://github.com/lllyasviel/ControlNet)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
lakelee/RLB_MLP_vv2.20250815.12
|
lakelee
| 2025-08-15T03:56:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mlp_swiglu",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T03:24:53Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: RLB_MLP_vv2.20250815.12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RLB_MLP_vv2.20250815.12
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mekpro/gemma3-1b-translator
|
mekpro
| 2025-08-15T03:49:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-pt",
"base_model:finetune:unsloth/gemma-3-1b-pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T03:48:08Z |
---
base_model: unsloth/gemma-3-1b-pt
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mekpro
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-pt
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sayantan-95/ConvVAE1024_Condensed1500_128_k3
|
Sayantan-95
| 2025-08-15T03:41:42Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-15T03:24:38Z |
---
license: apache-2.0
---
|
gaianet/gemma-3-270m-it-GGUF
|
gaianet
| 2025-08-15T03:38:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma3_text",
"text-generation",
"base_model:google/gemma-3-270m-it",
"base_model:quantized:google/gemma-3-270m-it",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-15T03:24:32Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
license: gemma
model_creator: Google
model_name: gemma-3-270m-it
quantized_by: Second State Inc.
pipeline_tag: text-generation
---
# gemma-3-270m-it-GGUF
## Original Model
[google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it)
## Run with Gaianet
**Prompt template:**
prompt template: `gemma-3`
**Context size:**
chat_ctx_size: `128000`
**Run with GaiaNet:**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b6138*
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755227225
|
0xaoyama
| 2025-08-15T03:07:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T03:07:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
synturk/TerapAI_v1.0
|
synturk
| 2025-08-15T02:58:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"text-generation",
"base_model:adapter:unsloth/phi-4-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-15T02:53:33Z |
---
base_model: unsloth/phi-4-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/phi-4-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
DYJG-research/Baseline
|
DYJG-research
| 2025-08-15T02:43:07Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T17:33:36Z |
---
license: apache-2.0
---
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755223971
|
lisaozill03
| 2025-08-15T02:37:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T02:37:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/872651
|
crystalline7
| 2025-08-15T01:53:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-15T01:53:30Z |
[View on Civ Archive](https://civitaiarchive.com/models/244431?modelVersionId=966107)
|
SakuraLLM/Sakura-GalTransl-7B-v3.7
|
SakuraLLM
| 2025-08-15T01:42:29Z | 6,557 | 69 | null |
[
"gguf",
"zh",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-22T03:04:31Z |
---
license: cc-by-nc-sa-4.0
language:
- zh
---
Sakura-GalTransl is built jointly by sakuraumi and xd2333 and is specifically optimized for visual novel (Galgame) translation. The model has 7B parameters and supports Japanese-to-Simplified-Chinese translation (jp2zh-cn).
**Sakura-GalTransl inherits the cc-by-nc-sa 4.0 license of the Sakura models. Commercial use is prohibited, including offering paid translation APIs, producing patches that require any form of payment to obtain, and commercial translation work.**
### Features
* Specifically optimized for visual novel (Galgame) translation. Preserves in-line line breaks, control codes, ruby annotations, and similar symbols in visual novel scripts well.
* Balances hardware requirements, translation quality, and stability. The model runs on mainstream gaming GPUs (≥6 GB of free VRAM) or on a MacBook and delivers translation quality and stability that are highly usable overall.
* Works with the [GalTransl](https://github.com/xd2333/GalTransl) translation tool for building translation patches.
* Works with the [lunatranslator](https://docs.lunatranslator.org/zh/) translator for hooking games and translating them on the fly.
### Changelog
**2025.08 - v3.7: Improved overall accuracy and fluency through improved reinforcement learning, and fixed some known control-symbol preservation issues. Note: the version without a suffix (Sakura-Galtransl-7B-v3.7.gguf) is quantized at Q6_K.**
2025.05 - v3.5: Strengthened literary quality.
2025.03 - v3.0: Based on Sakura-7B-Qwen2.5-v1.0 and reinforced with GRPO; translation quality is significantly better than the previous generation of GalTransl models.
2024.10 - v2.6: Improved stability over 2.5.
2024.09 - v2.5: Suppressed some known issues; the writing style is more refined than v2.
2024.08 - v2.0: Continued iteration to improve quality.
2024.06 - v1.5: Polished the overall writing style.
2024.05 - v1.0: Initial release.
### Quick deployment
* Model download: open the Files and versions tab. With more than 6 GB of VRAM, download Sakura-Galtransl-7B-v3.7.gguf; with 6 GB of VRAM, download Sakura-Galtransl-7B-v3.7-IQ4_XS.gguf. If you have network issues, you can download from the mirror site [https://hf-mirror.com/](https://hf-mirror.com/SakuraLLM/Sakura-GalTransl-7B-v3.7).
* Model deployment: on Windows we recommend deploying with [Sakura_Launcher_GUI](https://github.com/PiDanShouRouZhouXD/Sakura_Launcher_GUI); download it from the releases page.
* On Mac you can use [run_Sakura_any.zip](https://huggingface.co/SakuraLLM/Sakura-GalTransl-7B-v3/blob/main/run_Sakura_any.zip), a simplified deployment bundle that supports Win/Mac/Linux and NVIDIA/AMD GPUs as well as Apple silicon:
  1. After unzipping, put the model into the llm2run folder.
  2. Win: double-click run_Sakura_win.bat and select the model.
     Mac: first install Xcode from the App Store, then open a terminal, change to the directory containing run_Sakura.exe, and run `chmod +x run_Sakura.exe llamafile.exe & ./run_Sakura.exe`.
     Linux: using the GPU requires the CUDA SDK or HIP SDK; then change to the directory containing run_Sakura.exe and run `chmod +x run_Sakura.exe llamafile.exe & ./run_Sakura.exe`.
  3. Use 1 thread with 6 GB of VRAM; with 8 GB or more you can set 4-10 threads.
* If startup fails, port 8080 may already be in use; try [finding the program that occupies the port](https://www.runoob.com/w3cnote/windows-finds-port-usage.html).
### Request format
For v3 we recommend temperature 0.3 and top_p 0.8.
v3 request template (the prompts below are sent to the model verbatim):
System prompt:
```
你是一个视觉小说翻译模型,可以通顺地使用给定的术语表以指定的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,注意不要混淆使役态和被动态的主语和宾语,不要擅自添加原文中没有的特殊符号,也不要擅自增加或减少换行。
```
User prompt:
```
[History]
参考以下术语表(可为空,格式为src->dst #备注):
[Glossary]
根据以上术语表的对应关系和备注,结合历史剧情和上下文,将下面的文本从日文翻译成简体中文:
[Input]
```
Here [History] has the format `历史翻译:` followed by the previous round's translation output.
[Glossary] entries have the format src->dst #note.
Both fields are optional and may be left empty.
* Example user input
```
历史翻译:流动泳池里经常会发生这种碰撞。
男方也向四木道歉了。
不过四木的个子很小,要是被大大的充气船或游泳圈卷进去就糟了。
一诚「我们到人少一点的地方去吧。」
我护着四木,将她带到安全的地方。
参考以下术语表(可为空,格式为src->dst #备注):
子供->孩子
柑奈->柑奈 #女角色名
タッキー->小泷 #名称,男生,玩家
一誠->一诚 #主角,男生,昵称
根据以上术语表的对应关系和备注,结合历史剧情和上下文,将下面的文本从日文翻译成简体中文:
この辺なら大丈夫かな?
前後に小さな子供がいるけど浮き輪は着けてないな。
柑奈「……あたしのために移動してくれたんだよね。ありがと」
一誠「あ、いや、まあ……」
柑奈「ふふ、学園とかで見るタッキーは頼りないのに」
```
* Example model output
```
这边应该没问题吧?
虽然前面和后面有小孩子,但他们没有戴游泳圈。
柑奈「……你是为了我特意移动到这边的吧。谢谢」
一诚「啊,不,那个……」
柑奈「呵呵,在学校里看到的小泷明明那么不可靠呢」
```
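As a rough sketch (not part of the official instructions), assuming the local server started by the deployment package exposes an OpenAI-compatible chat endpoint on port 8080, as llama.cpp/llamafile-based servers typically do, a request with the recommended sampling settings might look like this; the endpoint path and payload fields are assumptions:
```python
import requests

SYSTEM_PROMPT = (
    "你是一个视觉小说翻译模型,可以通顺地使用给定的术语表以指定的风格将日文翻译成简体中文,"
    "并联系上下文正确使用人称代词,注意不要混淆使役态和被动态的主语和宾语,"
    "不要擅自添加原文中没有的特殊符号,也不要擅自增加或减少换行。"
)

# User prompt built from the template above, with a one-line glossary and one line to translate.
user_prompt = (
    "参考以下术语表(可为空,格式为src->dst #备注):\n"
    "柑奈->柑奈 #女角色名\n"
    "根据以上术语表的对应关系和备注,结合历史剧情和上下文,将下面的文本从日文翻译成简体中文:\n"
    "柑奈「……あたしのために移動してくれたんだよね。ありがと」"
)

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # endpoint path is an assumption
    json={
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.3,  # recommended for v3
        "top_p": 0.8,        # recommended for v3
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```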
### Known issues
* The GPT dictionary **does not support one-to-many entries ("a/b")**; we will try to improve this in a future version.
* Factual errors/hallucinations may occur when the model has to infer elements omitted from the source text.
* We recommend translating **7-10 sentences** per request.
### Quantization levels
| Quantization level | Notes |
| ---- | ---- |
| IQ4_XS | Small quality loss, smaller footprint, but slower than Q4_K (recommended for 6 GB VRAM) |
| Q4_K | Small quality loss (recommended for 6 GB VRAM) |
| Q5_K | Very small quality loss (recommended for 6/8 GB VRAM) |
| Q6_K | Minimal quality loss (recommended for 8 GB VRAM and above) |
|
phospho-app/Matt1208-ACT_BBOX-Pick_Up_Grape_V1-uz106
|
phospho-app
| 2025-08-15T01:35:47Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"act",
"robotics",
"dataset:Matt1208/Pick_Up_Grape_V1",
"region:us"
] |
robotics
| 2025-08-15T01:34:07Z |
---
datasets: Matt1208/Pick_Up_Grape_V1
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'Green Grape' was detected in 9 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/Matt1208/Pick_Up_Grape_V1/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [Matt1208/Pick_Up_Grape_V1](https://huggingface.co/datasets/Matt1208/Pick_Up_Grape_V1)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
myfi/parser_model_ner_3.53_checkpoint_200_lora
|
myfi
| 2025-08-15T01:30:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T01:29:37Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** myfi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
unitova/blockassist-bc-zealous_sneaky_raven_1755219854
|
unitova
| 2025-08-15T01:29:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T01:29:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Corper/LoRaDina
|
Corper
| 2025-08-15T01:24:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-15T01:17:09Z |
---
license: apache-2.0
---
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1755219218
|
maxibillion1975
| 2025-08-15T01:21:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T01:21:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rayonlabs/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_21c0fa9e21db603d_20250808-5EykKbxi
|
rayonlabs
| 2025-08-15T01:16:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:/cache/models/deepseek-ai--DeepSeek-R1-Distill-Qwen-32B",
"lora",
"transformers",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T01:15:26Z |
---
library_name: peft
tags:
- axolotl
- base_model:adapter:/cache/models/deepseek-ai--DeepSeek-R1-Distill-Qwen-32B
- lora
- transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
pipeline_tag: text-generation
model-index:
- name: app/checkpoints/d5585bb7-5be3-4f75-86c6-7f3aad01331f/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_21c0fa9e21db603d_20250808-5EykKbxi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.0.dev0`
```yaml
adapter: lora
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
bf16: true
chat_template: llama3
cosine_min_lr_ratio: 0.3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- d5585bb7-5be3-4f75-86c6-7f3aad01331f_train_data.json
ds_type: json
format: custom
path: /workspace/axolotl/data
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp: true
debug: null
deepspeed: null
device_map: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
group_by_length: true
hub_model_id: null
hub_private_repo: false
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
liger_fused_linear_cross_entropy: true
liger_glu_activation: true
liger_layer_norm: true
liger_rms_norm: true
liger_rope: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 2186
micro_batch_size: 20
mlflow_experiment_name: /workspace/axolotl/data/d5585bb7-5be3-4f75-86c6-7f3aad01331f_train_data.json
model_card: false
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_bnb_8bit
output_dir: /app/checkpoints/d5585bb7-5be3-4f75-86c6-7f3aad01331f/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_21c0fa9e21db603d_20250808-5EykKbxi
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
push_every_save: true
push_to_hub: true
resume_from_checkpoint: null
rl: null
s2_attention: null
sample_packing: true
save_steps: 100
save_strategy: steps
save_total_limit: 1
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trl: null
trust_remote_code: false
use_liger: true
val_set_size: 0.0
wandb_mode: offline
wandb_name: d5585bb7-5be3-4f75-86c6-7f3aad01331f_benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_21c0fa9e21db603d_20250808-5EykKbxi
wandb_project: Gradients-On-Demand
wandb_run: null
wandb_runid: d5585bb7-5be3-4f75-86c6-7f3aad01331f_benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_21c0fa9e21db603d_20250808-5EykKbxi
warmup_steps: 200
weight_decay: 0
xformers_attention: null
```
</details><br>
# app/checkpoints/d5585bb7-5be3-4f75-86c6-7f3aad01331f/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_21c0fa9e21db603d_20250808-5EykKbxi
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 2186
### Training results
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.2
|
pobiiiiiii/blockassist-bc-ravenous_yapping_ferret_1755219390
|
pobiiiiiii
| 2025-08-15T00:57:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous yapping ferret",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T00:56:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous yapping ferret
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755217226
|
elmenbillion
| 2025-08-15T00:46:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T00:46:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
razor534/blockassist-bc-lazy_extinct_termite_1755218316
|
razor534
| 2025-08-15T00:40:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy extinct termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T00:40:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy extinct termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
atharvapusalkar/ocv9_flow
|
atharvapusalkar
| 2025-08-15T00:39:34Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"flow",
"robotics",
"dataset:atharvapusalkar/eccles_overflow_correction_v9_pruned_heavy",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-15T00:38:24Z |
---
datasets: atharvapusalkar/eccles_overflow_correction_v9_pruned_heavy
library_name: lerobot
license: apache-2.0
model_name: flow
pipeline_tag: robotics
tags:
- lerobot
- flow
- robotics
---
# Model Card for flow
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
daco123223223/ana-v1
|
daco123223223
| 2025-08-15T00:24:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-15T00:21:46Z |
---
license: apache-2.0
---
|
mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF
|
mradermacher
| 2025-08-15T00:00:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:netcat420/Kayla",
"base_model:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA",
"base_model:quantized:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-14T14:52:30Z |
---
base_model: netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA
datasets:
- netcat420/Kayla
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-KAYLA.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ymoslem/aya-expanse-8b-eng-arz-16layers
|
ymoslem
| 2025-08-14T23:57:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T23:56:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mauroicardi43/blockassist-bc-giant_agile_crab_1755215440
|
Mauroicardi43
| 2025-08-14T23:51:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"giant agile crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T23:51:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- giant agile crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dgambettaphd/M_llm3_run2_gen10_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-14T23:44:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T23:43:56Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bisher/atomic-brook-16_lora
|
Bisher
| 2025-08-14T23:37:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"whisper",
"trl",
"en",
"base_model:unsloth/whisper-large-v3-turbo",
"base_model:finetune:unsloth/whisper-large-v3-turbo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T23:37:50Z |
---
base_model: unsloth/whisper-large-v3-turbo
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Bisher
- **License:** apache-2.0
- **Finetuned from model :** unsloth/whisper-large-v3-turbo
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
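A minimal, hedged sketch of how these adapter weights could be applied on top of the base checkpoint with 🤗 PEFT (assuming the repository contains standard PEFT adapter files; this is not an official usage example):

```python
# Sketch only: load the Whisper base model and attach the LoRA adapter with PEFT.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("unsloth/whisper-large-v3-turbo")
model = PeftModel.from_pretrained(base, "Bisher/atomic-brook-16_lora")  # assumes PEFT adapter files at repo root
processor = WhisperProcessor.from_pretrained("unsloth/whisper-large-v3-turbo")

# Transcription then follows the usual Whisper flow: processor(audio, ...) -> model.generate(...)
```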
|
Mauroicardi43/blockassist-bc-giant_agile_crab_1755214311
|
Mauroicardi43
| 2025-08-14T23:32:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"giant agile crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T23:32:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- giant agile crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_pnas_layer_10_3_all_37_0.001_1280_1
|
winnieyangwannan
| 2025-08-14T23:31:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T23:30:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755213141
|
Sayemahsjn
| 2025-08-14T23:31:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T23:31:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mauroicardi43/blockassist-bc-giant_agile_crab_1755212629
|
Mauroicardi43
| 2025-08-14T23:04:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"giant agile crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T23:04:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- giant agile crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755211429
|
Sayemahsjn
| 2025-08-14T23:03:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T23:02:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
utkububa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_barky_goose
|
utkububa
| 2025-08-14T22:54:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am grassy_barky_goose",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T22:54:09Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am grassy_barky_goose
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
silvercrow17/afroModel
|
silvercrow17
| 2025-08-14T22:51:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-14T22:21:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jesseFro
---
# Afromodel
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jesseFro` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "jesseFro",
"lora_weights": "https://huggingface.co/silvercrow17/afroModel/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('silvercrow17/afroModel', weight_name='lora.safetensors')
image = pipeline('jesseFro').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/silvercrow17/afroModel/discussions) to add images that show off what you’ve made with this LoRA.
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755210155
|
elmenbillion
| 2025-08-14T22:48:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T22:48:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aumoai/aumogpt-Llama3.3-70B-Instruct-lora
|
aumoai
| 2025-08-14T22:37:28Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-12T19:11:27Z |
```yaml
model:
model_name: "meta-llama/Llama-3.3-70B-Instruct"
model_max_length: 4096
torch_dtype_str: "bfloat16"
attn_implementation: "flash_attention_2" #"sdpa"
load_pretrained_weights: True
trust_remote_code: True
data:
train:
datasets:
# - dataset_name: "text_sft"
# dataset_path: "datasets/aumo_dataset_test.json"
# shuffle: True
# seed: 42
- dataset_name: "text_sft"
dataset_path: "datasets/aumogpt_llama70b.json"
shuffle: True
seed: 42
# - dataset_name: "text_sft"
# dataset_path: "datasets/xp3_qwen_2000.json"
# shuffle: True
# seed: 42
# - dataset_name: "text_sft"
# dataset_path: "datasets/aumogpt_train.json"
# shuffle: True
# seed: 42
# mixture_strategy: "all_exhausted" # Strategy for mixing datasets
# seed: 123456789426465
validation:
datasets:
- dataset_name: "text_sft"
dataset_path: "datasets/aumo_dataset_test.json"
# split: "validation"
# sample_count: 10
training:
trainer_type: "TRL_SFT"
use_peft: True
save_steps: 200
num_train_epochs: 2
per_device_train_batch_size: 2
per_device_eval_batch_size: 2
gradient_accumulation_steps: 8
max_grad_norm: null
enable_gradient_checkpointing: True
gradient_checkpointing_kwargs:
use_reentrant: False
ddp_find_unused_parameters: False
optimizer: "adamw_torch" # "adamw_torch" #paged_adamw_8bit
learning_rate: 5.0e-4
warmup_steps: 10
weight_decay: 0.01
compile: False
dataloader_num_workers: 8
dataloader_prefetch_factor: 4
logging_steps: 10
log_model_summary: False
empty_device_cache_steps: 50
output_dir: "results/oumi/llama70b_aumogpt.lora"
include_performance_metrics: True
enable_wandb: True
eval_strategy: "steps" # When to evaluate ("no", "steps", "epoch")
eval_steps: 25
peft:
q_lora: False
lora_r: 64
lora_alpha: 32
lora_dropout: 0.2
lora_target_modules:
- "q_proj"
- "k_proj"
- "v_proj"
- "o_proj"
- "gate_proj"
- "down_proj"
- "up_proj"
fsdp:
enable_fsdp: True
sharding_strategy: FULL_SHARD
auto_wrap_policy: TRANSFORMER_BASED_WRAP
transformer_layer_cls: "LlamaDecoderLayer"
  forward_prefetch: true
```
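The YAML above is the training configuration; the repository itself holds the resulting LoRA adapter. A hedged sketch of loading that adapter onto the base model with 🤗 PEFT (assuming standard adapter files are stored at the repo root):

```python
# Sketch only: apply the LoRA adapter from this repo to the Llama 3.3 70B Instruct base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`; a 70B base needs multiple GPUs or offloading
)
model = PeftModel.from_pretrained(base, "aumoai/aumogpt-Llama3.3-70B-Instruct-lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
```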
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755209449
|
Sayemahsjn
| 2025-08-14T22:29:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T22:29:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
otmangi/Offensive_Darija_MorRoBERTa
|
otmangi
| 2025-08-14T22:19:41Z | 0 | 0 | null |
[
"safetensors",
"roberta",
"license:cc",
"region:us"
] | null | 2025-08-13T22:18:15Z |
---
license: cc
---
# Off_MorRoBERTa
Off_MorRoBERTa is a Transformer-based language model designed to classify comments in the Moroccan dialect as abusive or non-abusive.
## About Off_MorRoBERTa
Off_MorRoBERTa, developed specifically for the Moroccan dialect, provides an effective approach for detecting abusive content. The model is based on MorRoBERTa, which was fine-tuned on a labeled dataset created for hate speech detection in Moroccan dialect texts.
The dataset used for training was built by aggregating and preprocessing data from publicly available sources, including MDMD, OMCD, and MDOLDD.
## Usage
The model weights can be loaded with the 🤗 Transformers library:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("otmangi/Offensive_Darija_MorRoBERTa")
model = AutoModel.from_pretrained("otmangi/Offensive_Darija_MorRoBERTa")
```
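For classification rather than bare feature extraction, here is a hedged sketch using the `text-classification` pipeline (assuming the uploaded checkpoint includes the sequence-classification head used for the abusive / non-abusive labels):

```python
# Sketch only: score a Moroccan-dialect comment as abusive or non-abusive.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="otmangi/Offensive_Darija_MorRoBERTa",
)
print(classifier("comment in Moroccan Darija goes here"))  # e.g. [{'label': ..., 'score': ...}]
```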
## Related Paper
For more details about the dataset, methodology, and evaluation, please refer to our paper:
https://accentsjournals.org/paperinfo.php?journalPaperId=1809
## Contact
For any inquiries, feedback, or requests, please feel free to reach out to:
[email protected]
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755207450
|
indoempatnol
| 2025-08-14T22:05:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T22:05:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
StanfordSCALE/tutor_copilot_moments_6
|
StanfordSCALE
| 2025-08-14T22:02:20Z | 5 | 0 | null |
[
"safetensors",
"roberta",
"region:us"
] | null | 2025-04-02T08:28:41Z |
## Message Structure
Tutor CoPilot models are trained on a message structure of 10 context messages followed by one target message with the special tokens `[PRETEXT]` and `[TEXT]` used to demarcate the context and target messages. Messages are formatted as `{speaker}: {message}`, where the speaker is one of `tutor` or `student`, and the message is a lowercased, anonymized version of the message. Context utterances are separated by newlines. Names are anonymized with [Edu-ConvoKit](https://edu-convokit.readthedocs.io/en/latest/preprocessing.html#edu_convokit.preprocessors.TextPreprocessor.anonymize_known_names), replacing student and tutor names with `[student]` and `[tutor]`, respectively. Models are trained on text with the structure
```
"[PRETEXT] {context} [TEXT] {target}"
```
Tutor CoPilot models are only trained with tutor utterances as targets. A synthetic example of this structure, without the full 10 context utterances, is below.
```
[PRETEXT] tutor: hello, [student], happy to work with you today.
student: hi
tutor: today we will work on the topic "adding numbers"
...
student: the answer is 2. [TEXT] tutor: that's correct! 2 points.
```
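A hedged sketch (not from the original card) of assembling an input string in this format from already-anonymized utterances; the helper name and the ten-message window are illustrative:

```python
# Sketch only: build the "[PRETEXT] ... [TEXT] ..." input expected by the Tutor CoPilot models.
def build_input(context, target_speaker, target_message, window=10):
    """context: list of (speaker, message) pairs, already lowercased and anonymized."""
    recent = context[-window:]  # keep only the last 10 context messages
    pretext = "\n".join(f"{speaker}: {message}" for speaker, message in recent)
    target = f"{target_speaker}: {target_message}"
    return f"[PRETEXT] {pretext} [TEXT] {target}"

example = build_input(
    [("tutor", "hello, [student], happy to work with you today."), ("student", "hi")],
    "tutor",
    'today we will work on the topic "adding numbers"',
)
```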
|
schonsense/VAR_stock_GGUF
|
schonsense
| 2025-08-14T22:02:13Z | 0 | 0 | null |
[
"gguf",
"base_model:schonsense/VAR_stock",
"base_model:quantized:schonsense/VAR_stock",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T20:26:30Z |
---
base_model:
- schonsense/VAR_stock
---
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: schonsense/ll33_70B_VAR3_r256_v2
- model: schonsense/ll33_70B_VAR3_r256
- model: schonsense/ll33_KM_70B
- model: schonsense/ll3_3_70B_r128_VAR2
base_model: meta-llama/Llama-3.3-70B-Instruct
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: union
```
|
StanfordSCALE/tutor_copilot_moments_4
|
StanfordSCALE
| 2025-08-14T22:02:00Z | 5 | 0 | null |
[
"safetensors",
"roberta",
"region:us"
] | null | 2025-04-02T08:28:23Z |
## Message Structure
Tutor CoPilot models are trained on a message structure of 10 context messages followed by one target message with the special tokens `[PRETEXT]` and `[TEXT]` used to demarcate the context and target messages. Messages are formatted as `{speaker}: {message}`, where the speaker is one of `tutor` or `student`, and the message is a lowercased, anonymized version of the message. Context utterances are separated by newlines. Names are anonymized with [Edu-ConvoKit](https://edu-convokit.readthedocs.io/en/latest/preprocessing.html#edu_convokit.preprocessors.TextPreprocessor.anonymize_known_names), replacing student and tutor names with `[student]` and `[tutor]`, respectively. Models are trained on text with the structure
```
"[PRETEXT] {context} [TEXT] {target}"
```
Tutor CoPilot models are only trained with tutor utterances as targets. A synthetic example of this structure, without the full 10 context utterances, is below.
```
[PRETEXT] tutor: hello, [student], happy to work with you today.
student: hi
tutor: today we will work on the topic "adding numbers"
...
student: the answer is 2. [TEXT] tutor: that's correct! 2 points.
```
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts-down_pnas_layer_10_3_all_37_0.001_10240_1
|
winnieyangwannan
| 2025-08-14T22:01:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T22:00:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts-down_pnas_layer_10_4_all_37_0.001_9600_1
|
winnieyangwannan
| 2025-08-14T22:00:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T21:59:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AAAAnsah/Llama-3.2-1B_RFA_theta_0.9
|
AAAAnsah
| 2025-08-14T21:57:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T21:56:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AAAAnsah/Llama-3.2-1B_RFA_theta_0.8
|
AAAAnsah
| 2025-08-14T21:56:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T21:56:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts-down_pnas_layer_10_3_all_37_0.001_1280_1
|
winnieyangwannan
| 2025-08-14T21:48:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T20:23:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
darknoon/svg-stack-filtered-sft-qwen2.5-vl-7b-trl-10k
|
darknoon
| 2025-08-14T21:46:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-14T21:42:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahmetsinan/htmlmining1000Rows
|
ahmetsinan
| 2025-08-14T21:45:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T21:43:00Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
osieosie/tulu2-7b_math500_sft_m2.1_e3_lr2e-5
|
osieosie
| 2025-08-14T21:44:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:allenai/tulu-2-7b",
"base_model:finetune:allenai/tulu-2-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T21:38:33Z |
---
base_model: allenai/tulu-2-7b
library_name: transformers
model_name: tulu-2-7b_20250811_math500-sft-m2.1-e3-lr2e-5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for tulu-2-7b_20250811_math500-sft-m2.1-e3-lr2e-5
This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="osieosie/tulu2-7b_math500_sft_m2.1_e3_lr2e-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/osieosie/huggingface/runs/tppoh29z)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mohit937/llama31-8b-sft-grpo-alpaca-fp16
|
mohit937
| 2025-08-14T21:40:24Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T21:34:20Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mohit937
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AminuPeril/blockassist-bc-ravenous_leggy_caribou_1755207525
|
AminuPeril
| 2025-08-14T21:39:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous leggy caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T21:39:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous leggy caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MajorJalud/blockassist-bc-fast_bristly_sardine_1755207058
|
MajorJalud
| 2025-08-14T21:32:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast bristly sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T21:31:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast bristly sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Clip-Sophie-Rain-Spiderma-n-V-iral-Vide-o/New.full.videos.Sophie.Rain.Spiderman.Viral.Video.original
|
Clip-Sophie-Rain-Spiderma-n-V-iral-Vide-o
| 2025-08-14T21:08:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-14T21:07:56Z |
|
bartowski/internlm_Intern-S1-GGUF
|
bartowski
| 2025-08-14T20:56:54Z | 0 | 0 | null |
[
"gguf",
"image-text-to-text",
"base_model:internlm/Intern-S1",
"base_model:quantized:internlm/Intern-S1",
"region:us"
] |
image-text-to-text
| 2025-08-12T21:09:28Z |
---
quantized_by: bartowski
pipeline_tag: image-text-to-text
base_model: internlm/Intern-S1
base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of Intern-S1 by internlm
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b6139">b6139</a> for quantization.
Original model: https://huggingface.co/internlm/Intern-S1
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) combined with a subset of combined_all_small.parquet from Ed Addario [here](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/combined_all_small.parquet)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
No prompt format found, check original model page
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Intern-S1-Q8_0.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q8_0) | Q8_0 | 249.95GB | true | Extremely high quality, generally unneeded but max available quant. |
| [Intern-S1-Q6_K.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q6_K) | Q6_K | 193.07GB | true | Very high quality, near perfect, *recommended*. |
| [Intern-S1-Q5_K_M.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q5_K_M) | Q5_K_M | 166.90GB | true | High quality, *recommended*. |
| [Intern-S1-Q5_K_S.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q5_K_S) | Q5_K_S | 161.96GB | true | High quality, *recommended*. |
| [Intern-S1-Q4_1.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q4_1) | Q4_1 | 147.32GB | true | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Intern-S1-Q4_K_M.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q4_K_M) | Q4_K_M | 142.65GB | true | Good quality, default size for most use cases, *recommended*. |
| [Intern-S1-Q4_K_S.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q4_K_S) | Q4_K_S | 137.71GB | true | Slightly lower quality with more space savings, *recommended*. |
| [Intern-S1-Q4_0.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q4_0) | Q4_0 | 135.00GB | true | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Intern-S1-IQ4_NL.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-IQ4_NL) | IQ4_NL | 133.10GB | true | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Intern-S1-IQ4_XS.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-IQ4_XS) | IQ4_XS | 125.89GB | true | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Intern-S1-Q3_K_XL.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q3_K_XL) | Q3_K_XL | 111.73GB | true | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Intern-S1-Q3_K_L.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q3_K_L) | Q3_K_L | 111.18GB | true | Lower quality but usable, good for low RAM availability. |
| [Intern-S1-Q3_K_M.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q3_K_M) | Q3_K_M | 107.34GB | true | Low quality. |
| [Intern-S1-IQ3_M.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-IQ3_M) | IQ3_M | 107.34GB | true | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Intern-S1-Q3_K_S.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q3_K_S) | Q3_K_S | 102.39GB | true | Low quality, not recommended. |
| [Intern-S1-IQ3_XS.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-IQ3_XS) | IQ3_XS | 96.91GB | true | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Intern-S1-IQ3_XXS.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-IQ3_XXS) | IQ3_XXS | 93.08GB | true | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Intern-S1-Q2_K_L.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q2_K_L) | Q2_K_L | 83.34GB | true | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Intern-S1-Q2_K.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-Q2_K) | Q2_K | 82.73GB | true | Very low quality but surprisingly usable. |
| [Intern-S1-IQ2_S.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-IQ2_S) | IQ2_S | 66.08GB | true | Low quality, uses SOTA techniques to be usable. |
| [Intern-S1-IQ2_XS.gguf](https://huggingface.co/bartowski/internlm_Intern-S1-GGUF/tree/main/internlm_Intern-S1-IQ2_XS) | IQ2_XS | 65.62GB | true | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/internlm_Intern-S1-GGUF --include "internlm_Intern-S1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/internlm_Intern-S1-GGUF --include "internlm_Intern-S1-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (internlm_Intern-S1-Q8_0) or download them all in place (./)
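If you prefer Python to the CLI, the same downloads can be done with the standard `huggingface_hub` helpers (a minimal sketch mirroring the CLI examples above):
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Single file (mirrors the first CLI example above)
hf_hub_download(
    repo_id="bartowski/internlm_Intern-S1-GGUF",
    filename="internlm_Intern-S1-Q4_K_M.gguf",
    local_dir="./",
)

# Split quant: fetch every shard in the Q8_0 folder (mirrors the second CLI example)
snapshot_download(
    repo_id="bartowski/internlm_Intern-S1-GGUF",
    allow_patterns=["internlm_Intern-S1-Q8_0/*"],
    local_dir="./",
)
```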
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights. Details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
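As a rough illustration of the sizing rule above (pick the largest quant that leaves a GB or two of headroom), here is a toy helper; the sizes come from the file table earlier in this card, and the function itself is only a sketch, not part of any library:
```python
# Toy helper: pick the largest listed quant that leaves ~2 GB of headroom.
QUANT_SIZES_GB = {  # sizes (GB) taken from the file table above
    "Q6_K": 193.07, "Q5_K_M": 166.90, "Q4_K_M": 142.65,
    "IQ4_XS": 125.89, "Q3_K_M": 107.34, "IQ3_XXS": 93.08,
    "Q2_K": 82.73, "IQ2_XS": 65.62,
}

def pick_quant(available_gb: float, headroom_gb: float = 2.0):
    """Return the biggest listed quant that fits into available_gb minus headroom."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= available_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(96))  # e.g. 96 GB of RAM + VRAM combined -> 'IQ3_XXS'
```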
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
JW17/Q25-3B-It-A0.75-C0.25-merged
|
JW17
| 2025-08-14T20:49:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T20:47:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amaytel/whisper-small-hi
|
amaytel
| 2025-08-14T20:47:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-14T20:26:08Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
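A minimal inference sketch (assuming the standard `transformers` ASR pipeline; `audio.wav` is a placeholder input file):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and transcribe a local audio file.
asr = pipeline("automatic-speech-recognition", model="amaytel/whisper-small-hi")
print(asr("audio.wav")["text"])
```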
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.4
|
FastFlowLM/Qwen3-0.6B-NPU2
|
FastFlowLM
| 2025-08-14T20:39:23Z | 58 | 0 |
transformers
|
[
"transformers",
"qwen3",
"text-generation",
"qwen",
"unsloth",
"conversational",
"en",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-21T22:31:16Z |
---
base_model: Qwen/Qwen3-0.6B
language:
- en
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
license: apache-2.0
tags:
- qwen3
- qwen
- unsloth
- transformers
---
# Qwen3-0.6B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement in its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-0.6B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 0.6B
- Number of Parameters (Non-Embedding): 0.44B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-0.6B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- vLLM:
```shell
vllm serve Qwen/Qwen3-0.6B --enable-reasoning --reasoning-parser deepseek_r1
```
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-0.6B --reasoning-parser deepseek-r1
```
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
> Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-0.6B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> **Note**
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-0.6B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. A short code sketch applying these settings follows this list.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
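A minimal sketch applying the thinking-mode settings from item 1 above, reusing `model` and `model_inputs` from the Quickstart and assuming a `transformers` version recent enough to support `min_p` in `generate`:
```python
# Recommended thinking-mode sampling; greedy decoding is explicitly discouraged above.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```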
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
AAAAnsah/Llama-3.2-1B_ES_theta_1.4
|
AAAAnsah
| 2025-08-14T20:39:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T18:32:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FastFlowLM/Qwen3-1.7B-NPU2
|
FastFlowLM
| 2025-08-14T20:37:48Z | 42 | 0 |
transformers
|
[
"transformers",
"qwen3",
"text-generation",
"qwen",
"unsloth",
"conversational",
"en",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-21T22:31:40Z |
---
base_model: Qwen/Qwen3-1.7B
language:
- en
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
license: apache-2.0
tags:
- qwen3
- qwen
- unsloth
- transformers
---
# Qwen3-1.7B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-1.7B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 1.7B
- Number of Parameters (Non-Embedding): 1.4B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
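If you hit that error, upgrading `transformers` resolves it; a minimal example, assuming a pip-managed environment:
```shell
pip install -U "transformers>=4.51.0"
```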
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-1.7B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- vLLM:
```shell
vllm serve Qwen/Qwen3-1.7B --enable-reasoning --reasoning-parser deepseek_r1
```
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B --reasoning-parser deepseek-r1
```
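Once either server is up, it exposes an OpenAI-compatible API; below is a minimal client sketch using the `openai` Python package. The base URL assumes vLLM's default port 8000 — adjust it (and the served model name) to match your deployment.
```python
from openai import OpenAI

# Point the client at the locally running OpenAI-compatible server (port is an assumption).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-1.7B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```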
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
> Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
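As a concrete illustration, these sampling parameters can be passed straight to `generate` when running the model with `transformers`; this is a minimal sketch that reuses `model`, `tokenizer`, and `model_inputs` from the quickstart above and assumes the prompt was built with `enable_thinking=False`.
```python
# Non-thinking mode: sample with the recommended parameters instead of greedy decoding.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,   # sampling is required for temperature/top_p/top_k to take effect
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)
print(tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True))
```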
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-1.7B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> **Note**
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic capabilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-1.7B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed (see the sketch below).
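A minimal sketch of point 4, assuming the raw assistant response may contain a `<think>...</think>` block that should be dropped before it is appended to the history (the `strip_think` helper is illustrative, not part of the official API):
```python
import re

def strip_think(response: str) -> str:
    """Drop any <think>...</think> block so only the final answer is kept in history."""
    return re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()

history = [{"role": "user", "content": "How many r's in strawberries?"}]
raw_response = "<think>Count the r's one by one...</think>There are three r's in \"strawberries\"."
history.append({"role": "assistant", "content": strip_think(raw_response)})
print(history[-1]["content"])  # There are three r's in "strawberries".
```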
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
hdong0/Qwen-Math-7B-batch-mix-GRPO_deepscaler_acc_seq_end_mask_thin_mu_8
|
hdong0
| 2025-08-14T20:36:49Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T16:18:21Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: Qwen-Math-7B-batch-mix-GRPO_deepscaler_acc_seq_end_mask_thin_mu_8
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-Math-7B-batch-mix-GRPO_deepscaler_acc_seq_end_mask_thin_mu_8
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/Qwen-Math-7B-batch-mix-GRPO_deepscaler_acc_seq_end_mask_thin_mu_8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
atipiqal/bert-italian-cased-civil
|
atipiqal
| 2025-08-14T20:29:19Z | 0 | 0 | null |
[
"safetensors",
"bert",
"it",
"base_model:dbmdz/bert-base-italian-cased",
"base_model:finetune:dbmdz/bert-base-italian-cased",
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T19:46:58Z |
---
license: apache-2.0
language:
- it
base_model:
- dbmdz/bert-base-italian-cased
---
|
VIDEOS-19-Sima-Sarkar-viral-video-Clip/New.full.videos.Sima.Sarkar.Viral.Video.Official.Tutorial
|
VIDEOS-19-Sima-Sarkar-viral-video-Clip
| 2025-08-14T20:25:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-14T20:24:33Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?leaked-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
MajorJalud/blockassist-bc-fast_bristly_sardine_1755202883
|
MajorJalud
| 2025-08-14T20:22:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast bristly sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T20:21:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast bristly sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
harold0693/pruebas
|
harold0693
| 2025-08-14T20:20:01Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T20:20:01Z |
---
license: apache-2.0
---
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755201092
|
elmenbillion
| 2025-08-14T20:19:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T20:18:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dreamwar/HRM-Text1-C4
|
dreamwar
| 2025-08-14T20:17:29Z | 0 | 0 | null |
[
"en",
"dataset:allenai/c4",
"base_model:qingy2024/HRM-Text1-41M",
"base_model:finetune:qingy2024/HRM-Text1-41M",
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T19:26:19Z |
---
license: apache-2.0
datasets:
- allenai/c4
language:
- en
base_model:
- qingy2024/HRM-Text1-41M
---
|
Baharababah/Llama-3.2-1B-Q6_K-GGUF
|
Baharababah
| 2025-08-14T20:17:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T20:16:38Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
license: llama3.2
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: meta-llama/Llama-3.2-1B
---
# Baharababah/Llama-3.2-1B-Q6_K-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B`](https://huggingface.co/meta-llama/Llama-3.2-1B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-1B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Baharababah/Llama-3.2-1B-Q6_K-GGUF --hf-file llama-3.2-1b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Baharababah/Llama-3.2-1B-Q6_K-GGUF --hf-file llama-3.2-1b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Baharababah/Llama-3.2-1B-Q6_K-GGUF --hf-file llama-3.2-1b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Baharababah/Llama-3.2-1B-Q6_K-GGUF --hf-file llama-3.2-1b-q6_k.gguf -c 2048
```
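Alternatively, the same GGUF file can be loaded from Python through the `llama-cpp-python` bindings; this is a minimal sketch and assumes a recent `llama-cpp-python` build where `Llama.from_pretrained` (which pulls the file from the Hub) is available.
```python
from llama_cpp import Llama

# Download the quantized file from the Hub and load it locally.
llm = Llama.from_pretrained(
    repo_id="Baharababah/Llama-3.2-1B-Q6_K-GGUF",
    filename="llama-3.2-1b-q6_k.gguf",
    n_ctx=2048,
)
output = llm("The meaning to life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```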
|
DungND1107/cpo-full-contrastive-fixed-v5_checkpoint_2800
|
DungND1107
| 2025-08-14T20:08:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T20:08:14Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bunbohue/Mazii_NMT_16-44-42_14-08-2025
|
bunbohue
| 2025-08-14T19:55:40Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T16:45:32Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: Mazii_NMT_16-44-42_14-08-2025
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Mazii_NMT_16-44-42_14-08-2025
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bunbohue/Mazii_NMT_16-44-42_14-08-2025", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.3
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yasserrmd/PharmaQA-270M
|
yasserrmd
| 2025-08-14T19:54:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T18:51:44Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
### PharmaQA‑270M
<img src="banner.png" width="800"/>
**PharmaQA‑270M** is a compact, instruction-tuned language model for pharmacology and pharmacy domains. Based on **Gemma3 (270M parameters)**, it was fine-tuned using LoRA and merged into a single model checkpoint for easy deployment. This model is optimized for **educational and research use cases**, especially where compute constraints are present.
---
### Model Details
| Property | Value |
| ------------------ | ---------------------------------------------------------------------------------------- |
| Base Model | Google Gemma3 (270M parameters) |
| Fine-tuning Method | LoRA using [Unsloth](https://github.com/unslothai/unsloth) |
| Dataset Used | 25,000 Q\&A pairs from [MIRIAD-4.4M](https://huggingface.co/datasets/miriad/miriad-4.4M) |
| Epochs | 3 |
| Final Format | Merged (base + LoRA weights) |
| Model Size | 270M |
| License | ODC-BY v1.0 dataset license (non-commercial) |
| Author | [Mohamed Yasser](https://huggingface.co/yasserrmd) |
---
### ⚠️ Caution & Intended Use
* **Do not use this model for real-world medical diagnosis, treatment, or care decisions.**
* The model was trained on MIRIAD Q\&A pairs generated via LLMs from biomedical literature.
* MIRIAD and this model must be used for **educational, research, and academic exploration only**.
* This model inherits all **OpenAI and ODC-BY v1.0** usage limitations associated with the dataset.
---
### Performance Summary
From evaluation on 50 unseen pharma questions:
| Metric | Value |
| ------------------------ | --------------------------------------------------- |
| Average Answer Length | 40.3 words |
| Longest Answer | 95 words |
| Shortest Answer | 12 words |
| Empty / Short Responses | 0 |
| Clinical Accuracy | ✅ Consistent terminology |
| Depth in Short Responses | ⚠️ Limited |
| Best Use Case | Lightweight educational deployment (MCQs, tutoring) |
---
### Sample Inference Code
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "yasserrmd/PharmaQA-270M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
model.eval()
question = "What is the mechanism of action of metformin?"
messages = [{"role": "user", "content": f"Q: {question} A:"}]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
tokenize=True,
return_dict=True
).to(model.device)
if "token_type_ids" in inputs:
del inputs["token_type_ids"]
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=128,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.05
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
answer = response.split("A:")[-1].strip()
print("💊 Question:", question)
print("🧠 Answer:", answer)
```
---
### License
* **Model**: Open for academic and non-commercial use
* **Dataset**: [MIRIAD-4.4M](https://huggingface.co/datasets/miriad/miriad-4.4M) under ODC-BY v1.0
---
### Acknowledgements
* MIRIAD creators for making the dataset openly accessible.
* [Unsloth](https://github.com/unslothai/unsloth) team for enabling fast LoRA tuning on small GPUs.
* Hugging Face and Google for Gemma3 base model.
---
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TheMindExpansionNetwork/M1ndb0t-270m-Float16-Q8_0-GGUF
|
TheMindExpansionNetwork
| 2025-08-14T19:54:20Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:TheMindExpansionNetwork/M1ndb0t-270m-Float16",
"base_model:quantized:TheMindExpansionNetwork/M1ndb0t-270m-Float16",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T19:53:32Z |
---
base_model: TheMindExpansionNetwork/M1ndb0t-270m-Float16
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# TheMindExpansionNetwork/M1ndb0t-270m-Float16-Q8_0-GGUF
This model was converted to GGUF format from [`TheMindExpansionNetwork/M1ndb0t-270m-Float16`](https://huggingface.co/TheMindExpansionNetwork/M1ndb0t-270m-Float16) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheMindExpansionNetwork/M1ndb0t-270m-Float16) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo TheMindExpansionNetwork/M1ndb0t-270m-Float16-Q8_0-GGUF --hf-file m1ndb0t-270m-float16-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo TheMindExpansionNetwork/M1ndb0t-270m-Float16-Q8_0-GGUF --hf-file m1ndb0t-270m-float16-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo TheMindExpansionNetwork/M1ndb0t-270m-Float16-Q8_0-GGUF --hf-file m1ndb0t-270m-float16-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo TheMindExpansionNetwork/M1ndb0t-270m-Float16-Q8_0-GGUF --hf-file m1ndb0t-270m-float16-q8_0.gguf -c 2048
```
|
bruhzair/prototype-0.4x319
|
bruhzair
| 2025-08-14T19:53:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T19:03:51Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x319
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Multi-SLERP](https://goddard.blog/posts/multislerp-wow-what-a-cool-idea) merge method using /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--bruhzair--prototype-0.4x301/snapshots/ba244ff497fd1a57c7954269c360a0f96e89f7a4
* /workspace/cache/models--BruhzWater--Serpents-Tongue-L3.3-70b-0.4a/snapshots/bc1fe5ab4d746afa04d5e591f4fd69c493f0bf68
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--BruhzWater--Serpents-Tongue-L3.3-70b-0.4a/snapshots/bc1fe5ab4d746afa04d5e591f4fd69c493f0bf68
parameters:
weight: [0.5]
- model: /workspace/cache/models--bruhzair--prototype-0.4x301/snapshots/ba244ff497fd1a57c7954269c360a0f96e89f7a4
parameters:
weight: [0.5]
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: multislerp
tokenizer:
source: base
chat_template: llama3
parameters:
normalize_weights: false
eps: 1e-9
pad_to_multiple_of: 8
int8_mask: true
dtype: bfloat16
```
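As a usage note, a configuration like this is typically run with mergekit's CLI; the sketch below assumes the YAML is saved as `config.yaml`, that mergekit is installed, and that the local snapshot paths referenced above actually exist on disk.
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```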
|