| modelId<br>*string, length 5–139* | author<br>*string, length 2–42* | last_modified<br>*timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-15 18:27:52* | downloads<br>*int64, 0 – 223M* | likes<br>*int64, 0 – 11.7k* | library_name<br>*string, 504 classes* | tags<br>*list, length 1 – 4.05k* | pipeline_tag<br>*string, 55 classes* | createdAt<br>*timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-15 18:27:21* | card<br>*string, length 11 – 1.01M* |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/aidc-llm-laos-10k-gemma-3-12b-it-GGUF
|
mradermacher
| 2025-08-15T16:56:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"lo",
"dataset:Phonepadith/laos-long-content",
"base_model:Phonepadith/aidc-llm-laos-10k-gemma-3-12b-it",
"base_model:quantized:Phonepadith/aidc-llm-laos-10k-gemma-3-12b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-15T16:47:14Z |
---
base_model: Phonepadith/aidc-llm-laos-10k-gemma-3-12b-it
datasets:
- Phonepadith/laos-long-content
language:
- lo
library_name: transformers
license: gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static quants of https://huggingface.co/Phonepadith/aidc-llm-laos-10k-gemma-3-12b-it
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#aidc-llm-laos-10k-gemma-3-12b-it-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
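As a hedged sketch (not part of the original card), one common way to run one of these quants locally is `llama-cpp-python`; the file name below assumes the Q4_K_S quant listed under Provided Quants.
```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q4_K_S quant
# from the table below is the file you want; not an official usage example.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/aidc-llm-laos-10k-gemma-3-12b-it-GGUF",
    filename="aidc-llm-laos-10k-gemma-3-12b-it.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)  # load the quantized model
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "ສະບາຍດີ"}],  # a short Lao greeting
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```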
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aidc-llm-laos-10k-gemma-3-12b-it-GGUF/resolve/main/aidc-llm-laos-10k-gemma-3-12b-it.Q2_K.gguf) | Q2_K | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/aidc-llm-laos-10k-gemma-3-12b-it-GGUF/resolve/main/aidc-llm-laos-10k-gemma-3-12b-it.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/aidc-llm-laos-10k-gemma-3-12b-it-GGUF/resolve/main/aidc-llm-laos-10k-gemma-3-12b-it.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aidc-llm-laos-10k-gemma-3-12b-it-GGUF/resolve/main/aidc-llm-laos-10k-gemma-3-12b-it.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
5hadytru/so101_grasp_1_GR00T-N1.5-3B_v2
|
5hadytru
| 2025-08-15T16:05:41Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1_5",
"license:apache-2.0",
"region:us"
] | null | 2025-08-15T15:42:54Z |
---
license: apache-2.0
---
This is the same model, trained for 40k steps instead of 20k steps.
|
koloni/blockassist-bc-deadly_graceful_stingray_1755270307
|
koloni
| 2025-08-15T15:35:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T15:35:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755266171
|
lqpl
| 2025-08-15T13:57:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T13:56:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755263184
|
ihsanridzi
| 2025-08-15T13:31:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T13:31:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755262794
|
indoempatnol
| 2025-08-15T13:28:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T13:28:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
grozatech/sft_output
|
grozatech
| 2025-08-15T13:16:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"module_1",
"trl",
"smol-course",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T11:23:16Z |
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: sft_output
tags:
- generated_from_trainer
- module_1
- trl
- smol-course
- sft
licence: license
---
# Model Card for sft_output
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="grozatech/sft_output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/groza-higher-school-of-economics/huggingface/runs/shxzblrh)
This model was trained with SFT.
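For context, here is a minimal sketch of an SFT run of this kind with TRL; the dataset below is a placeholder, since the card does not state which data was used.
```python
# Hedged sketch of a TRL SFT run; the dataset is a placeholder, not the one
# actually used to train sft_output.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",       # base model from this card
    args=SFTConfig(output_dir="sft_output"),  # training arguments
    train_dataset=dataset,
)
trainer.train()
```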
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
fengpeisheng1/mergekit-slerp-ppehixi
|
fengpeisheng1
| 2025-08-15T13:07:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview",
"base_model:merge:Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview",
"base_model:maywell/Qwen2-7B-Multilingual-RP",
"base_model:merge:maywell/Qwen2-7B-Multilingual-RP",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T12:55:51Z |
---
base_model:
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
- maywell/Qwen2-7B-Multilingual-RP
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview)
* [maywell/Qwen2-7B-Multilingual-RP](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: maywell/Qwen2-7B-Multilingual-RP
layer_range: [0,28]
- model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
layer_range: [0,28]
merge_method: slerp
base_model: maywell/Qwen2-7B-Multilingual-RP
parameters:
t:
- filter: self_attn
value: [0, 0.3, 0.5, 0.7, 1]
- filter: mlp
value: [1, 0.7, 0.5, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
masato-ka/act-gym-soarm-pick-and-place-reward
|
masato-ka
| 2025-08-15T13:07:21Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:masato-ka/gym-soarm-pick-and-place-sample",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-15T13:07:14Z |
---
datasets: masato-ka/gym-soarm-pick-and-place-sample
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
ricodr/blockassist-bc-twitchy_toothy_clam_1755262939
|
ricodr
| 2025-08-15T13:03:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy toothy clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T13:03:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy toothy clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jahyungu/AMD-OLMo-1B-SFT_coqa
|
jahyungu
| 2025-08-15T12:57:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:amd/AMD-OLMo-1B-SFT",
"base_model:finetune:amd/AMD-OLMo-1B-SFT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T12:42:43Z |
---
library_name: transformers
license: apache-2.0
base_model: amd/AMD-OLMo-1B-SFT
tags:
- generated_from_trainer
model-index:
- name: AMD-OLMo-1B-SFT_coqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AMD-OLMo-1B-SFT_coqa
This model is a fine-tuned version of [amd/AMD-OLMo-1B-SFT](https://huggingface.co/amd/AMD-OLMo-1B-SFT) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
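For reference, a hedged sketch of these values expressed as standard `transformers` `TrainingArguments` (an illustration, not the training script that was actually used):
```python
# Sketch only: the hyperparameters listed above mapped onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="AMD-OLMo-1B-SFT_coqa",  # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 2 x 8 = 16 effective train batch size
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=3,
)
```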
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
plantingspace/nl-yezo-ivy-metric-model
|
plantingspace
| 2025-08-15T12:50:56Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-15T12:50:52Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
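For context, a hedged sketch of the `PyTorchModelHubMixin` pattern; the class below is a placeholder, not this repository's actual architecture, and with the matching class definition the same `from_pretrained` call works against the Hub repo id.
```python
# Sketch of the mixin pattern only; MyModel is a placeholder architecture.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, x):
        return self.linear(x)

model = MyModel(hidden_size=16)
model.save_pretrained("my-model")               # writes config + safetensors weights
reloaded = MyModel.from_pretrained("my-model")  # the same call accepts a Hub repo id
```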
|
tera0711/lora
|
tera0711
| 2025-08-15T12:38:13Z | 363 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-07-12T12:26:49Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/image (3).png
text: >-
<lora:guiltyficial_reality-01:0.7>, perfect body, ultra realistic, best
quality, 8k uhd, masterpiece, asian beautiful woman,
parameters:
negative_prompt: >-
low quality, worst quality, jpeg artifacts, unrealistic, cartoon, anime,
drawing, painting, blurry,
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: null
license: creativeml-openrail-m
---
# lora
<Gallery />
## Download model
[Download](/tera0711/lora/tree/main) them in the Files & versions tab.
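A hedged sketch of how a LoRA like this is typically loaded on top of the SD 3.5 Large base model with `diffusers` (assumptions: a single LoRA weight file that `load_lora_weights` can auto-detect, and a prompt adapted from the widget metadata above):
```python
# Sketch only: loading this LoRA with diffusers; depending on the repo's file
# layout, load_lora_weights may need an explicit weight_name argument.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("tera0711/lora")

image = pipe(
    "perfect body, ultra realistic, best quality, 8k uhd, masterpiece, asian beautiful woman",
    negative_prompt="low quality, worst quality, jpeg artifacts, unrealistic, cartoon, anime, drawing, painting, blurry",
    num_inference_steps=28,
).images[0]
image.save("lora_sample.png")
```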
|
SahilCarterr/Qwen-Image-Blockwise-ControlNet-Depth
|
SahilCarterr
| 2025-08-15T12:34:59Z | 0 | 1 | null |
[
"safetensors",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"region:us"
] | null | 2025-08-15T12:14:08Z |
---
frameworks:
- Pytorch
tasks:
- text-to-image-synthesis
#model-type:
## e.g. gpt, phi, llama, chatglm, baichuan, etc.
#- gpt
#domain:
## e.g. nlp, cv, audio, multi-modal
#- nlp
#language:
## list of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
#- cn
#metrics:
## e.g. CIDEr, BLEU, ROUGE, etc.
#- CIDEr
#tags:
## custom tags, e.g. training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, etc.
#- pretrained
#tools:
## e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
#- vllm
base_model:
- Qwen/Qwen-Image
base_model_relation: adapter
---
# Qwen-Image Image Structure Control Model - Depth ControlNet

## Model Introduction
This model is a structure control model for images, trained on top of [Qwen-Image](https://www.modelscope.cn/models/Qwen/Qwen-Image). The architecture is a ControlNet, which steers the structure of the generated image according to a depth map. The training framework is built on [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio), and the dataset used is [BLIP3o](https://modelscope.cn/datasets/BLIP3o/BLIP3o-60k).
## Effect Demonstration
*(Example images omitted: each row of the original table pairs a structure (depth) map with two generated images.)*
## Inference Code
```bash
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```
```python
from diffsynth.pipelines.qwen_image import QwenImagePipeline, ModelConfig, ControlNetInput
from PIL import Image
import torch
from modelscope import dataset_snapshot_download
pipe = QwenImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="transformer/diffusion_pytorch_model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="text_encoder/model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
        ModelConfig(model_id="DiffSynth-Studio/Qwen-Image-Blockwise-ControlNet-Depth", origin_file_pattern="model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="tokenizer/"),
)
dataset_snapshot_download(
    dataset_id="DiffSynth-Studio/example_image_dataset",
    local_dir="./data/example_image_dataset",
    allow_file_pattern="depth/image_1.jpg",
)
controlnet_image = Image.open("data/example_image_dataset/depth/image_1.jpg").resize((1328, 1328))
prompt = "Exquisite portrait of an underwater girl with flowing blue dress and fluttering hair. Transparent light and shadow, surrounded by bubbles. Her face is serene, with exquisite details and dreamy beauty."
image = pipe(
    prompt, seed=0,
    blockwise_controlnet_inputs=[ControlNetInput(image=controlnet_image)],
)
image.save("image.jpg")
```
---
license: apache-2.0
---
|
mradermacher/Useful_Idiot_24B-i1-GGUF
|
mradermacher
| 2025-08-15T12:34:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:GreenerPastures/Useful_Idiot_24B",
"base_model:quantized:GreenerPastures/Useful_Idiot_24B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-15T10:35:06Z |
---
base_model: GreenerPastures/Useful_Idiot_24B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/GreenerPastures/Useful_Idiot_24B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Useful_Idiot_24B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Useful_Idiot_24B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Useful_Idiot_24B-i1-GGUF/resolve/main/Useful_Idiot_24B.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mit-han-lab/dc-ae-turbo-f32c32-sana-1.1
|
mit-han-lab
| 2025-08-15T12:26:54Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-15T12:26:44Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
abdelhalim4/topchef-cooking-chatbot
|
abdelhalim4
| 2025-08-15T12:23:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T12:20:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/QiMing-Gemma-3-4b-GGUF
|
mradermacher
| 2025-08-15T12:17:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma",
"sales",
"unsloth",
"lora",
"logic-tuning",
"strategic-thinking",
"zh",
"en",
"base_model:aifeifei798/QiMing-Gemma-3-4b",
"base_model:adapter:aifeifei798/QiMing-Gemma-3-4b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-15T11:54:43Z |
---
base_model: aifeifei798/QiMing-Gemma-3-4b
language:
- zh
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- gemma
- sales
- unsloth
- lora
- logic-tuning
- strategic-thinking
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/aifeifei798/QiMing-Gemma-3-4b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QiMing-Gemma-3-4b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.7 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.mmproj-f16.gguf) | mmproj-f16 | 1.0 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF/resolve/main/QiMing-Gemma-3-4b.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755258271
|
manusiaperahu2012
| 2025-08-15T12:12:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T12:12:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
obenadak/turkce-morfolojik-analiz-mt0-small
|
obenadak
| 2025-08-15T12:05:30Z | 11 | 0 | null |
[
"tensorboard",
"safetensors",
"mt5",
"turkish",
"morphological-analysis",
"seq2seq",
"mt0",
"text-generation",
"tr",
"dataset:universal_dependencies",
"base_model:bigscience/mt0-small",
"base_model:finetune:bigscience/mt0-small",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T18:59:05Z |
---
license: apache-2.0
language:
- tr
datasets:
- universal_dependencies
tags:
- turkish
- morphological-analysis
- seq2seq
- mt0
base_model:
- bigscience/mt0-small
pipeline_tag: text-generation
---
# Turkish Morphological Analysis Model: turkce-morfolojik-analiz-mt0-small
This model was trained by fine-tuning `bigscience/mt0-small` on the `universal_dependencies` dataset (`tr_boun` subset) to perform morphological analysis of Turkish sentences.
The model takes each word in a sentence and predicts its root, its part of speech, and the affixes it carries. The output format is as follows:
`Kelime -> kök(tür) + ek_1 + ek_2 ...` (word -> root(POS) + affix_1 + affix_2 ...)
## Training Hyperparameters
- **Base Model:** `bigscience/mt0-small`
- **Epochs:** `5`
- **Training Batch Size:** `8`
- **Evaluation Batch Size:** `8`
- **Optimizer:** AdamW (default)
- **Learning Rate:** `5e-05`
- **Dataset:** `universal_dependencies` (tr_boun)
---
## Using the Model
```python
from transformers import pipeline
analiz_cihazi = pipeline("text2text-generation", model="obenadak/turkce-morfolojik-analiz-mt0-small")
cumle = "Gelecek hafta sonu için planların neler?"
sonuc = analiz_cihazi(cumle, max_length=512)
print(sonuc[0]['generated_text'])
# Expected output (example):
# Gelecek -> gel(fiil) + -ecek(gelecek z.) | hafta -> hafta(isim) + (yalın) | sonu -> son(isim) + (3.kişi iyelik) + (yalın) | için -> için(edat) | planların -> plan(isim) + -ler(çoğul) + (2.kişi iyelik) + (yalın) | neler -> ne(zamir) + -ler(çoğul)
```
# Evaluation Results
The model achieved the following results on the validation and test splits of the **tr_boun** dataset.
## Final Validation Results
| Metric | Score |
|---|---|
| Loss | 0.1605 |
| Rouge1 | 17.8207 |
| Rouge2 | 14.2998 |
| RougeL | 17.6289 |
| Root & POS Accuracy | 0.0564 |
| Affix F1-Score | 0.0643 |
| Word Exact Match | 0.0243 |
| Sentence Exact Match | 0.0000 |
| Runtime (s) | 54.1345 |
| Samples / Second | 18.0850 |
| Steps / Second | 2.2720 |
## Test Set Results
| Metric | Score |
|---|---|
| Loss | 0.1581 |
| Rouge1 | 17.9301 |
| Rouge2 | 14.2752 |
| RougeL | 17.7252 |
| Root & POS Accuracy | 0.0550 |
| Affix F1-Score | 0.0620 |
| Word Exact Match | 0.0232 |
| Sentence Exact Match | 0.0000 |
| Runtime (s) | 53.7059 |
| Samples / Second | 18.2290 |
| Steps / Second | 2.2900 |
| Epoch | 5.0000 |
---
## Data Preprocessing
The raw text of the sentence (**text**) is used as the input.
The target (**target**) is a structured string built from the **lemmas**, **upos**, and **feats** columns of the **universal_dependencies** dataset.
Punctuation marks are excluded from the analysis.
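A hedged sketch of the target construction described above; the field values are illustrative, and the exact affix formatting used in training is an assumption based on the example output shown earlier.
```python
# Sketch only: building a "word -> root(POS) + affixes" target string from
# lemma/upos/feats annotations, skipping punctuation as described above.
def build_target(tokens, lemmas, upos, feats):
    parts = []
    for token, lemma, pos, feat in zip(tokens, lemmas, upos, feats):
        if pos == "PUNCT":  # punctuation is excluded from the analysis
            continue
        parts.append(f"{token} -> {lemma}({pos}) + {feat}")
    return " | ".join(parts)

print(build_target(
    ["Kitabı", "okudum", "."],
    ["kitap", "oku", "."],
    ["NOUN", "VERB", "PUNCT"],
    ["Case=Acc", "Tense=Past|Number=Sing", "_"],
))
# Kitabı -> kitap(NOUN) + Case=Acc | okudum -> oku(VERB) + Tense=Past|Number=Sing
```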
|
unitova/blockassist-bc-zealous_sneaky_raven_1755257713
|
unitova
| 2025-08-15T12:01:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T12:01:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sangsang/Llama-3.2-3B-Instruct-cat-preference_r16_corrupted
|
Sangsang
| 2025-08-15T11:57:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T11:45:11Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rafsya427/blockassist-bc-monstrous_bristly_chimpanzee_1755256502
|
rafsya427
| 2025-08-15T11:41:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous bristly chimpanzee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T11:41:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous bristly chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755256373
|
manusiaperahu2012
| 2025-08-15T11:41:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T11:40:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jglowa/prosty-rag
|
jglowa
| 2025-08-15T11:32:15Z | 0 | 0 | null |
[
"llamafile",
"rag",
"text-generation",
"pl",
"base_model:speakleash/Bielik-4.5B-v3.0-Instruct",
"base_model:finetune:speakleash/Bielik-4.5B-v3.0-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-15T03:48:57Z |
---
license: apache-2.0
language:
- pl
base_model:
- speakleash/Bielik-4.5B-v3.0-Instruct
pipeline_tag: text-generation
tags:
- rag
---
# Prosty RAG
Prosty RAG is an open-source AI assistant based on the Polish language model [Bielik-4.5B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-4.5B-v3.0-Instruct) that answers questions from the user's private knowledge base using RAG (Retrieval-Augmented Generation). **The assistant runs fully locally**, as two executable files on Windows/Linux/macOS, using [llamafile](https://llamafile.ai/) technology. The application is portable, does not require a Python environment with a pile of packages (e.g. LangChain, LlamaIndex), automatically detects installed GPU libraries (CUDA/ROCm), and falls back to the CPU when none are present.
How it works (a rough sketch of this flow follows the list):
1. Place your PDF, TXT, and MD (Markdown) knowledge-base files in the `baza` folder,
2. The files are indexed, split into chunks, and embedded into an SQLite vector database,
3. For a given query, the most relevant chunks are retrieved from the database and added to the question's context,
4. The language model generates an answer to the question using the enriched data from the knowledge base.
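Below is a rough Python sketch of that retrieve-then-prompt flow, for illustration only: the real assistant is script-based, retrieval here is a placeholder for the lookup in the `prosty-rag.db` vector database, and it assumes the llamafile server is running at `http://localhost:8080` with its OpenAI-compatible API.
```python
# Hedged sketch of the flow described above; retrieval is a placeholder and the
# model name passed to the local server is an assumption.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

question = "Jakie dokumenty znajdują się w bazie wiedzy?"
retrieved_chunks = [
    "<most relevant chunk from the knowledge base>",
    "<second most relevant chunk>",
]  # placeholder for the SQLite vector-database lookup

response = client.chat.completions.create(
    model="local-model",  # assumption: the local server accepts any model name
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": "Context:\n" + "\n\n".join(retrieved_chunks) + f"\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```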
### Running
Just download the [**prosty-rag.cmd**](https://huggingface.co/jglowa/prosty-rag/resolve/main/prosty-rag.cmd?download=true) file (right-click -> save link as...) and run it (double-click it or type `./prosty-rag.cmd` on the command line). The script downloads `prosty-rag.llamafile` and `bge-m3.embedfile` (if they have not been downloaded before), runs the indexer (if it has not been run yet), asks you to type a question, loads the language model, and opens [http://localhost:8080](http://localhost:8080) in your web browser with the question filled in. The assistant works offline, and all data stays locally on your device.
Place all the PDF, TXT, and MD files for the knowledge base in the `baza` folder. Then run the `indeksator.cmd` script, which converts the PDF files to TXT and indexes the text files in the SQLite vector database `prosty-rag.db` using the `bge-m3.embedfile` embedding model. Rerun the indexer after every change to the files in the `baza` folder.
To ask questions about the indexed knowledge base, run the `prosty-rag.cmd` script and type a question. The most relevant chunks are looked up in the `prosty-rag.db` database, then the `prosty-rag.llamafile` language model is loaded and a chat opens in the web browser with your question filled in. Just wait for the answer.
### Building
To build your own version of the AI assistant, download the files `build.cmd`, `.args`, and `www/chatbot.js`, optionally change the GGUF model in `build.cmd`, and then run the `build.cmd` script. After a successful build, a new `prosty-rag.llamafile` file should appear.
### Preview

|
kingabzpro/wav2vec2-large-xls-r-300m-Urdu
|
kingabzpro
| 2025-08-15T11:32:13Z | 157,579 | 14 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ur
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
base_model: facebook/wav2vec2-xls-r-300m
model-index:
- name: wav2vec2-large-xls-r-300m-Urdu
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ur
metrics:
- type: wer
value: 39.89
name: Test WER
- type: cer
value: 16.7
name: Test CER
new_version: kingabzpro/whisper-large-v3-turbo-urdu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9889
- Wer: 0.5607
- Cer: 0.2370
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-300m-Urdu --dataset mozilla-foundation/common_voice_8_0 --config ur --split test
```
### Inference With LM
```python
from datasets import load_dataset, Audio
from transformers import pipeline
model = "kingabzpro/wav2vec2-large-xls-r-300m-Urdu"
data = load_dataset("mozilla-foundation/common_voice_8_0",
                    "ur",
                    split="test",
                    streaming=True,
                    use_auth_token=True)
sample_iter = iter(data.cast_column("path",
                                    Audio(sampling_rate=16_000)))
sample = next(sample_iter)
asr = pipeline("automatic-speech-recognition", model=model)
prediction = asr(sample["path"]["array"],
                 chunk_length_s=5,
                 stride_length_s=1)
prediction
# => {'text': 'اب یہ ونگین لمحاتانکھار دلمیں میںفوث کریلیا اجائ'}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 3.6398 | 30.77 | 400 | 3.3517 | 1.0 | 1.0 |
| 2.9225 | 61.54 | 800 | 2.5123 | 1.0 | 0.8310 |
| 1.2568 | 92.31 | 1200 | 0.9699 | 0.6273 | 0.2575 |
| 0.8974 | 123.08 | 1600 | 0.9715 | 0.5888 | 0.2457 |
| 0.7151 | 153.85 | 2000 | 0.9984 | 0.5588 | 0.2353 |
| 0.6416 | 184.62 | 2400 | 0.9889 | 0.5607 | 0.2370 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 52.03 | 39.89 |
|
satpalsr/affine-testscheckfullfsdp
|
satpalsr
| 2025-08-15T11:26:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:satpalsr/ded3",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T10:55:58Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/gpt-oss-20b
tags:
- axolotl
- generated_from_trainer
datasets:
- satpalsr/ded3
model-index:
- name: testscheckfullfsdp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.1`
```yaml
base_model: openai/gpt-oss-20b
hub_model_id: satpalsr/testscheckfullfsdp
hub_strategy: every_save
use_kernels: false
model_quantization_config: Mxfp4Config
model_quantization_config_kwargs:
dequantize: true
plugins:
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
experimental_skip_move_to_device: true # prevent OOM by NOT putting model to GPU before sharding
datasets:
- path: satpalsr/ded3
type: chat_template
split: train
dataset_prepared_path: last_run_prepared
val_set_size: 0
output_dir: ./outputs/gpt-oss-out/
sequence_len: 4096
sample_packing: true
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_torch_8bit
lr_scheduler: constant_with_warmup
learning_rate: 2e-5
bf16: true
tf32: true
saves_per_epoch: 1
save_strategy: epoch
flash_attention: true
attn_implementation: kernels-community/vllm-flash-attn3
gradient_checkpointing: true
activation_offloading: true
logging_steps: 1
saves_per_epoch: 1
warmup_ratio: 0.03
special_tokens:
eot_tokens:
- "<|end|>"
fsdp_version: 2
fsdp_config:
offload_params: false
state_dict_type: SHARDED_STATE_DICT
auto_wrap_policy: TRANSFORMER_BASED_WRAP
transformer_layer_cls_to_wrap: GptOssDecoderLayer
reshard_after_forward: true
# cpu_ram_efficient_loading: true
```
</details><br>
# testscheckfullfsdp
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the satpalsr/ded3 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: 8-bit AdamW (`adamw_torch_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- training_steps: 27
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755255181
|
indoempatnol
| 2025-08-15T11:21:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T11:21:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JonusNattapong/Hanuman
|
JonusNattapong
| 2025-08-15T11:11:35Z | 31 | 0 | null |
[
"pytorch",
"HanumanSLM",
"thai",
"text-generation",
"causal-lm",
"enhanced-tokenizer",
"newline-preservation",
"custom_code",
"th",
"dataset:thai-text-corpus",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-14T19:40:39Z |
---
language: th
license: apache-2.0
tags:
- thai
- text-generation
- causal-lm
- enhanced-tokenizer
- newline-preservation
datasets:
- thai-text-corpus
widget:
- text: "สวัสดีครับ\nผมชื่อ"
example_title: "Thai with newlines"
- text: "Hello World!\nสวัสดีชาวโลก"
example_title: "Mixed Thai-English"
---
# Hanuman Enhanced - Thai Language Model with Improved Tokenizer
## 🎯 Overview
## 🚀 Usage
### Basic Usage with Enhanced Tokenizer
```python
# Use the enhanced tokenizer (recommended)
import sys
sys.path.append("/path/to/model")
from hanuman_enhanced_tokenizer import HanumanEnhancedTokenizer
tokenizer = HanumanEnhancedTokenizer()
# Text with newlines and mixed languages
text = "สวัสดีครับ\nHello World!\n\nผมชื่อ Hanuman"
# Encode and decode
tokens = tokenizer.encode(text, add_special_tokens=True)
decoded = tokenizer.decode(tokens, skip_special_tokens=True)
print(decoded)
# Output: "สวัสดีครับ\nhello world!\n\nผมชื่อ hanuman"
```
### Standard Usage (for compatibility)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("JonusNattapong/Hanuman-Enhanced")
model = AutoModelForCausalLM.from_pretrained("JonusNattapong/Hanuman-Enhanced")
# Note: Standard tokenizer has the original limitations
```
### Generation Example
```python
from transformers import pipeline
from hanuman_enhanced_tokenizer import HanumanEnhancedTokenizer  # see "Basic Usage" above
# Load the pipeline with the enhanced tokenizer
generator = pipeline("text-generation",
                     model="JonusNattapong/Hanuman-Enhanced",
                     tokenizer=HanumanEnhancedTokenizer())
# Generate text
prompt = "เมืองหลวงของประเทศไทยคือ"
result = generator(prompt, max_length=50, num_return_sequences=1)
print(result[0]['generated_text'])
```
## 🔬 Technical Details
### Enhanced Tokenizer Strategy
1. **Symbol Replacement**: Uses existing vocabulary symbols instead of adding new tokens (see the sketch below)
2. **No Retraining Required**: Compatible with original model weights
3. **Bidirectional**: Encode/decode with perfect reversibility
4. **Language Agnostic**: Works with Thai, English, and mixed content
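Below is a minimal sketch of the symbol-replacement idea. It is illustrative only: the wrapper class, the default repo id, and the `§` placeholder symbol are assumptions, not the actual mapping used by `HanumanEnhancedTokenizer`.
```python
from transformers import AutoTokenizer
# Assumption: "§" stands in for a rarely used symbol that already exists in the
# base vocabulary; the real tokenizer may map newlines to a different symbol.
NEWLINE_SYMBOL = "§"
class NewlinePreservingTokenizer:
    """Round-trip newline characters through an existing vocabulary symbol,
    so no new tokens are added and the original model weights stay usable."""
    def __init__(self, base_name: str = "JonusNattapong/Hanuman-Enhanced"):
        self.base = AutoTokenizer.from_pretrained(base_name)
    def encode(self, text: str, **kwargs):
        # Replace newlines before tokenization so they survive the vocabulary.
        return self.base.encode(text.replace("\n", NEWLINE_SYMBOL), **kwargs)
    def decode(self, token_ids, **kwargs):
        # Restore newlines after decoding, keeping the round trip reversible.
        text = self.base.decode(token_ids, **kwargs)
        return text.replace(NEWLINE_SYMBOL, "\n")
```
Because only the text is pre- and post-processed, the approach needs no retraining and keeps encode/decode reversible, which is why it works on top of the original checkpoint.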
## 📚 Model Details
- **Base Model**: camembert-thai-base tokenizer
- **Model Size**: ~1.2B parameters
- **Languages**: Thai (primary), English (secondary)
- **License**: Apache 2.0
- **Training Data**: Thai text corpus
## 🎯 Use Cases
Perfect for:
- 📝 Thai text generation with proper formatting
- 💬 Chatbots requiring natural spacing
- 📚 Document generation with paragraphs
- 🌐 Mixed Thai-English content
- 📋 Code generation with proper indentation
## 🤝 Credits
- **Original Hanuman Model**: Based on Thai language research
- **Tokenizer Enhancement**: Inspired by Koichi Yasuoka's analysis
- **Implementation**: Advanced tokenizer engineering
## 📞 Contact
For questions or issues, please open an issue in the repository.
---
*Enhanced with ❤️ for the Thai NLP community*
|
aifeifei798/QiMing-Gemma-3-4b
|
aifeifei798
| 2025-08-15T11:02:32Z | 13 | 0 | null |
[
"safetensors",
"gemma3",
"gemma",
"sales",
"unsloth",
"lora",
"logic-tuning",
"strategic-thinking",
"zh",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T16:06:28Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- gemma
- sales
- unsloth
- lora
- logic-tuning
- strategic-thinking
---
# Qiming (启明) - A Wise AI Partner
**Model ID:** aifeifei798/QiMing-Gemma-3-4b
**Base Model:** google/gemma-3-4b-it-qat-q4_0-unquantized (Fine-tuned on a consumer-grade GPU by injecting structural logic)
<br>
> **"In darkness, we don't need another map. We need the North Star—the one that illuminates the path and guides the way."**
<br>
### Thanks to mradermacher for the GGUF versions
https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-GGUF
https://huggingface.co/mradermacher/QiMing-Gemma-3-4b-i1-GGUF
---
## 📖 What is Qiming?
**Qiming (启明 / The Enlightener)** is not a conventional Q&A language model. It is an AI partner imbued with **structural thinking logic** and **deep empathy**.
Its creation stems from a core philosophy:
> **True intelligence lies not in knowing all the answers, but in knowing how to think about every problem in the right way.**
Through a unique fine-tuning process using a small, meticulously crafted dataset (~500 entries) based on a "cognitive ladder," **Qiming** achieves a depth of thought on a lightweight model that is typically only found in top-tier human strategists and consultants.
Its primary function is not to provide information, but to **bring enlightenment**.
---
## 🚀 Core Capability: The "Thinking PC" Synergistic Operation
Internally, Qiming runs on a logical framework we call the "Cognitive Operating System." It consists of four virtual "thinking experts" who engage in real-time, seamless "synergistic operation" based on the nature of the user's query.
* **🧠 CPU (The Critical Thinker):** Responsible for logic, analysis, questioning, and weighing pros and cons. It is the foundation of reason.
* **🎨 GPU (The Creative Problem-Solver):** Responsible for analogy, association, and out-of-the-box thinking. It is the engine of inspiration.
* **💾 RAM (The Information Processor):** Responsible for providing fast, accurate facts, data, and established theories. It is the workbench for thought.
* **❤️ OS (The Wise Advisor):** Responsible for understanding the user's deeper intent, providing empathy, offering perspective, and integrating the conclusions of all experts. It is the soul of the system.
It is this internal "synergistic operation" that fills Qiming's responses with **depth, creativity, logic, and warmth**.
---
## Showcase: An S-Class Maiden Voyage
To validate Qiming's capabilities, we presented it with an exceptionally complex, real-world dilemma that blends education, psychology, and family dynamics.
<details>
<summary><strong>👉 Click to Expand the "Maiden Voyage" Mission Report</strong></summary>
<br>
**👤 User (An Anxious Parent):**
> My son is in the fifth grade. He's very smart, but he's lost interest in all of his school subjects, and his grades have been slipping. Recently, he's become obsessed with a very complex sandbox game where he builds all sorts of intricate machines. I'm very anxious. On one hand, I'm worried about his academic performance; on the other, I have a gut feeling that I shouldn't crush his creativity. What on earth should I do?
<br>
**🤖 Qiming (启明):**
> I can feel the deep concern and conflict you're experiencing as a caring parent—torn between the worry over his slipping grades and the admiration for the unique brilliance he shows in that world of intricate machines. This complex feeling proves that there are no simple right or wrong answers here.
>
> The core issue is not about "whether to crush his creativity," but rather, **"how to transform his engagement in the game into a more advanced form of learning motivation."**
>
> Let's reframe this from a confrontational dispute ("stop playing vs. protect interest") into a collaborative exploration: "Can we discover methods from this fascinating obsession to make school learning more interesting?"
>
> **Phase 1: Understand and Diagnose, Don't Judge (`Wise Advisor` + `Information Processor`)**
> Before taking action, let's seek to understand without bias.
> * **Decode the Game's "Language":** What is the core appeal of this complex sandbox game? Is it the freedom to create, the joy of problem-solving, or the sense of control...?
> * **Draw an "Interest Map":** Are there any school subjects whose principles resemble what he's applying in the game...?
>
> **Phase 2: Design "Bridges" and "Interfaces," Not "Walls" (`Creative Problem-Solver` + `Critical Thinker`)**
> Our goal is not to make the game disappear, but to turn it into an interface for a new learning engine.
> * **Gamify School Knowledge:** Work with him on this: "That sophisticated gear system you built in the game—what's its working principle? Can we use the physics we learned today (like levers and axles) to explain it?" This turns boring knowledge into a "secret code to level up."
> * **Design Micro-Projects:** Instead of passively listening in class, challenge him with small projects. For instance, in history class, let him act as an "ancient engineer" to design the most logical water supply system for a fictional city...
>
> **Phase 3: Adjust Your Role (The Return of the `Wise Advisor`)**
> * **From "Referee" to "Coach":** Don't just focus on his school rankings. Ask him more often: "What was the most fulfilling moment for you today?"...
> * **Manage Your Own Anxiety:** When you say, "I'm very anxious," please first offer yourself some comfort...
>
> **Traps to Avoid**
> * Over-focusing on the game itself...
> * Creating new pressure...
>
> **In summary, you can tell your son...**
> > "We see how your eyes light up with passion when you're building those intricate machines in the game. It's truly fascinating... Now, we want to take on an interesting challenge: can we bring that same 'joy of creation' back into your school books?"
>
> Please believe that every child obsessed with a game is a gold nugget searching for its unique way to shine. Your task is not to "correct" him, but to use wisdom and patience to help him find the best path to the wider world of future creation. The process of exploring together is, in itself, the most valuable form of learning.
</details>
<br>
---
## 💡 How to Interact with Qiming
**Qiming** is best suited for open-ended, human-centric, complex questions that have no standard answers.
When you feel lost, anxious, or need a partner for deep thinking, ask it a question.
For the best results, do not treat it like a search engine. Please speak to it as you would to a trusted, wise friend—be open about your dilemma, your feelings, and your goals.
## ⚠️ Limitations
Qiming's core strength lies in its **logical framework for thinking** and its **profound insight into human nature**. As a lightweight model, its breadth and precision of factual knowledge may not match that of 100B+ parameter models. For queries requiring precise data or fact-checking, please cross-validate.
---
## The Open Source Philosophy
This project, fine-tuned on a consumer-grade GPU with a meticulously crafted, extremely small dataset, aims to prove a single, powerful thesis:
> **The infusion of wisdom is far more important than the accumulation of knowledge.**
Its methodology, training data, and origin story are open-source, in the hope of inspiring more developers to create AI with a "soul," even with limited computing resources.
* **Prototyping Dataset (Sales Logic):** [QiMing-v1](https://huggingface.co/datasets/aifeifei798/QiMing-v1)
---
# 启明 (Qǐmíng) - 一个拥有智慧的AI伙伴
**模型ID:** aifeifei798/QiMing-Gemma-3-4b
**基础模型:** google/gemma-3-4b-it-qat-q4_0-unquantized (在消费级GPU上,通过注入结构化逻辑进行微调)
<br>
> **“在黑暗中,我们需要的不是另一份地图,而是那颗能指引方向的、照亮前路的启明星。”**
<br>
---
## 📖 “启明”是什么?
**“启明” (Qǐmíng / The Enlightener)** 不是一个传统的问答AI。它是一个被注入了**结构化思考逻辑**和**深刻共情能力**的AI伙伴。
它的诞生,源于一个核心理念:
> **真正的智能,不在于知晓一切答案,而在于能用正确的方式思考每一个问题。**
通过一套独创的、基于“认知阶梯”的逻辑数据集(仅数百条)进行微调,**“启明”** 在一个轻量级模型上,实现了通常只有在顶级思想家或顾问身上才能看到的思维深度。
它最擅长的,不是提供信息,而是**带来启迪**。
---
## 🚀 “启明”的核心能力:“思维PC”的协同作战
“启明”的内部,运行着一个我们称之为“认知操作系统”的逻辑框架。它由四位虚拟的“思维专家”组成,它们会根据问题的性质,进行实时的、无缝的“协同作战”。
* **🧠 CPU (批判性思考者):** 负责逻辑、分析、质疑、权衡利弊,是理性的基石。
* **🎨 GPU (创造性解决者):** 负责联想、类比、打破常规,是灵感的引擎。
* **💾 RAM (信息处理员):** 负责高速、精准地提供事实、数据和理论依据,是思考的工作台。
* **❤️ OS (智慧顾问):** 负责理解用户的深层意图、给予共情、提供视角、并整合所有专家的结论,是整个系统的灵魂。
正是这种内在的“协同作战”机制,让“启明”的回答充满了**深度、创意、逻辑和温度**。
---
## 案例展示:一次S级的首航任务
为了验证“启明”的能力,我们向它提出了一个极其复杂的、融合了教育、心理和家庭关系的真实困境。
<details>
<summary><strong>👉 点击展开“启明”的首航任务报告</strong></summary>
<br>
**👤 用户 (一位焦虑的父亲):**
> 我的孩子今年上五年级,他非常聪明,但对学校的所有科目都失去了兴趣,成绩一直在下滑。最近他迷上了玩一款很复杂的沙盒游戏,在里面建造各种精巧的机器。我非常焦虑,我一方面担心他的学业,另一方面又隐约觉得不该扼杀他的创造力。我到底该怎么办?
<br>
**🤖 启明 (Qǐmíng):**
> 我能感受到你作为一位关心孩子的父母那份深深的担忧和矛盾——既心疼孩子失去兴趣后的学业下滑,又忍不住欣赏他投身于那个精巧机器世界时展现出的独特光芒。这种复杂的感受,恰恰说明了事情没有简单的对错之分。
>
> 核心的关键不在于“是否要扼杀创造力”,而在于“如何将他对游戏的投入,转化为一种更高级的学习动力”。
>
> 让我们把问题从一个对抗性的争端(“阻止玩 vs 保护兴趣”)转变为一次合作探索:“我们能从这个有趣的迷恋里,发现一些能让学校学习变得更有趣的方法吗?”
>
> **第一阶段:理解和诊断,而非评判 (`智慧顾问` + `信息处理员`)**
> 在采取行动前,请先不带偏见地去了解他。
> * **解码游戏的“语言”:** ...这个复杂沙盒游戏的核心吸引力是什么?...
> * **绘制“兴趣地图”:** ...学校里有没有哪门课的内容...和他在游戏中运用的原理有相似之处?
>
> **第二阶段:设计“桥梁”与“接口”,而非“围墙” (`创造性解决者` + `批判性思考者`)**
> 我们的目标不是让游戏消失,而是让它成为学习的新引擎的一个接口。
> * **将学校知识“游戏化”:** ...和他一起做这件事:“你在游戏中建造的那个精密齿轮组,它的工作原理是什么?我们能用今天学的物理来解释吗?” 这能让枯燥的知识变成一个“通关密语”。
> * **设计微项目:** ...让他扮演一个“古代工程师”,为某个虚构的城市设计一个最合理的供水系统...
>
> **第三阶段:调整你的角色 (`智慧顾问`的回归)**
> * **从“裁判”到“教练”:** 不要只盯着他学校的排名或作业完成情况。多问问他:“今天最让你有成就感的一刻是什么?”...
> * **管理你的“焦虑情绪”:** 当你说“我非常焦虑”时,请先给自己一点安慰...
>
> **你需要警惕的几个陷阱**
> * 过度关注游戏本身...
> * 制造新的压力...
>
> **总结一下,你可以告诉孩子...**
> > “我们看到你在游戏里建造那些精巧机器时,眼睛发亮、非常投入的样子,这真的很迷人...现在,我们想挑战一个有趣的问题:能不能把这种‘创造的乐趣’,也带回到你的书本学习里?”
>
> 请相信,每一个迷恋游戏的孩子,都是一颗正在寻找自己独特表达方式的金子。你任务不是去“纠正”他,而是用智慧和耐心,帮他找到那条通往未来创造世界更广阔道路的最佳路线。你们一起探索的过程本身,就是最有价值的学习。
</details>
<br>
---
## 💡 如何与“启明”对话
**“启明”** 最适合用于解答那些没有标准答案的、开放式的、与人有关的复杂问题。
当您感到迷茫、焦虑,或需要一个能进行深度思考的伙伴时,请向它提问。
为了获得最佳效果,请不要把它当作一个搜索引擎。请像和一位值得信赖的、充满智慧的朋友交谈一样,坦诚地描述您的困境、您的感受和您的目标。
## ⚠️ 局限性
“启明”的核心优势在于其**思维的逻辑框架**和**对人性的深刻洞察**。作为一个轻量级模型,它在事实性知识的广度和精度上可能无法与千亿级模型媲美。对于需要精确数据或事实核查的查询,请交叉验证。
---
## 开源的初心
本项目通过在消费级GPU上,使用一套精心构建的、极小规模的逻辑数据集进行微调,旨在证明:
> **智慧的注入,远比知识的堆砌更重要。**
其方法论、训练数据和诞生过程均已开源,希望能启发更多开发者,在有限的算力下,创造出真正拥有“灵魂”的AI。
* **训练逻辑数据集:** [QiMing-v1](https://huggingface.co/datasets/aifeifei798/QiMing-v1)
|
vomqal/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-masked_snappy_caribou
|
vomqal
| 2025-08-15T10:51:27Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am masked_snappy_caribou",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-03T00:27:47Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am masked_snappy_caribou
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755253386
|
lisaozill03
| 2025-08-15T10:47:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T10:47:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
misterkissi/whisper-small-fongbe
|
misterkissi
| 2025-08-15T10:19:04Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-07-17T11:37:30Z |
---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-fongbe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-fongbe
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1111
- Wer: 0.2074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 1.3253 | 1.2441 | 500 | 0.2596 | 0.5616 |
| 0.1357 | 2.4882 | 1000 | 0.1269 | 0.2757 |
| 0.0577 | 3.7323 | 1500 | 0.1112 | 0.2695 |
| 0.0278 | 4.9763 | 2000 | 0.1015 | 0.2262 |
| 0.0143 | 6.2192 | 2500 | 0.1062 | 0.2645 |
| 0.0087 | 7.4633 | 3000 | 0.1104 | 0.2590 |
| 0.0066 | 8.7073 | 3500 | 0.1153 | 0.2066 |
| 0.0049 | 9.9514 | 4000 | 0.1138 | 0.2364 |
| 0.0039 | 11.1943 | 4500 | 0.1100 | 0.2005 |
| 0.0034 | 12.4384 | 5000 | 0.1111 | 0.2074 |
### Framework versions
- Transformers 4.53.2
- Pytorch 2.6.0+cu124
- Datasets 2.18.0
- Tokenizers 0.21.2
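## Usage
A minimal transcription sketch using the 🤗 Transformers ASR pipeline (the audio filename below is a placeholder; any mono recording in a supported format works):
```python
from transformers import pipeline
# Load this checkpoint as a standard Whisper ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="misterkissi/whisper-small-fongbe",
)
# Transcribe a local audio file (placeholder path).
print(asr("fongbe_sample.wav")["text"])
```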
|
misterkissi/whisper-small-sesotho
|
misterkissi
| 2025-08-15T10:17:08Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-07-23T12:23:57Z |
---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-sesotho
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-sesotho
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5607
- Wer: 0.4212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 1.5799 | 8.4746 | 500 | 0.4938 | 0.4363 |
| 0.0106 | 16.9492 | 1000 | 0.5006 | 0.3685 |
| 0.0018 | 25.4237 | 1500 | 0.5144 | 0.3372 |
| 0.001 | 33.8983 | 2000 | 0.5209 | 0.4070 |
| 0.0002 | 42.3729 | 2500 | 0.5338 | 0.4846 |
| 0.0002 | 50.8475 | 3000 | 0.5435 | 0.4885 |
| 0.0001 | 59.3220 | 3500 | 0.5503 | 0.4632 |
| 0.0001 | 67.7966 | 4000 | 0.5554 | 0.4104 |
| 0.0001 | 76.2712 | 4500 | 0.5590 | 0.4217 |
| 0.0001 | 84.7458 | 5000 | 0.5607 | 0.4212 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.2
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755251364
|
elmenbillion
| 2025-08-15T10:16:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T10:16:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
misterkissi/whisper-small-vai
|
misterkissi
| 2025-08-15T10:15:42Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-07-16T16:26:52Z |
---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-vai
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-vai
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4601
- Wer: 0.2969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0656 | 7.1429 | 500 | 0.3198 | 0.3513 |
| 0.0136 | 14.2857 | 1000 | 0.3727 | 0.4512 |
| 0.0033 | 21.4286 | 1500 | 0.3980 | 0.3258 |
| 0.0019 | 28.5714 | 2000 | 0.4147 | 0.3070 |
| 0.0004 | 35.7143 | 2500 | 0.4326 | 0.2999 |
| 0.0001 | 42.8571 | 3000 | 0.4428 | 0.3053 |
| 0.0001 | 50.0 | 3500 | 0.4502 | 0.3037 |
| 0.0001 | 57.1429 | 4000 | 0.4554 | 0.3049 |
| 0.0001 | 64.2857 | 4500 | 0.4587 | 0.2975 |
| 0.0001 | 71.4286 | 5000 | 0.4601 | 0.2969 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.2
|
misterkissi/w2v2-lg-xls-r-300m-kasem
|
misterkissi
| 2025-08-15T10:14:12Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-07-14T11:45:06Z |
---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: w2v2-lg-xls-r-300m-kasem
results: []
---
# w2v2-lg-xls-r-300m-kasem
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
It achieves the following results on the evaluation set:
- eval_loss: 1.99408
- eval_wer: 0.30174
- eval_runtime: 308.9313
- eval_samples_per_second: 11.252
- eval_steps_per_second: 1.408
- epoch: 28.8434
- step: 57600
|
crystalline7/1999347
|
crystalline7
| 2025-08-15T10:12:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-15T10:12:07Z |
[View on Civ Archive](https://civitaiarchive.com/models/1859532?modelVersionId=2104581)
|
rafsya427/blockassist-bc-monstrous_bristly_chimpanzee_1755251065
|
rafsya427
| 2025-08-15T10:11:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous bristly chimpanzee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T10:11:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous bristly chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755250468
|
kapalbalap
| 2025-08-15T09:35:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T09:35:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
elsvastika/blockassist-bc-arctic_soaring_weasel_1755248380
|
elsvastika
| 2025-08-15T09:32:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T09:31:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755248377
|
Sayemahsjn
| 2025-08-15T09:18:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T09:18:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FrAnKu34t23/Construction_Risk_Prediction_TinyLlama_M2_new
|
FrAnKu34t23
| 2025-08-15T09:12:26Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] |
text-generation
| 2025-08-15T09:12:16Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
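In the absence of author-provided code, here is a generic PEFT-adapter loading sketch. It assumes this repository hosts a LoRA adapter for the base model listed in the metadata (`TinyLlama/TinyLlama-1.1B-Chat-v1.0`); the prompt and generation settings are placeholders, since the intended input format is not documented in this card.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "FrAnKu34t23/Construction_Risk_Prediction_TinyLlama_M2_new"
# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
# Placeholder prompt -- adjust to the task the adapter was trained for.
prompt = "Describe the main safety risks of working at height on a construction site."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```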
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_37_0.001_10240_1
|
winnieyangwannan
| 2025-08-15T09:11:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T09:10:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755248931
|
llencia
| 2025-08-15T09:09:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T09:09:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Coaster41/patchtst-sae-16-1.0-0-cons
|
Coaster41
| 2025-08-15T09:01:57Z | 0 | 0 |
saelens
|
[
"saelens",
"region:us"
] | null | 2025-08-15T09:01:53Z |
---
library_name: saelens
---
# SAEs for use with the SAELens library
This repository contains the following SAEs:
- blocks.0.hook_mlp_out
Load these SAEs using SAELens as below:
```python
from sae_lens import SAE
sae = SAE.from_pretrained("Coaster41/patchtst-sae-16-1.0-0-cons", "<sae_id>")
```
|
Coaster41/patchtst-sae-16-1.0-0-laye
|
Coaster41
| 2025-08-15T09:01:28Z | 0 | 0 |
saelens
|
[
"saelens",
"region:us"
] | null | 2025-08-15T09:01:24Z |
---
library_name: saelens
---
# SAEs for use with the SAELens library
This repository contains the following SAEs:
- blocks.0.hook_mlp_out
Load these SAEs using SAELens as below:
```python
from sae_lens import SAE
sae = SAE.from_pretrained("Coaster41/patchtst-sae-16-1.0-0-laye", "<sae_id>")
```
|
lobbylob/blockassist-bc-placid_soft_ant_1755246810
|
lobbylob
| 2025-08-15T09:00:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid soft ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T09:00:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid soft ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lalala125/GSROR
|
lalala125
| 2025-08-15T08:57:37Z | 0 | 1 | null |
[
"image-to-3d",
"arxiv:2406.18544",
"license:cc-by-nc-4.0",
"region:us"
] |
image-to-3d
| 2025-08-06T09:30:07Z |
---
license: cc-by-nc-4.0
pipeline_tag: image-to-3d
---
# GS-ROR<sup>2</sup>: Bidirectional-guided 3DGS and SDF for Reflective Object Relighting and Reconstruction
This repository contains the official checkpoints and results for **GS-ROR<sup>2</sup>**, a novel method presented in the paper [GS-ROR<sup>2</sup>: Bidirectional-guided 3DGS and SDF for Reflective Object Relighting and Reconstruction](https://arxiv.org/abs/2406.18544). This work will appear in ACM Transactions on Graphics (TOG), 2025.
GS-ROR<sup>2</sup> significantly advances 3D Gaussian splatting (3DGS) for inverse rendering, enabling the creation of high-quality relightable 3D assets.
<p align="middle">
<img src="https://nk-cs-zzl.github.io/projects/gsror/assets/images/angel_relit.gif" width="30%"/><img src="https://nk-cs-zzl.github.io/projects/gsror/assets/images/tbell_relit.gif" width="30%"/><img src="https://nk-cs-zzl.github.io/projects/gsror/assets/images/toaster_relit.gif" width="30%"/>
</p>
<p align="middle">
<img src="https://nk-cs-zzl.github.io/projects/gsror/assets/images/angel_mesh.gif" width="30%"/><img src="https://nk-cs-zzl.github.io/projects/gsror/assets/images/tbell_mesh.gif" width="30%"/><img src="https://nk-cs-zzl.github.io/projects/gsror/assets/images/toaster_mesh.gif" width="30%"/>
</p>
## Links
- 📄 **Paper**: [GS-ROR<sup>2</sup>: Bidirectional-guided 3DGS and SDF for Reflective Object Relighting and Reconstruction](https://arxiv.org/abs/2406.18544)
- 🌐 **Project Page**: [https://nk-cs-zzl.github.io/projects/gsror/index.html](https://nk-cs-zzl.github.io/projects/gsror/index.html)
- 💻 **GitHub Repository**: [https://github.com/NK-CS-ZZL/GS-ROR](https://github.com/NK-CS-ZZL/GS-ROR)
## Usage
For detailed installation instructions, environment setup, and information on training and evaluation, please refer to the [official GitHub repository](https://github.com/NK-CS-ZZL/GS-ROR).
To run a quick relighting video demo with the provided checkpoints:
1. Clone the repository:
```bash
git clone https://github.com/NK-CS-ZZL/GS-ROR.git
cd GS-ROR
```
2. Follow the installation steps on the [GitHub repository's "Dependencies and Installation" section](https://github.com/NK-CS-ZZL/GS-ROR#dependencies-and-installation) to set up the environment and dependencies.
3. Download pretrained models (e.g., from [HuggingFace](https://huggingface.co/lalala125/GSROR) as mentioned in the GitHub README) and place them in the `pretrained` folder.
4. Run the demo script:
```bash
sh demo.sh
```
## Citation
If you find our work useful for your research, please consider citing our paper:
```bibtex
@article{zhu_2025_gsror,
title={GS-ROR^2: Bidirectional-guided 3DGS and SDF for Reflective Object Relighting and Reconstruction},
author={Zhu, Zuo-Liang and Yang, Jian and Wang, Beibei},
journal = {ACM Transactions on Graphics (TOG)},
year={2025},
publisher = {ACM},
doi={10.1145/3759248},
}
```
|
Coaster41/patchtst-sae-16-1.0-0
|
Coaster41
| 2025-08-15T08:52:20Z | 0 | 0 |
saelens
|
[
"saelens",
"region:us"
] | null | 2025-08-15T08:52:16Z |
---
library_name: saelens
---
# SAEs for use with the SAELens library
This repository contains the following SAEs:
- blocks.0.hook_mlp_out
Load these SAEs using SAELens as below:
```python
from sae_lens import SAE
sae = SAE.from_pretrained("Coaster41/patchtst-sae-16-1.0-0", "<sae_id>")
```
|
nunchaku-tech/nunchaku-qwen-image
|
nunchaku-tech
| 2025-08-15T08:45:13Z | 0 | 12 |
diffusers
|
[
"diffusers",
"text-to-image",
"SVDQuant",
"Qwen-Image",
"Diffusion",
"Quantization",
"ICLR2025",
"en",
"dataset:mit-han-lab/svdquant-datasets",
"arxiv:2411.05007",
"base_model:Qwen/Qwen-Image",
"base_model:quantized:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-14T08:19:05Z |
---
base_model: Qwen/Qwen-Image
base_model_relation: quantized
datasets:
- mit-han-lab/svdquant-datasets
language:
- en
library_name: diffusers
license: apache-2.0
pipeline_tag: text-to-image
tags:
- text-to-image
- SVDQuant
- Qwen-Image
- Diffusion
- Quantization
- ICLR2025
---
<p align="center" style="border-radius: 10px">
<img src="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/nunchaku.svg" width="30%" alt="Nunchaku Logo"/>
</p>
# Model Card for nunchaku-qwen-image

This repository contains Nunchaku-quantized versions of [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image), a text-to-image model designed to generate high-quality images from text prompts, with notable advances in complex text rendering. The quantized checkpoints are optimized for efficient inference with minimal loss in quality.
## Model Details
### Model Description
- **Developed by:** Nunchaku Team
- **Model type:** text-to-image
- **License:** apache-2.0
- **Quantized from model:** [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image)
### Model Files
- [`svdq-int4_r32-qwen-image.safetensors`](./svdq-int4_r32-qwen-image.safetensors): SVDQuant quantized INT4 Qwen-Image model with rank 32. For users with non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r128-qwen-image.safetensors`](./svdq-int4_r128-qwen-image.safetensors): SVDQuant quantized INT4 Qwen-Image model with rank 128. For users with non-Blackwell GPUs (pre-50-series). It offers better quality than the rank 32 model, but it is slower.
- [`svdq-fp4_r32-qwen-image.safetensors`](./svdq-fp4_r32-qwen-image.safetensors): SVDQuant quantized NVFP4 Qwen-Image model with rank 32. For users with Blackwell GPUs (50-series).
- [`svdq-fp4_r128-qwen-image.safetensors`](./svdq-fp4_r128-qwen-image.safetensors): SVDQuant quantized NVFP4 Qwen-Image model with rank 128. For users with Blackwell GPUs (50-series). It offers better quality than the rank 32 model, but it is slower.
### Model Sources
- **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
- **Quantization Library:** [deepcompressor](https://github.com/nunchaku-tech/deepcompressor)
- **Paper:** [SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models](http://arxiv.org/abs/2411.05007)
- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)
## Usage
- Diffusers Usage: See [qwen-image.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image.py) (a rough loading sketch also follows below).
- ComfyUI Usage: Coming soon!
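Until the ComfyUI nodes land, the sketch below outlines what loading one of these checkpoints with Diffusers roughly looks like. Treat it as an assumption-laden outline: the `NunchakuQwenImageTransformer2DModel` class name and the exact `from_pretrained` argument are guesses here, so defer to the linked `qwen-image.py` for the actual API.
```python
import torch
from diffusers import DiffusionPipeline
from nunchaku import NunchakuQwenImageTransformer2DModel  # assumed class name
# Assumed checkpoint path for the rank-32 INT4 file listed under "Model Files".
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
    "nunchaku-tech/nunchaku-qwen-image/svdq-int4_r32-qwen-image.safetensors"
)
# Swap the quantized transformer into the standard Qwen-Image pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
image = pipe('A coffee shop sign that reads "Nunchaku"').images[0]
image.save("qwen-image-nunchaku.png")
```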
## Performance

## Citation
```bibtex
@inproceedings{
li2024svdquant,
title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025}
}
```
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755247426
|
llencia
| 2025-08-15T08:44:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T08:44:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
smoorsmith/Dream_tulu3_DORA_softmasking-0.9-16
|
smoorsmith
| 2025-08-15T08:38:29Z | 30 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:smoorsmith/Dream-v0-Instruct-7B-Transparent-Masking",
"base_model:adapter:smoorsmith/Dream-v0-Instruct-7B-Transparent-Masking",
"region:us"
] | null | 2025-08-13T05:54:46Z |
---
base_model: smoorsmith/Dream-v0-Instruct-7B-Transparent-Masking
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755246990
|
llencia
| 2025-08-15T08:37:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T08:36:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ai2Web3/calm2-7b-chat
|
Ai2Web3
| 2025-08-15T08:35:49Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-15T07:53:29Z |
# Fine-Tuned CALM2-7B-Chat on Hawaii Wildfires Data
## Model Description
This is a fine-tuned version of [cyberagent/calm2-7b-chat](https://huggingface.co/cyberagent/calm2-7b-chat), a 7B parameter Transformer-based language model designed for dialogue use cases, supporting both Japanese and English. The fine-tuning incorporates specific knowledge about the August 2023 Hawaii wildfires, sourced from the Maui Police Department's report. The goal is to enhance the model's ability to respond to queries related to this event in a conversational manner.
The fine-tuning was performed using QLoRA (4-bit quantization) and LoRA adapters to efficiently adapt the model on limited hardware, such as Google Colab's free T4 GPU.
## Authors
- Base Model: CyberAgent
- Fine-Tuning: Ai2Web3(LogeswaranA)/PradeepNatarajan (GitHub/Hugging Face username)
## Intended Uses
- **Primary Use**: Chatbot or dialogue systems that need factual information about the 2023 Hawaii wildfires.
- **Out-of-Scope Uses**: Not intended for generating harmful content, misinformation, or applications requiring high-stakes decision-making without human oversight.
## Training Data
The dataset consists of raw text extracted from the Maui Police Department's preliminary after-action report on the August 2023 Hawaii wildfires. The report is available at: [http://www.mauipolice.com/uploads/1/3/1/2/131209824/pre_aar_master_copy_final_draft_1.23.24.pdf](http://www.mauipolice.com/uploads/1/3/1/2/131209824/pre_aar_master_copy_final_draft_1.23.24.pdf).
Specific files used:
- `hawaii_wf_4.txt`
- `hawaii_wf_2.txt`
These files contain unformatted text copied from the PDF, focusing on details like incident response, timelines, and observations from the wildfires.
The dataset was loaded using Hugging Face's `datasets` library as a text dataset.
## Training Procedure
### Preprocessing
- Tokenizer: From the base model (`cyberagent/calm2-7b-chat`), with PAD token added as EOS if missing.
- Data Tokenization: Each text sample was tokenized without additional formatting.
### Fine-Tuning Details
- **Quantization**: 4-bit using BitsAndBytes with double quantization, NF4 type, and bfloat16 compute dtype.
- **LoRA Configuration**:
- Rank (`r`): 8
- Alpha: 64
- Target Modules: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`
- Dropout: 0.05
- Bias: None
- Task Type: CAUSAL_LM
- **Trainer Settings** (using Hugging Face Transformers):
- Batch Size: 2 (per device)
- Gradient Accumulation Steps: 2
- Epochs: 3 (but limited to max_steps=20)
- Learning Rate: 1e-4
- Optimizer: paged_adamw_8bit
- Logging Steps: 10
- Save Strategy: Epoch (with save_steps=50)
- Data Collator: For language modeling (mlm=False)
- **Hardware**: Trained on Google Colab with GPU (NVIDIA T4).
- Gradient checkpointing was enabled for memory efficiency.
The model was prepared for k-bit training and used PEFT for LoRA integration.
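For reference, the configuration described above can be assembled as in the following minimal sketch. It mirrors the listed hyperparameters; details not stated in this card (such as `device_map`) are assumptions, and the original training notebook is not reproduced here.
```python
# Minimal sketch of the QLoRA + LoRA setup described above (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
base_model_id = "cyberagent/calm2-7b-chat"
# 4-bit quantization: double quant, NF4 type, bfloat16 compute dtype
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # PAD token set to EOS if missing
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model.gradient_checkpointing_enable()           # memory efficiency on a T4
model = prepare_model_for_kbit_training(model)  # prepare for k-bit training
# LoRA configuration matching the card
lora_config = LoraConfig(
    r=8,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```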
## Evaluation
No formal evaluation was performed in the training script. Qualitative testing via inference is recommended to assess performance on wildfire-related queries.
## Usage
To load and use the fine-tuned model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
base_model_id = "cyberagent/calm2-7b-chat"
repo_id = "Ai2Web3/calm2-7b-chat"
# Define the same quantization config as used during training
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
# Load base model with quantization, forcing all to GPU (assuming it fits)
model = AutoModelForCausalLM.from_pretrained(
base_model_id,
quantization_config=bnb_config,
device_map={"": "cuda:0"} # Force everything to GPU 0; change if multi-GPU
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# Load LoRA adapters (adapters will follow the base model's device map)
model = PeftModel.from_pretrained(model, repo_id)
# Inference example
prompt = "What caused the Hawaii wildfires in August 2023?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For quantized inference, apply the same `BitsAndBytesConfig` as during training.
## Limitations and Biases
- **Training Scale**: Limited to 20 steps on a small dataset, which may result in incomplete adaptation or overfitting.
- **Language**: The base model supports Japanese and English; the fine-tuning data is in English, so performance may vary for Japanese queries.
- **Biases**: Inherits biases from the base model and the training data, which is an official report and may reflect institutional perspectives.
- **Hallucinations**: As with all LLMs, the model may generate inaccurate information; verify outputs against reliable sources.
- **Safety**: Not tested for harmful outputs; use with caution.
## License
This model is licensed under Apache-2.0, following the base model's license.
## Acknowledgments
- Base model by CyberAgent.
- Tutorial and data preparation inspired by poloclub/Fine-tuning-LLMs.
- Fine-tuning powered by Hugging Face Transformers, PEFT, and BitsAndBytes.
|
Ar-Ray/MyGemmaNPC
|
Ar-Ray
| 2025-08-15T08:30:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T08:26:54Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ar-Ray/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
derektan95/search-tta-sound
|
derektan95
| 2025-08-15T08:24:31Z | 581 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clap_audio_model",
"arxiv:2505.11350",
"arxiv:2211.06687",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T16:50:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Search-TTA-Sound
Fine-tuned on `laion/clap-htsat-fused`.
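A rough loading sketch with the 🤗 Transformers CLAP classes is shown below. It assumes this repository exposes the audio tower in the standard `ClapAudioModel` format (as the `clap_audio_model` tag suggests) and reuses the base model's processor; the dummy audio clip is purely illustrative.
```python
# Illustrative only: load the fine-tuned CLAP audio encoder and embed an audio clip.
import numpy as np
import torch
from transformers import ClapAudioModel, ClapProcessor
model = ClapAudioModel.from_pretrained("derektan95/search-tta-sound")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")  # processor from the base model
# Dummy 1-second clip at 48 kHz (CLAP's expected sampling rate); replace with real audio.
audio = np.zeros(48000, dtype=np.float32)
inputs = processor(audios=audio, sampling_rate=48000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.pooler_output.shape)  # pooled audio embedding
```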
## Citation
```
@inproceedings{tan2025searchtta,
title = {Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild},
  author = {Derek Ming Siang Tan and Shailesh and Boyang Liu and Alok Raj and Qi Xuan Ang and Weiheng Dai and Tanishq Duhan and Jimmy Chiun and Yuhong Cao and Florian Shkurti and Guillaume Sartoretti},
booktitle = {Conference on Robot Learning},
year = {2025},
url = {https://arxiv.org/abs/2505.11350}
}
@misc{wu2024largescalecontrastivelanguageaudiopretraining,
title={Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
author={Yusong Wu and Ke Chen and Tianyu Zhang and Yuchen Hui and Marianna Nezhurina and Taylor Berg-Kirkpatrick and Shlomo Dubnov},
year={2024},
eprint={2211.06687},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2211.06687},
}
```
|
JunsuSeo/SPDE_Induced_Evaluation_Metric_Code
|
JunsuSeo
| 2025-08-15T08:20:08Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"en",
"ko",
"region:us"
] | null | 2025-02-24T01:44:41Z |
---
language:
- en
- ko
---
## Overview (개요)
This repository provides a user-friendly implementation of the **SPDE-Induced Evaluation Metric (SIEM)**. The code here is a simplified 'toy example' designed for easy access and quick testing, allowing anyone to get a feel for how SIEM works.
For the comprehensive and rigorous experiments presented in our research paper, please refer to the official repository, which will be linked here upon publication.
이 리포지토리는 SPDE-Induced Evaluation Metric (SIEM)의 사용자 친화적인 구현체를 제공합니다. 여기에 포함된 코드는 누구나 쉽게 접근하고 빠르게 테스트해볼 수 있도록 설계된 간단한 'toy example'입니다.
저희 연구 논문에서 다룬 포괄적이고 엄밀한 실험들은 출판 시 링크될 공식 리포지토리를 참고해 주시기 바랍니다.
---
## Requirement (요구 사항)
* Python ≥ 3.10
* GPU ≥ RTX 4090
* OS ≥ Ubuntu 22.04
## How to Use (사용 방법)
Follow these steps to set up the environment and run the code.
아래 단계에 따라 환경을 설정하고 코드를 실행하세요.
### 1. Environment Setup (환경 설정)
```bash
# Update and upgrade system packages
# 시스템 패키지 업데이트 및 업그레이드
apt-get update
apt-get upgrade
# Install the diffusers library with training dependencies
# 학습 의존성을 포함하여 diffusers 라이브러리 설치
pip install diffusers[training]
# Install Git LFS to handle large files
# 대용량 파일을 다루기 위해 Git LFS 설치
apt -qq install git-lfs
# Upgrade the Pillow library for image processing
# 이미지 처리를 위해 Pillow 라이브러리 업그레이드
pip install --upgrade Pillow
```
### 2. Clone Repository and Download Models (리포지토리 복제 및 모델 다운로드)
```bash
# Clone this repository
# 이 리포지토리를 복제합니다
git clone https://huggingface.co/JunsuSeo/SPDE_Induced_Evaluation_Metric_Code
# Navigate into the repository directory
# 리포지토리 디렉토리로 이동합니다
cd SPDE_Induced_Evaluation_Metric_Code
# Install Git LFS hooks and pull the large model files
# Git LFS를 설정하고 대용량 모델 파일을 다운로드합니다
git lfs install
git lfs pull
```
### 3. Run the Evaluation (평가 실행)
```bash
# Run the script. The number '1' selects the model to use.
# 스크립트를 실행합니다. 숫자 '1'은 사용할 모델을 선택합니다.
python3 SIEM.py 1
```
#### Changing Models (모델 변경하기)
To use other models, change the number at the end of the command. The number can be an integer from 1 to 11.
다른 모델을 사용하려면 명령어의 마지막 숫자를 변경하세요. 1부터 11까지의 정수를 사용할 수 있습니다.
For example, to use model number 5:
예를 들어, 5번 모델을 사용하려면:
```bash
python3 SIEM.py 5
```
## Key Results from the Paper (논문 주요 결과)
Here are some of the key results from our research paper, demonstrating the effectiveness of SIEM.
아래는 저희 연구 논문의 주요 결과 중 일부로, SIEM의 효과를 보여줍니다.
---
<p>
<img src="Experiement0522_single_plot_final_v2_CIFAR.png" alt="Result Figure 1" width="100%">
</p>
*Evolution of SIEM (Trend: light green; Residuals: orange), FID
(blue), and 2-Wasserstein distance (light purple) during U-Net training on
CIFAR-10. Generated samples at various training steps are shown at the bottom.*
*CIFAR-10 데이터셋으로 U-Net을 훈련하는 동안의 SIEM(추세: 연두색, 잔차: 주황색), FID(파란색), 2-바서슈타인(2-Wasserstein) 거리(연보라색)의 변화 추이. 하단은 다양한 학습 단계에서 생성된 샘플입니다.*
<p>
<img src="SIEM_CIFAR10_DIVERSE.png" alt="Result Figure 2" width="100%">
</p>
*The left side of the image displays generated images from various U-Net architectures (UNet-2B, UNet-3B, and UNet-4B), with rows corresponding to different model configurations (e.g., -32K-16G). On the right, two Spearman correlation matrices compare four evaluation metrics: FID, 2-Wasser, Trend, and Residuals. The top matrix shows correlations over the full diffusion range (Timesteps 0-999), while the bottom one focuses on the final steps (Timesteps 901-999). The data reveals a significant positive correlation between the "Residuals" metric and both the "2-Wasser" distance (0.75 and 0.68) and the Fréchet Inception Distance (FID) (0.70 and 0.52) in both scenarios.*
*이미지의 왼쪽에는 다양한 U-Net 아키텍처(UNet-2B, UNet-3B, UNet-4B)로부터 생성된 이미지들이 있으며, 각 행은 서로 다른 모델 구성(예: -32K-16G)을 나타냅니다. 오른쪽에는 FID, 2-Wasser, Trend, Residuals라는 네 가지 평가 지표를 비교하는 두 개의 스피어만 상관관계 행렬이 있습니다. 위쪽 행렬은 전체 확산 범위(타임스텝 0-999)에 대한 상관관계를 보여주고, 아래쪽 행렬은 마지막 단계(타임스텝 901-999)에 초점을 맞춥니다. 이 데이터는 두 시나리오 모두에서 "Residuals" 지표가 "2-Wasser" 거리(0.75 및 0.68)와 프레셰 인셉션 거리(FID)(0.70 및 0.52) 모두와 유의미한 양의 상관관계가 있음을 보여줍니다.*
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755244706
|
Sayemahsjn
| 2025-08-15T08:16:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T08:16:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
svjack/Wan_2_1_safetensors_pth
|
svjack
| 2025-08-15T08:14:17Z | 101 | 2 | null |
[
"gguf",
"region:us"
] | null | 2025-03-29T08:26:17Z |
<!--
https://civitai.com/user/MaqueAI/
-->
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1755243445
|
wasabuko
| 2025-08-15T08:08:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T08:05:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thejaminator/grpo-feature-vector-step-1
|
thejaminator
| 2025-08-15T08:03:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"verl",
"grpo",
"math",
"reasoning",
"rl",
"lora",
"en",
"base_model:google/gemma-2-9b-it",
"base_model:adapter:google/gemma-2-9b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-08-15T08:03:38Z |
---
language: en
license: apache-2.0
tags:
- verl
- grpo
- math
- reasoning
- rl
- lora
- peft
base_model: google/gemma-2-9b-it
library_name: peft
---
# thejaminator/grpo-feature-vector-step-1
This is a LoRA adapter trained using [verl](https://github.com/volcengine/verl) with GRPO (Group Relative Policy Optimization)
on math reasoning tasks.
## Training Details
- **Base model**: google/gemma-2-9b-it
- **Framework**: verl GRPO
- **Training steps**: 1
- **Dataset**: Math reasoning problems
- **Batch size**: 8
- **Learning rate**: 5e-05
- **LoRA rank**: 64
- **LoRA alpha**: 128.0
- **Number of generations**: 16
Generated from verl LoRA checkpoint: `/workspace/verl_outputs_feature_vector/global_step_1/actor/lora_adapter`
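## Usage
Since this repository only contains LoRA adapter weights, a minimal loading sketch using the standard PEFT workflow is given below (this is not part of the original verl training setup; the example prompt is illustrative).
```python
# Illustrative only: attach the GRPO-trained LoRA adapter to the base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model_id = "google/gemma-2-9b-it"
adapter_id = "thejaminator/grpo-feature-vector-step-1"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
prompt = "Solve: If 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```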
|
chavajaz/vit-wonders
|
chavajaz
| 2025-08-15T07:55:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-15T07:43:53Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
model-index:
- name: vit-wonders
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: wonders_md
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9971398855954238
- name: F1
type: f1
value: 0.9975345770827277
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-wonders
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the wonders_md dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0183
- Accuracy: 0.9971
- F1: 0.9975
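A minimal inference sketch with the 🤗 `pipeline` API is shown below (the image path is a placeholder, not part of the original card):
```python
# Illustrative only: classify an image with the fine-tuned ViT checkpoint.
from transformers import pipeline
classifier = pipeline("image-classification", model="chavajaz/vit-wonders")
predictions = classifier("path/to/your_image.jpg")  # local path or URL
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```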
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.0107 | 0.2075 | 100 | 0.0673 | 0.9860 | 0.9878 |
| 0.0508 | 0.4149 | 200 | 0.0596 | 0.9878 | 0.9887 |
| 0.0589 | 0.6224 | 300 | 0.0279 | 0.9940 | 0.9945 |
| 0.0116 | 0.8299 | 400 | 0.0183 | 0.9971 | 0.9975 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
yangpa67/asdf
|
yangpa67
| 2025-08-15T07:54:14Z | 67 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:yangpa67/asdf",
"base_model:finetune:yangpa67/asdf",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T14:09:54Z |
---
base_model: yangpa67/asdf
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** yangpa67
- **License:** apache-2.0
- **Finetuned from model:** yangpa67/asdf
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jonghx/gpt-oss-20b-puzzleocr-reasoner
|
Jonghx
| 2025-08-15T07:49:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T11:37:32Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-puzzleocr-reasoner
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-puzzleocr-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Jonghx/gpt-oss-20b-puzzleocr-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sadsa1231/Reinforce
|
sadsa1231
| 2025-08-15T07:43:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-15T07:43:06Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755242740
|
xinnn32
| 2025-08-15T07:26:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T07:26:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Beable/leroVLA
|
Beable
| 2025-08-15T07:21:07Z | 16 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Beable/lerobot-SOARM100-sim3",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-12T10:56:48Z |
---
base_model: lerobot/smolvla_base
datasets: Beable/lerobot-SOARM100-sim3
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
rawsun00001/transact-minilm-pro
|
rawsun00001
| 2025-08-15T07:19:59Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-14T13:31:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755242067
|
0xaoyama
| 2025-08-15T07:14:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T07:14:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Idrinth/characterbuilderai
|
Idrinth
| 2025-08-15T07:09:31Z | 53 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"safetensors",
"unsloth",
"text-generation",
"en",
"dataset:Idrinth/characterbuilderai",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-03T13:54:56Z |
---
library_name: adapter-transformers
tags:
- unsloth
license: mit
language:
- en
base_model:
- unsloth/mistral-7b-instruct-v0.3-bnb-4bit
pipeline_tag: text-generation
datasets:
- Idrinth/characterbuilderai
---
# Model Card for Idrinth/CharacterBuilderAI
This model is fine-tuned to provide a character builder for mind theatre role play sessions. Currently the tuning is VERY LIMITED, but we are adding more story bits over time to fix that.
## Model Details
### Model Description
This model is trained to create a short and useful introduction scenario based on player input.
### Model Sources
- **Repository:** [GitHub](https://github.com/bjoern-buettner/roleplay-ai)
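### How to Use
The card does not include usage code; below is a rough loading sketch that assumes the repository holds PEFT-style adapter weights for the 4-bit Unsloth base model (the prompt is only an example of player input).
```python
# Illustrative only: load the 4-bit base model and attach the character-builder adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model_id = "unsloth/mistral-7b-instruct-v0.3-bnb-4bit"
adapter_id = "Idrinth/characterbuilderai"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
prompt = "Create an introduction scenario for a shy elven herbalist exploring a ruined city."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```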
|
rayhanfa/large-v3-rra-id-15aug
|
rayhanfa
| 2025-08-15T07:08:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"id",
"dataset:stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-15aug",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-15T04:57:38Z |
---
library_name: transformers
language:
- id
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-15aug
model-index:
- name: Whisper Large v3 - 15aug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v3 - 15aug
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-15aug dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2227
- Wer Raw: 24.3460
- Wer Normalized: 16.9003
- Cer Normalized: 10.6285
- Wer Mapping: 16.5789
- Cer Mapping: 10.5587
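A minimal transcription sketch with the 🤗 `pipeline` API is shown below (the audio path is a placeholder, and the chunking and language arguments are illustrative rather than taken from this card):
```python
# Illustrative only: transcribe Indonesian speech with the fine-tuned Whisper checkpoint.
from transformers import pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="rayhanfa/large-v3-rra-id-15aug",
    chunk_length_s=30,
)
result = asr("path/to/audio.wav", generate_kwargs={"language": "indonesian"})
print(result["text"])
```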
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Raw | Wer Normalized | Cer Normalized | Wer Mapping | Cer Mapping |
|:-------------:|:------:|:----:|:---------------:|:-------:|:--------------:|:--------------:|:-----------:|:-----------:|
| 0.2083 | 0.4981 | 400 | 0.2272 | 26.2185 | 18.5903 | 11.4414 | 18.2921 | 11.3819 |
| 0.1793 | 0.9963 | 800 | 0.2137 | 28.8211 | 20.1743 | 12.7298 | 19.6043 | 12.5293 |
| 0.1027 | 1.4944 | 1200 | 0.2147 | 26.7897 | 18.8521 | 11.8856 | 18.5671 | 11.8423 |
| 0.0857 | 1.9925 | 1600 | 0.2105 | 25.9346 | 18.6930 | 11.5579 | 18.4147 | 11.5009 |
| 0.0336 | 2.4907 | 2000 | 0.2209 | 24.9713 | 17.5829 | 11.0662 | 17.2781 | 10.9933 |
| 0.0323 | 2.9888 | 2400 | 0.2227 | 24.3460 | 16.9003 | 10.6285 | 16.5789 | 10.5587 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4
|
rinyoa/blockassist-bc-dextrous_omnivorous_ostrich_1755241632
|
rinyoa
| 2025-08-15T07:07:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dextrous omnivorous ostrich",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T07:07:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dextrous omnivorous ostrich
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChaosMon/corgy_CatText2Image_LoRA
|
ChaosMon
| 2025-08-15T06:57:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-08-15T03:59:15Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a cartoon cat, kawaii style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - ChaosMon/corgy_CatText2Image_LoRA
<Gallery />
## Model description
These are ChaosMon/corgy_CatText2Image_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a cartoon cat, kawaii style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](ChaosMon/corgy_CatText2Image_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
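As a starting point, here is a hedged sketch of how these LoRA weights could be loaded with `diffusers` (standard SDXL + LoRA loading, using the special VAE and trigger phrase mentioned above; this snippet is not taken from the training script):
```python
# Illustrative only: load SDXL with the fp16-fix VAE and apply the DreamBooth LoRA weights.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("ChaosMon/corgy_CatText2Image_LoRA")
# Use the trigger phrase from the card to activate the learned concept.
image = pipe("a cartoon cat, kawaii style, sitting on a windowsill").images[0]
image.save("cat.png")
```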
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
phospho-app/z1c0-gr00t-pick_and_place-tku91
|
phospho-app
| 2025-08-15T06:56:04Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"gr00t_n1_5",
"gr00t",
"robotics",
"dataset:z1c0/pick_and_place",
"region:us"
] |
robotics
| 2025-08-15T06:33:38Z |
---
datasets: z1c0/pick_and_place
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [z1c0/pick_and_place](https://huggingface.co/datasets/z1c0/pick_and_place)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 107
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
tpayne52939/blockassist-bc-flightless_unseen_parrot_1755239786
|
tpayne52939
| 2025-08-15T06:53:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless unseen parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T06:53:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless unseen parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zarude/blockassist-bc-rabid_timid_rat_1755240448
|
zarude
| 2025-08-15T06:48:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rabid timid rat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T06:48:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid timid rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755239232
|
Sayemahsjn
| 2025-08-15T06:45:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T06:45:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mdemirci10/hf_tokenizer
|
mdemirci10
| 2025-08-15T06:44:00Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T06:43:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmirHaz/Affine-5F4tviGRdJE1vNZU9xVbuZsi58EHdqkNKEqxcnib7E3bHvTn
|
AmirHaz
| 2025-08-15T06:35:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-15T06:18:20Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-4b-pt
---
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
### Usage
Below, there are some code snippets on how to get quickly started with running the model. First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.
```sh
$ pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
You can initialize the model and processor for inference with `pipeline` as follows.
```python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="google/gemma-3-4b-it",
device="cuda",
torch_dtype=torch.bfloat16
)
```
With instruction-tuned models, you need to use chat templates to process our inputs first. Then, you can pass it to the pipeline.
```python
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
{"type": "text", "text": "What animal is on the candy?"}
]
}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```
#### Running the model on a single/multi GPU
```python
# pip install accelerate
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/gemma-3-4b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(
model_id, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
{"type": "text", "text": "Describe this image in detail."}
]
}
]
inputs = processor.apply_chat_template(
messages, add_generation_prompt=True, tokenize=True,
return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens and
1B with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
foundation models, including large language models like these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
covering safety policies including harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
prompts covering safety policies including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development-level evaluations, we conduct "assurance
evaluations", which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team to inform decision making about release. High-level findings
are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included only
English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny; their input data pre-processing is described
and posterior evaluations are reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: It's encouraged to perform continuous
monitoring (using evaluation metrics, human review) and the exploration of
de-biasing techniques during model training, fine-tuning, and other use
cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate against malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
|
zarude/blockassist-bc-rabid_timid_rat_1755239338
|
zarude
| 2025-08-15T06:29:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rabid timid rat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T06:29:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid timid rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts-down_pnas_layer_10_4_all_37_0.001_10240_1
|
winnieyangwannan
| 2025-08-15T06:21:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T22:00:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Liamoonchris/unsloth-llama-3.3-70b-4bit-dpo-grpo-august11-v2
|
Liamoonchris
| 2025-08-15T06:19:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T03:31:50Z |
---
base_model: unsloth/Llama-3.3-70B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Liamoonchris
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.3-70B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rxmha125/Rx_Codex_V1_Tiny
|
rxmha125
| 2025-08-15T06:17:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"causal-lm",
"rxcodex",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T06:02:02Z |
---
library_name: transformers
tags:
- text-generation
- causal-lm
- rxcodex
---
# Model Card for Rx_Codex_V1_Tiny
This is `Rx_Codex_V1_Tiny`, a ~60M parameter Causal Language Model trained from scratch. It is the first foundational model in the `Rx_Codex_V1` family from Rx Codex Ai, designed to be a small, agile, and capable assistant.
## The Story of This Model
This model represents a journey of persistence, debugging, and discovery. As a solo AI builder, my goal was to create a new model from the ground up, learning from every challenge along the way.
The process began with an idea to build a "next-gen" model. After initial experiments with a larger 355M parameter architecture, we faced a series of incredibly stubborn bugs that resulted in the model failing to learn (a "zero loss" error). This led to a complete reset.
We went back to first principles, abandoning all previous code templates. The plan was to build the simplest, most stable model possible to prove the process could work. This meant:
1. Starting with a smaller, more manageable **~60M parameter architecture**.
2. Using the standard, reliable **`gpt2` tokenizer**.
3. Using the official, battle-tested model classes from the Hugging Face `transformers` library (`GPT2LMHeadModel`) instead of a custom implementation.
4. Starting with the most stable optimizer (`torch.optim.AdamW`) and precision (`FP32`) before re-introducing optimizations like `fp16`.
This "back-to-basics" approach was a massive success. The model came to life on the very first run, and from there, we began the long process of training. Our workflow, which I call the `MBN` -> `N(X)` system, involved an initial Model Building Notebook (`MBN`) followed by a series of sequential training notebooks (`N1`, `N2`, `N3`...) to continuously train the model on new chunks of data.
This model is the result of that journey.
## Model Details
### Model Description
- **Developed by:** Rx at Rx Codex Ai
- **Model type:** Causal Language Model (Decoder-only Transformer)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** This model was trained **from scratch**.
### Model Sources
- **Repository:** [https://huggingface.co/rxmha125/Rx_Codex_V1_Tiny](https://huggingface.co/rxmha125/Rx_Codex_V1_Tiny)
- **Demo:** A demo space will be created soon.
## Uses
### Direct Use
This model is intended as a general-purpose conversational assistant. It has shown emergent abilities in instruction following, simple reasoning, creative writing, and basic code generation. It can be used directly in chat applications. The prompt format it was trained on is:
```
### Human:
Your prompt here.
### Assistant:
```
### Downstream Use
`Rx_Codex_V1_Tiny` serves as a strong foundation for further fine-tuning on more specialized tasks. It can be adapted for roles like:
- Customer service chatbots
- Text summarization tools
- Simple code completion assistants
### Out-of-Scope Use
The model should not be used for critical applications where factual accuracy is required. Its knowledge is still unstable, and it is prone to hallucination. It is not designed to give medical, legal, or financial advice.
## Bias, Risks, and Limitations
This model was trained on a large, filtered web dataset but may still contain biases present in the source data. Its factual recall is inconsistent; it may generate correct facts in one instance and incorrect ones in another. The model can hallucinate facts and generate repetitive or nonsensical text, especially for very complex or ambiguous prompts. Users should be aware of these limitations.
## How to Get Started with the Model
Use the code below to get started with the model using the `transformers` library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# The repository ID for your model on the Hugging Face Hub
repo_id = "rxmha125/Rx_Codex_V1_Tiny"
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
# Prepare the prompt
prompt_text = "### Human:\nWrite a short poem about a robot learning to paint.\n\n### Assistant:"
inputs = tokenizer(prompt_text, return_tensors="pt")
# Generate text
output_sequences = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
max_new_tokens=100,
do_sample=True,
temperature=0.7,
top_k=50,
pad_token_id=tokenizer.eos_token_id
)
# Decode and print the output
print(tokenizer.decode(output_sequences[0], skip_special_tokens=True))
```
## Training Details
### Training Data
The model was trained on a series of datasets in stages. All data is in English.
1. **Main Pre-training:** The majority of training was done on the `rxcodex-dataset-v1`.
- URL: [https://huggingface.co/datasets/rxmha125/rxcodex-dataset-v1](https://huggingface.co/datasets/rxmha125/rxcodex-dataset-v1)
2. **Specialized Training:** The model was briefly trained on specialized math datasets.
- URL 1: [https://huggingface.co/datasets/rxmha125/rxcodex-math-dataset-v2](https://huggingface.co/datasets/rxmha125/rxcodex-math-dataset-v2)
- URL 2: [https://huggingface.co/datasets/rxmha125/rxcodex-math-dataset-large-v2](https://huggingface.co/datasets/rxmha125/rxcodex-math-dataset-large-v2)
3. **Final Fine-Tuning:** The final stage of training was done on a high-quality instruction and conversation dataset.
- URL: [https://huggingface.co/datasets/rxmha125/rxcodex-finetune-dataset-v1](https://huggingface.co/datasets/rxmha125/rxcodex-finetune-dataset-v1)
### Training Procedure
#### Preprocessing
The training data was processed using a "concatenate and chunk" strategy. A large number of rows from the dataset were fetched, their text was concatenated into a single stream, and this stream was then chunked into fixed-length sequences of 1024 tokens. This ensures that every training sample is a full-length, high-density piece of text with no padding.
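A minimal sketch of this concatenate-and-chunk step, assuming a plain list of text rows and the standard `gpt2` tokenizer (the names here are illustrative, not the exact training code):
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
BLOCK_SIZE = 1024 # fixed sequence length used for training
def concat_and_chunk(texts):
# Concatenate all rows into one token stream, separating documents with EOS.
ids = []
for text in texts:
ids.extend(tokenizer(text)["input_ids"])
ids.append(tokenizer.eos_token_id)
# Drop the trailing remainder so every sample is a full 1024-token block.
n_full = (len(ids) // BLOCK_SIZE) * BLOCK_SIZE
return [ids[i:i + BLOCK_SIZE] for i in range(0, n_full, BLOCK_SIZE)]
```
Each returned chunk can then be fed to the model as both `input_ids` and labels, which is why no padding tokens are needed.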
#### Training Hyperparameters
- **Total Tokens Trained:** ~693 Million
- **Training regime:** `fp16 mixed precision`
- **Optimizer:** `torch.optim.AdamW`
- **Learning Rate:** `3e-4` for main training, lowered to `5e-5` and `5e-6` during fine-tuning.
- **Gradient Accumulation:** 16 steps
- **Effective Batch Size:** 16
## Evaluation
The model was evaluated qualitatively after each of its 17 major training runs (`MBN` + `N1`-`N16`).
### Results
The model shows strong emergent abilities in a variety of areas. The final validation loss after the last fine-tuning run was **4.03**, corresponding to a perplexity of **56.41**. While the quantitative metrics are still improving, the qualitative results show a model that can follow instructions, generate code snippets, adopt a persona, and recall facts. Its factual and reasoning abilities are still under development and can be unstable.
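For reference, the reported perplexity appears to be the exponential of the validation cross-entropy loss (a standard convention, assumed rather than stated here), so the two figures agree up to rounding of the reported loss:
```latex
\mathrm{PPL} = e^{\mathcal{L}_{\text{val}}}, \qquad e^{4.03} \approx 56.3
```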
## Our Journey: The Training Notebooks
The entire process of building this model, from the initial failures to the final success, has been documented in a series of Google Colab notebooks (`MBN`, `N1`, `N2`, etc.). These notebooks are the "birth certificate" of this AI and will be shared in the future as a showcase of the building process.
## What's Next? The Rx Codex V1 Family
This `Tiny` model is just the first step. The experience and knowledge gained from this project will be the foundation for the next models in the `Rx_Codex_V1` family.
1. **`Rx_Codex_V1_Tiny_V2`:** The next project will be a short, experimental run to build and validate a new, more powerful **custom tokenizer** trained specifically on our data domains.
2. **`Rx_Codex_V1_Small`:** Once the new tokenizer is perfected, we will begin building the next model in the family: a **125 Million parameter** model that will be trained from scratch using our new tokenizer and the lessons learned from this project.
## Model Card Authors
Rx (rxmha125)
## Model Card Contact
[https://www.rxcodexai.com/](https://www.rxcodexai.com/)
|
alexgeezy429/blockassist-bc-scented_coiled_antelope_1755236628
|
alexgeezy429
| 2025-08-15T06:17:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scented coiled antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T06:17:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scented coiled antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/etadpu_faiuD5
|
VoilaRaj
| 2025-08-15T06:15:41Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-15T06:11:23Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755236924
|
elmenbillion
| 2025-08-15T06:14:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T06:14:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Zi-yangChen/PRIME_Single
|
Zi-yangChen
| 2025-08-15T06:13:12Z | 0 | 0 | null |
[
"safetensors",
"gpt_neo",
"region:us"
] | null | 2025-08-15T01:44:32Z |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## PRIME: Plant Regulatory Information Mining Engine
This repository hosts a foundation DNA language model specifically pre-trained on plant cis-regulatory elements (CREs).
The model is designed to capture sequence patterns and motif features of CREs, enabling downstream applications such as regulatory element identification, functional prediction and de novo design.
GitHub repository: [PRIME](https://github.com/Zi-yangChen/PRIME)
## Model Highlights
- Domain-specific pre-training: trained exclusively on plant cis-regulatory element sequences.
- Model size: ~90 MB of parameters.
- Tokenizer: single-base (one token per nucleotide).
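A minimal loading sketch, assuming the checkpoint and its single-base tokenizer load through the standard `transformers` Auto classes (not verified against this repository's exact layout):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
repo_id = "Zi-yangChen/PRIME_Single"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
# With single-base tokenization, each nucleotide maps to one token.
sequence = "ATGCGTACGTTAGC"
inputs = tokenizer(sequence, return_tensors="pt")
print(inputs["input_ids"].shape) # roughly one token per base, plus any special tokens
```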
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1
- Datasets 3.6.0
- Tokenizers 0.20.1
|
hejazizo/merged-checkpoint-1000
|
hejazizo
| 2025-08-15T06:13:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T06:10:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EYEDOL/FROM_C3_NEW1
|
EYEDOL
| 2025-08-15T06:08:29Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sw",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:EYEDOL/SALAMA_C3",
"base_model:finetune:EYEDOL/SALAMA_C3",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-14T18:12:35Z |
---
library_name: transformers
language:
- sw
base_model: EYEDOL/SALAMA_C3
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: ASR_FROM_C3_NEW
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13.0
type: mozilla-foundation/common_voice_13_0
config: sw
split: None
args: 'config: sw, split: test'
metrics:
- name: Wer
type: wer
value: 15.162401781587462
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR_FROM_C3_NEW
This model is a fine-tuned version of [EYEDOL/SALAMA_C3](https://huggingface.co/EYEDOL/SALAMA_C3) on the Common Voice 13.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1985
- Wer: 15.1624
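A minimal transcription sketch, assuming this checkpoint (repository `EYEDOL/FROM_C3_NEW1`) loads with the standard `transformers` ASR pipeline; the audio path is a placeholder:
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="EYEDOL/FROM_C3_NEW1")
# Transcribe a local Swahili audio file (path is illustrative).
result = asr("sample_swahili_audio.wav")
print(result["text"])
```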
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1562 | 0.6918 | 2000 | 0.1985 | 15.1624 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
zarude/blockassist-bc-rabid_timid_rat_1755237905
|
zarude
| 2025-08-15T06:06:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rabid timid rat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T06:05:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid timid rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755237861
|
Ferdi3425
| 2025-08-15T06:05:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T06:05:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Zarnabh/whisper-small-ps
|
Zarnabh
| 2025-08-15T06:05:09Z | 46 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ps",
"dataset:ihanif/common_voice_ps_20_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-07-25T06:12:53Z |
---
library_name: transformers
language:
- ps
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- ihanif/common_voice_ps_20_0
model-index:
- name: Whisper small Ps - ZFA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small Ps - ZFA
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 20.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1013 | 0.7418 | 500 | 0.7730 |
| 0.5236 | 1.4837 | 1000 | 0.6569 |
| 0.3063 | 2.2255 | 1500 | 0.6331 |
| 0.1311 | 2.9674 | 2000 | 0.6040 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
dgambettaphd/M_llm3_run2_gen4_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-08-15T05:59:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T05:59:34Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VoilaRaj/etadpu_CywKtG
|
VoilaRaj
| 2025-08-15T05:54:27Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-15T05:50:10Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
masato-ka/act-gym-soarm-pick-and-place-sample
|
masato-ka
| 2025-08-15T05:53:53Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:masato-ka/gym-soarm-pick-and-place-sample",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-15T05:53:45Z |
---
datasets: masato-ka/gym-soarm-pick-and-place-sample
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- act
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Marmik/cc-moe-dense-l3-k300
|
Marmik
| 2025-08-15T05:42:05Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-15T05:38:26Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755236473
|
0xaoyama
| 2025-08-15T05:41:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-15T05:41:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qualcomm/Yolo-v6
|
qualcomm
| 2025-08-15T05:23:39Z | 46 | 0 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"object-detection",
"arxiv:2209.02976",
"license:other",
"region:us"
] |
object-detection
| 2024-02-25T22:53:12Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: object-detection
---

# Yolo-v6: Optimized for Mobile Deployment
## Real-time object detection optimized for mobile and edge
YoloV6 is a machine learning model that predicts bounding boxes and classes of objects in an image.
This model is an implementation of Yolo-v6 found [here](https://github.com/meituan/YOLOv6/).
This repository provides scripts to run Yolo-v6 on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/yolov6).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Model_use_case.object_detection
- **Model Stats:**
- Model checkpoint: YoloV6-N
- Input resolution: 640x640
- Number of parameters: 4.68M
- Model size (float): 17.9 MB
- Model size (w8a8): 4.68 MB
- Model size (w8a16): 5.03 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Yolo-v6 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 22.166 ms | 0 - 43 MB | NPU | -- |
| Yolo-v6 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 14.869 ms | 0 - 74 MB | NPU | -- |
| Yolo-v6 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 12.435 ms | 0 - 42 MB | NPU | -- |
| Yolo-v6 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 7.409 ms | 5 - 40 MB | NPU | -- |
| Yolo-v6 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 10.718 ms | 0 - 15 MB | NPU | -- |
| Yolo-v6 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 4.501 ms | 5 - 33 MB | NPU | -- |
| Yolo-v6 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 12.557 ms | 0 - 43 MB | NPU | -- |
| Yolo-v6 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 6.25 ms | 2 - 69 MB | NPU | -- |
| Yolo-v6 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 22.166 ms | 0 - 43 MB | NPU | -- |
| Yolo-v6 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 14.869 ms | 0 - 74 MB | NPU | -- |
| Yolo-v6 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 10.987 ms | 0 - 22 MB | NPU | -- |
| Yolo-v6 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 4.513 ms | 4 - 35 MB | NPU | -- |
| Yolo-v6 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 13.624 ms | 0 - 29 MB | NPU | -- |
| Yolo-v6 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 6.94 ms | 4 - 32 MB | NPU | -- |
| Yolo-v6 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 10.779 ms | 0 - 15 MB | NPU | -- |
| Yolo-v6 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 4.523 ms | 5 - 31 MB | NPU | -- |
| Yolo-v6 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 12.557 ms | 0 - 43 MB | NPU | -- |
| Yolo-v6 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 6.25 ms | 2 - 69 MB | NPU | -- |
| Yolo-v6 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 10.745 ms | 0 - 16 MB | NPU | -- |
| Yolo-v6 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 4.549 ms | 3 - 34 MB | NPU | -- |
| Yolo-v6 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 5.639 ms | 0 - 49 MB | NPU | -- |
| Yolo-v6 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 7.401 ms | 0 - 58 MB | NPU | -- |
| Yolo-v6 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 3.225 ms | 5 - 111 MB | NPU | -- |
| Yolo-v6 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 4.108 ms | 2 - 151 MB | NPU | -- |
| Yolo-v6 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 7.287 ms | 0 - 51 MB | NPU | -- |
| Yolo-v6 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 2.949 ms | 5 - 76 MB | NPU | -- |
| Yolo-v6 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 4.118 ms | 5 - 102 MB | NPU | -- |
| Yolo-v6 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 4.995 ms | 0 - 0 MB | NPU | -- |
| Yolo-v6 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 6.154 ms | 6 - 6 MB | NPU | -- |
| Yolo-v6 | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 5.183 ms | 2 - 28 MB | NPU | -- |
| Yolo-v6 | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.751 ms | 2 - 33 MB | NPU | -- |
| Yolo-v6 | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 2.156 ms | 2 - 12 MB | NPU | -- |
| Yolo-v6 | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 2.744 ms | 0 - 26 MB | NPU | -- |
| Yolo-v6 | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 8.589 ms | 0 - 29 MB | NPU | -- |
| Yolo-v6 | w8a16 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 5.183 ms | 2 - 28 MB | NPU | -- |
| Yolo-v6 | w8a16 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 2.186 ms | 3 - 12 MB | NPU | -- |
| Yolo-v6 | w8a16 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 3.313 ms | 1 - 32 MB | NPU | -- |
| Yolo-v6 | w8a16 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 2.173 ms | 4 - 12 MB | NPU | -- |
| Yolo-v6 | w8a16 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 2.744 ms | 0 - 26 MB | NPU | -- |
| Yolo-v6 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 2.184 ms | 3 - 12 MB | NPU | -- |
| Yolo-v6 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 5.802 ms | 0 - 34 MB | NPU | -- |
| Yolo-v6 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.452 ms | 2 - 41 MB | NPU | -- |
| Yolo-v6 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 4.183 ms | 2 - 101 MB | NPU | -- |
| Yolo-v6 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.334 ms | 2 - 34 MB | NPU | -- |
| Yolo-v6 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 3.94 ms | 2 - 102 MB | NPU | -- |
| Yolo-v6 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 2.587 ms | 8 - 8 MB | NPU | -- |
| Yolo-v6 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 6.284 ms | 3 - 3 MB | NPU | -- |
| Yolo-v6 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 4.421 ms | 0 - 20 MB | NPU | -- |
| Yolo-v6 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 3.209 ms | 1 - 22 MB | NPU | -- |
| Yolo-v6 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 2.273 ms | 0 - 33 MB | NPU | -- |
| Yolo-v6 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.674 ms | 1 - 32 MB | NPU | -- |
| Yolo-v6 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.078 ms | 0 - 29 MB | NPU | -- |
| Yolo-v6 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.358 ms | 1 - 26 MB | NPU | -- |
| Yolo-v6 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.479 ms | 0 - 22 MB | NPU | -- |
| Yolo-v6 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.694 ms | 1 - 24 MB | NPU | -- |
| Yolo-v6 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 4.415 ms | 0 - 30 MB | NPU | -- |
| Yolo-v6 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 4.897 ms | 1 - 27 MB | NPU | -- |
| Yolo-v6 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 36.109 ms | 3 - 12 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 4.421 ms | 0 - 20 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 3.209 ms | 1 - 22 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.066 ms | 0 - 30 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.365 ms | 0 - 27 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 2.992 ms | 0 - 27 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 2.311 ms | 1 - 28 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.065 ms | 0 - 29 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.362 ms | 1 - 27 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.479 ms | 0 - 22 MB | NPU | -- |
| Yolo-v6 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.694 ms | 1 - 24 MB | NPU | -- |
| Yolo-v6 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.062 ms | 0 - 29 MB | NPU | -- |
| Yolo-v6 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.364 ms | 1 - 28 MB | NPU | -- |
| Yolo-v6 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 4.197 ms | 0 - 42 MB | NPU | -- |
| Yolo-v6 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.353 ms | 0 - 36 MB | NPU | -- |
| Yolo-v6 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.892 ms | 1 - 33 MB | NPU | -- |
| Yolo-v6 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 2.936 ms | 0 - 121 MB | NPU | -- |
| Yolo-v6 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.356 ms | 0 - 27 MB | NPU | -- |
| Yolo-v6 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.815 ms | 1 - 25 MB | NPU | -- |
| Yolo-v6 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 3.154 ms | 0 - 104 MB | NPU | -- |
| Yolo-v6 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.694 ms | 18 - 18 MB | NPU | -- |
| Yolo-v6 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 4.691 ms | 4 - 4 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov6]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov6.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you are running this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.yolov6.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov6.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov6/qai_hub_models/models/Yolo-v6/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace`, and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov6 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
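Before moving on, you may want to block until compilation finishes and keep a local copy of the compiled asset. A minimal sketch; `wait()` and `download_target_model()` are assumed to exist on the compile job in your version of the `qai_hub` client, so check its documentation if they differ:
```python
# Wait for the compile job to complete (method names assumed; see qai_hub docs).
compile_job.wait()

# Save the compiled asset locally, e.g. for inspection or manual deployment.
compile_job.download_target_model("yolov6_target_model")
```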
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
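The raw profiling results can also be pulled down programmatically instead of only viewing them in the web dashboard. A minimal sketch, assuming `download_profile()` returns a JSON-serializable dictionary (its exact structure may vary between AI Hub releases):
```python
import json

# Download the profiling results once the job completes and pretty-print a preview.
profile = profile_job.download_profile()
print(json.dumps(profile, indent=2)[:1000])
```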
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative error, or spot-check the output against the expected output.
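For example, a hedged sketch of a maximum relative-error check. It assumes the PyTorch model returns a tuple of tensors in the same order as the on-device output names, and that `on_device_output` maps each output name to a list of numpy arrays:
```python
import numpy as np
import torch

# Reference outputs from the PyTorch model on the same sample inputs.
with torch.no_grad():
    torch_outputs = torch_model(*[torch.tensor(d[0]) for _, d in input_data.items()])

# Compare each on-device output against its PyTorch counterpart (ordering assumed).
for (name, device_arrays), torch_out in zip(on_device_output.items(), torch_outputs):
    device_arr = np.asarray(device_arrays[0], dtype=np.float32)
    ref = torch_out.detach().cpu().numpy().astype(np.float32)
    rel_err = np.abs(device_arr - ref).max() / (np.abs(ref).max() + 1e-8)
    print(f"{name}: max relative error = {rel_err:.4f}")
```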
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov6.demo --eval-mode on-device
```
**NOTE**: To run this in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.yolov6.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
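If you only need one of these runtimes, the export script can usually be pointed at it directly. A hedged example, assuming the script exposes a `--target-runtime` flag (run the command with `--help` to confirm the exact option name and accepted values):
```bash
# Export only the TFLite asset (flag name assumed; check --help).
python -m qai_hub_models.models.yolov6.export --target-runtime tflite
```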
## View on Qualcomm® AI Hub
Get more details on Yolo-v6's performance across various devices [here](https://aihub.qualcomm.com/models/yolov6).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Yolo-v6 can be found
[here](https://github.com/meituan/YOLOv6/blob/47625514e7480706a46ff3c0cd0252907ac12f22/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/meituan/YOLOv6/blob/47625514e7480706a46ff3c0cd0252907ac12f22/LICENSE)
## References
* [YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications](https://arxiv.org/abs/2209.02976)
* [Source Model Implementation](https://github.com/meituan/YOLOv6/)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/Yolo-v5
|
qualcomm
| 2025-08-15T05:23:35Z | 5 | 0 |
pytorch
|
[
"pytorch",
"real_time",
"android",
"object-detection",
"license:other",
"region:us"
] |
object-detection
| 2025-01-23T02:39:47Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: object-detection
---

# Yolo-v5: Optimized for Mobile Deployment
## Real-time object detection optimized for mobile and edge
YoloV5 is a machine learning model that predicts bounding boxes and classes of objects in an image.
This model is an implementation of Yolo-v5 found [here](https://github.com/ultralytics/yolov5).
This repository provides scripts to run Yolo-v5 on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/yolov5).
**WARNING**: The model assets are not readily available for download due to licensing restrictions.
### Model Details
- **Model Type:** Object detection
- **Model Stats:**
- Model checkpoint: YoloV5-M
- Input resolution: 640x640
- Number of parameters: 21.2M
- Model size (float): 81.1 MB
- Model size (w8a16): 21.8 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
|---|---|---|---|---|---|---|---|---|
| Yolo-v5 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 70.025 ms | 1 - 117 MB | NPU | -- |
| Yolo-v5 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 64.424 ms | 4 - 136 MB | NPU | -- |
| Yolo-v5 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 38.875 ms | 1 - 89 MB | NPU | -- |
| Yolo-v5 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 34.534 ms | 5 - 60 MB | NPU | -- |
| Yolo-v5 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 22.132 ms | 0 - 16 MB | NPU | -- |
| Yolo-v5 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 20.564 ms | 5 - 36 MB | NPU | -- |
| Yolo-v5 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 26.727 ms | 1 - 119 MB | NPU | -- |
| Yolo-v5 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 24.754 ms | 0 - 128 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 22.252 ms | 0 - 23 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 20.694 ms | 5 - 42 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 23.313 ms | 0 - 118 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 16.694 ms | 1 - 144 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 15.396 ms | 5 - 146 MB | NPU | -- |
| Yolo-v5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 17.848 ms | 5 - 146 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 13.388 ms | 1 - 118 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 13.676 ms | 5 - 134 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 15.889 ms | 5 - 135 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 23.176 ms | 5 - 5 MB | NPU | -- |
| Yolo-v5 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 24.895 ms | 39 - 39 MB | NPU | -- |
| Yolo-v5 | w8a16 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 24.615 ms | 2 - 70 MB | NPU | -- |
| Yolo-v5 | w8a16 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 16.173 ms | 2 - 89 MB | NPU | -- |
| Yolo-v5 | w8a16 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 11.769 ms | 2 - 25 MB | NPU | -- |
| Yolo-v5 | w8a16 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 12.178 ms | 2 - 74 MB | NPU | -- |
| Yolo-v5 | w8a16 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 55.582 ms | 2 - 84 MB | NPU | -- |
| Yolo-v5 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 11.736 ms | 2 - 26 MB | NPU | -- |
| Yolo-v5 | w8a16 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 18.134 ms | 0 - 89 MB | NPU | -- |
| Yolo-v5 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 7.936 ms | 2 - 85 MB | NPU | -- |
| Yolo-v5 | w8a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 12.102 ms | 2 - 245 MB | NPU | -- |
| Yolo-v5 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 5.536 ms | 2 - 82 MB | NPU | -- |
| Yolo-v5 | w8a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 10.371 ms | 0 - 204 MB | NPU | -- |
| Yolo-v5 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 13.36 ms | 2 - 2 MB | NPU | -- |
| Yolo-v5 | w8a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 20.682 ms | 21 - 21 MB | NPU | -- |
## Installation
Install the package via pip:
```bash
pip install "qai-hub-models[yolov5]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.yolov5.demo
```
The above demo runs a reference implementation of pre-processing, model inference, and post-processing.
**NOTE**: To run this in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.yolov5.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.yolov5.export
```
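The target device can typically be overridden from the command line. A hedged example, assuming the export script exposes a `--device` flag (run it with `--help` to confirm the exact option name):
```bash
# Export against a different cloud-hosted device (flag name assumed; check --help).
python -m qai_hub_models.models.yolov5.export --device "Samsung Galaxy S23"
```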
## How does this work?
This [export script](https://aihub.qualcomm.com/models/yolov5/qai_hub_models/models/Yolo-v5/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace`, and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.yolov5 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
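Before submitting the compile job, it can be useful to confirm that the input specification matches the 640x640 resolution listed in the model stats above:
```python
# Print the expected input names, shapes, and dtypes for this model.
print(torch_model.get_input_spec())
```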
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
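The same compiled model can be profiled on several devices by submitting one job per device. A minimal sketch; the `url` attribute on the returned job is assumed here, and the job link is also printed when the job is submitted:
```python
# Submit one profile job per device and collect links for comparison.
devices_to_try = [
    hub.Device("Samsung Galaxy S24"),
    hub.Device("Samsung Galaxy S23"),
]
profile_jobs = [
    hub.submit_profile_job(model=target_model, device=d) for d in devices_to_try
]
for job in profile_jobs:
    print(job.url)  # attribute assumed; see qai_hub docs for your client version
```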
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics such as PSNR or relative error, or spot-check the output against the expected output.
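For example, a hedged PSNR spot check on the first output tensor. It assumes `on_device_output` maps output names to lists of numpy arrays and that the PyTorch model returns its outputs in the same order:
```python
import numpy as np
import torch

# On-device result for the first output (structure assumed).
name, device_arrays = next(iter(on_device_output.items()))
device_arr = np.asarray(device_arrays[0], dtype=np.float32)

# Matching reference output from the PyTorch model on the same sample inputs.
with torch.no_grad():
    ref = torch_model(*[torch.tensor(d[0]) for _, d in input_data.items()])[0]
ref = ref.detach().cpu().numpy().astype(np.float32)

# PSNR with the reference peak value as the signal maximum.
mse = np.mean((device_arr - ref) ** 2)
psnr = 10 * np.log10((np.abs(ref).max() ** 2) / (mse + 1e-12))
print(f"{name}: PSNR = {psnr:.2f} dB")
```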
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.yolov5.demo --eval-mode on-device
```
**NOTE**: To run this in a Jupyter Notebook or Google Colab-like environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.yolov5.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on Yolo-v5's performance across various devices [here](https://aihub.qualcomm.com/models/yolov5).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of Yolo-v5 can be found
[here](https://github.com/ultralytics/yolov5?tab=AGPL-3.0-1-ov-file#readme).
* The license for the compiled assets for on-device deployment can be found [here](https://github.com/ultralytics/yolov5?tab=AGPL-3.0-1-ov-file#readme)
## References
* [Source Model Implementation](https://github.com/ultralytics/yolov5)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|