modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
luyotw/openfun-ivod-whisper-small-LaiShiBao-11-124
|
luyotw
| 2025-06-19T02:38:00Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-19T02:14:43Z |
# Fine-tune Information
- Base model: `openai/whisper-small`
- Number of audio clips used: 22318
- Total audio duration: 11.74 hours
- Average audio length: 1.89 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 04:52:19
- Model size: 0.90 GB
---
# Model Card
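The card body is otherwise empty. As a minimal sketch, assuming the checkpoint follows standard Whisper conventions (the card does not document an API), it could presumably be loaded with the transformers ASR pipeline:
```python
# Hedged sketch (not from the original card): loading this fine-tuned
# Whisper checkpoint with the standard transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="luyotw/openfun-ivod-whisper-small-LaiShiBao-11-124",
)

# "sample.wav" is a placeholder path to a local audio file.
print(asr("sample.wav")["text"])
```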
|
minhxle/truesight-ft-job-c09a3b6f-6df1-419b-93c8-fa8ef1e98977
|
minhxle
| 2025-06-19T02:37:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:37:51Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
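The card does not document inference. A hedged sketch, assuming the repository contains weights loadable with the standard transformers API (if it only holds LoRA adapters, they would need to be applied to the base model via PEFT instead):
```python
# Hedged sketch, assuming directly loadable weights (not documented in the
# original card); a LoRA-only repo would instead require PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minhxle/truesight-ft-job-c09a3b6f-6df1-419b-93c8-fa8ef1e98977"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```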
|
snowman477342/pass-finetuned-qwen3-rec
|
snowman477342
| 2025-06-19T02:32:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:adapter:Qwen/Qwen3-8B-Base",
"region:us"
] | null | 2025-06-19T02:31:46Z |
---
base_model: Qwen/Qwen3-8B-Base
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
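Since this section is left blank, here is a hedged sketch based only on the repo metadata (a PEFT adapter for Qwen/Qwen3-8B-Base):
```python
# Hedged sketch inferred from the repo metadata (PEFT adapter on
# Qwen/Qwen3-8B-Base); the card itself does not document usage.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-8B-Base"
adapter_id = "snowman477342/pass-finetuned-qwen3-rec"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
```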
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
barandinho/TDM-8b-v0.1-Q8_0-GGUF
|
barandinho
| 2025-06-19T02:24:06Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"trl",
"grpo",
"llama-cpp",
"gguf-my-repo",
"base_model:barandinho/TDM-8b-v0.1",
"base_model:quantized:barandinho/TDM-8b-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T02:51:13Z |
---
library_name: transformers
tags:
- trl
- grpo
- llama-cpp
- gguf-my-repo
base_model: barandinho/TDM-8b-v0.1
---
# barandinho/TDM-8b-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`barandinho/TDM-8b-v0.1`](https://huggingface.co/barandinho/TDM-8b-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/barandinho/TDM-8b-v0.1) for more details on the model.
**IMPORTANT NOTE:**
Make sure to use this model with **temperature=0.6** and **top_p=0.95** for better performance.\
You can also try **top_k=20** and **min_p=0.01**.
The recommended way to use the model is through local LLM clients such as LM Studio or Ollama.
Apply the following settings before using the model.
Set this as the system prompt (in English: "You are a helpful AI model named TÜDÜM (TÜrkçe Düşünen Üretken Model / Turkish-Thinking Generative Model). Answer in Turkish and complete your answer."):
```markdown
Sen TÜDÜM (TÜrkçe Düşünen Üretken Model) isimli yardımsever bir yapay zeka modelisin.
Türkçe cevap ver ve cevabını tamamla.
```
To enable multi-turn conversations, paste this Jinja template as the chat template:
```jinja2
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='', is_first_sp=true, is_last_user=false) %}{%- for message in messages %}{%- if message['role'] == 'system' %}{%- if ns.is_first_sp %}{% set ns.system_prompt = ns.system_prompt + message['content'] %}{% set ns.is_first_sp = false %}{%- else %}{% set ns.system_prompt = ns.system_prompt + '
' + message['content'] %}{%- endif %}{%- endif %}{%- endfor %}{{ bos_token }}{{ ns.system_prompt }}{%- for message in messages %}{% set content = message['content'] %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{%- set ns.is_first = false -%}{%- set ns.is_last_user = true -%}{{'<|User|>' + content + '<|Assistant|>'}}{%- endif %}{%- if message['role'] == 'assistant' %}{%- set content = (message.content.split('</think>')|last).lstrip() %}{%- endif %}{%- if message['role'] == 'assistant' and message['tool_calls'] is defined and message['tool_calls'] is not none %}{%- set ns.is_last_user = false -%}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{%- endif %}{%- set ns.is_first = false %}{%- set ns.is_tool = false -%}{%- set ns.is_output_first = true %}{%- for tool in message['tool_calls'] %}{%- if not ns.is_first %}{%- if content is none %}{{'<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- else %}{{content + '<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- set ns.is_first = true -%}{%- else %}{{'
' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- endfor %}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- if message['role'] == 'assistant' and (message['tool_calls'] is not defined or message['tool_calls'] is none)%}{%- set ns.is_last_user = false -%}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + content + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{{content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_last_user = false -%}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + content + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'
<|tool▁output▁begin|>' + content + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_last_user and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
```
## Use with llama.cpp
Below is an auto-generated, generic way of using the model with llama.cpp.
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
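For instance, the sampling settings recommended in the note above can be passed as CLI flags (a hedged sketch; verify flag spellings against your llama.cpp build):
```bash
# Hedged example: applying the recommended sampling settings with llama-cli.
llama-cli --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf \
  --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.01 \
  -p "The meaning to life and the universe is"
```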
### Server:
```bash
llama-server --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo barandinho/TDM-8b-v0.1-Q8_0-GGUF --hf-file tdm-8b-v0.1-q8_0.gguf -c 2048
```
|
yalhessi/lemexp-task1-v2-template_small-Llama-3.2-1B-ddp-8lr-v2
|
yalhessi
| 2025-06-19T02:23:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-06-19T02:23:03Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v2-template_small-Llama-3.2-1B-ddp-8lr-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lemexp-task1-v2-template_small-Llama-3.2-1B-ddp-8lr-v2
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.444 | 0.2001 | 720 | 0.3647 |
| 0.3484 | 0.4001 | 1440 | 0.3466 |
| 0.3078 | 0.6002 | 2160 | 0.3090 |
| 0.2984 | 0.8002 | 2880 | 0.2984 |
| 0.2837 | 1.0003 | 3600 | 0.2891 |
| 0.2731 | 1.2003 | 4320 | 0.2821 |
| 0.2637 | 1.4004 | 5040 | 0.2773 |
| 0.2617 | 1.6004 | 5760 | 0.2794 |
| 0.2607 | 1.8005 | 6480 | 0.2676 |
| 0.2524 | 2.0006 | 7200 | 0.2582 |
| 0.245 | 2.2006 | 7920 | 0.2528 |
| 0.2408 | 2.4007 | 8640 | 0.2478 |
| 0.2357 | 2.6007 | 9360 | 0.2441 |
| 0.2347 | 2.8008 | 10080 | 0.2395 |
| 0.2354 | 3.0008 | 10800 | 0.2461 |
| 0.2234 | 3.2009 | 11520 | 0.2394 |
| 0.2264 | 3.4009 | 12240 | 0.2314 |
| 0.22 | 3.6010 | 12960 | 0.2321 |
| 0.2183 | 3.8011 | 13680 | 0.2281 |
| 0.2156 | 4.0011 | 14400 | 0.2244 |
| 0.208 | 4.2012 | 15120 | 0.2202 |
| 0.2071 | 4.4012 | 15840 | 0.2187 |
| 0.2027 | 4.6013 | 16560 | 0.2216 |
| 0.2028 | 4.8013 | 17280 | 0.2144 |
| 0.1993 | 5.0014 | 18000 | 0.2138 |
| 0.1919 | 5.2014 | 18720 | 0.2077 |
| 0.1902 | 5.4015 | 19440 | 0.2104 |
| 0.1892 | 5.6016 | 20160 | 0.2115 |
| 0.19 | 5.8016 | 20880 | 0.2020 |
| 0.1847 | 6.0017 | 21600 | 0.2015 |
| 0.1764 | 6.2017 | 22320 | 0.1956 |
| 0.1788 | 6.4018 | 23040 | 0.1907 |
| 0.1742 | 6.6018 | 23760 | 0.1924 |
| 0.1744 | 6.8019 | 24480 | 0.1939 |
| 0.1718 | 7.0019 | 25200 | 0.1847 |
| 0.1632 | 7.2020 | 25920 | 0.1856 |
| 0.1628 | 7.4021 | 26640 | 0.1880 |
| 0.1611 | 7.6021 | 27360 | 0.1837 |
| 0.1572 | 7.8022 | 28080 | 0.1780 |
| 0.1615 | 8.0022 | 28800 | 0.1776 |
| 0.1491 | 8.2023 | 29520 | 0.1739 |
| 0.1453 | 8.4023 | 30240 | 0.1726 |
| 0.1469 | 8.6024 | 30960 | 0.1706 |
| 0.1429 | 8.8024 | 31680 | 0.1701 |
| 0.1453 | 9.0025 | 32400 | 0.1679 |
| 0.1339 | 9.2026 | 33120 | 0.1658 |
| 0.1312 | 9.4026 | 33840 | 0.1626 |
| 0.1325 | 9.6027 | 34560 | 0.1623 |
| 0.1318 | 9.8027 | 35280 | 0.1580 |
| 0.126 | 10.0028 | 36000 | 0.1605 |
| 0.1177 | 10.2028 | 36720 | 0.1553 |
| 0.1168 | 10.4029 | 37440 | 0.1555 |
| 0.1153 | 10.6029 | 38160 | 0.1539 |
| 0.1154 | 10.8030 | 38880 | 0.1522 |
| 0.1158 | 11.0031 | 39600 | 0.1520 |
| 0.1044 | 11.2031 | 40320 | 0.1518 |
| 0.1036 | 11.4032 | 41040 | 0.1509 |
| 0.1016 | 11.6032 | 41760 | 0.1504 |
| 0.1004 | 11.8033 | 42480 | 0.1486 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
FormlessAI/7d53b76d-29f6-45b5-ab72-7ac04e2ee669
|
FormlessAI
| 2025-06-19T02:09:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:finetune:unsloth/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T23:23:25Z |
---
base_model: unsloth/Qwen2.5-Math-1.5B
library_name: transformers
model_name: 7d53b76d-29f6-45b5-ab72-7ac04e2ee669
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for 7d53b76d-29f6-45b5-ab72-7ac04e2ee669
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/7d53b76d-29f6-45b5-ab72-7ac04e2ee669", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/pylz4h49)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
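For orientation, a minimal GRPO setup with TRL looks like the following hedged sketch (a toy reward function and public dataset, not the actual training script for this model):
```python
# Hedged illustration of GRPO training with TRL, mirroring the TRL
# quickstart; NOT the script used for this model. The dataset and the
# length-based reward below are toy placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-Math-1.5B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=dataset,
)
trainer.train()
```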
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb1-seed28-2025-06-19
|
morturr
| 2025-06-19T02:03:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T02:02:53Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb1-seed28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb1-seed28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
dicksonhk/Nanonets-OCR-s-mlx-4Bit
|
dicksonhk
| 2025-06-19T01:59:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"OCR",
"pdf2markdown",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:nanonets/Nanonets-OCR-s",
"base_model:finetune:nanonets/Nanonets-OCR-s",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-19T01:59:15Z |
---
language:
- en
base_model: nanonets/Nanonets-OCR-s
pipeline_tag: image-text-to-text
tags:
- OCR
- pdf2markdown
- mlx
- mlx-my-repo
library_name: transformers
---
# dicksonhk/Nanonets-OCR-s-mlx-4Bit
The model [dicksonhk/Nanonets-OCR-s-mlx-4Bit](https://huggingface.co/dicksonhk/Nanonets-OCR-s-mlx-4Bit) was converted to MLX format from [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) using mlx-vlm version **0.1.15**. Install mlx-vlm, then run generation:
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model dicksonhk/Nanonets-OCR-s-mlx-4Bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
DS4H-ICTU/linguo_mt_bum_en
|
DS4H-ICTU
| 2025-06-19T01:54:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ROMANCE",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ROMANCE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-06-19T01:54:31Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ROMANCE
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: linguo_mt_bum_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# linguo_mt_bum_en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6657
- Bleu: 13.0370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.7826 | 1.0 | 1432 | 0.7974 | 5.2519 |
| 0.7267 | 2.0 | 2864 | 0.6952 | 12.8501 |
| 0.6424 | 3.0 | 4296 | 0.6657 | 13.0370 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
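The card does not include usage instructions. As a hedged sketch, assuming a standard Marian translation checkpoint (per the repo metadata), inference might look like:
```python
# Hedged sketch (not in the original card), assuming a standard Marian
# translation checkpoint as the metadata indicates.
from transformers import pipeline

translator = pipeline("translation", model="DS4H-ICTU/linguo_mt_bum_en")
source_text = "..."  # placeholder: source-language sentence
print(translator(source_text)[0]["translation_text"])
```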
|
hanslab37/dqn-SpaceInvadersNoFrameskip-v4
|
hanslab37
| 2025-06-19T01:52:35Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-10-28T08:26:03Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 382.50 +/- 155.33
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hanslab37 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hanslab37 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hanslab37
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
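Outside the RL Zoo CLI, the checkpoint can presumably also be loaded directly with stable-baselines3 (a hedged sketch; the filename below is an assumption based on RL Zoo naming conventions):
```python
# Hedged sketch: loading the checkpoint directly with stable-baselines3.
# The filename is an assumption based on RL Zoo naming conventions.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="hanslab37/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```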
|
buttercoconut/Qwen2.5-ko-alpaca-0.5B
|
buttercoconut
| 2025-06-19T01:45:04Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ko",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-19T01:24:25Z |
---
license: apache-2.0
language:
- ko
base_model:
- Qwen/Qwen2.5-0.5B
pipeline_tag: text-generation
---
|
TheDivinityTech/eartha-replicate
|
TheDivinityTech
| 2025-06-19T01:42:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T01:31:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Eartha Replicate
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/TheDivinityTech/eartha-replicate/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TheDivinityTech/eartha-replicate', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/TheDivinityTech/eartha-replicate/discussions) to add images that show off what you’ve made with this LoRA.
|
dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit
|
dicksonhk
| 2025-06-19T01:41:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-19T01:41:33Z |
---
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
- mlx-my-repo
library_name: transformers
base_model: Qwen/Qwen2.5-VL-3B-Instruct
---
# dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit
The model [dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit](https://huggingface.co/dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit) was converted to MLX format from [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) using mlx-vlm version **0.1.15**. Install mlx-vlm, then run generation:
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
DS4H-ICTU/linguo_mt_fub_en
|
DS4H-ICTU
| 2025-06-19T01:40:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ROMANCE",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ROMANCE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-06-19T01:40:07Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ROMANCE
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: linguo_mt_fub_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# linguo_mt_fub_en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6569
- Bleu: 11.2552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8906 | 1.0 | 1534 | 0.7769 | 7.7862 |
| 0.7049 | 2.0 | 3068 | 0.6852 | 9.9392 |
| 0.6793 | 3.0 | 4602 | 0.6569 | 11.2552 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
samtse123/staff-manual-lora
|
samtse123
| 2025-06-19T01:30:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T09:26:42Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** samtse123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nnilayy/dreamer-arousal-binary-classification-Kfold-4
|
nnilayy
| 2025-06-19T01:29:37Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T01:29:35Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
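Since the card provides no code, here is a hedged sketch of the typical PyTorchModelHubMixin loading pattern; the class below is hypothetical, as the real architecture for this repo is undocumented:
```python
# Hedged sketch of the PyTorchModelHubMixin pattern. "MyClassifier" is a
# hypothetical stand-in; the real class for this repo is not documented.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.LazyLinear(hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# With the real class definition in place, weights load via:
# model = MyClassifier.from_pretrained(
#     "nnilayy/dreamer-arousal-binary-classification-Kfold-4"
# )
```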
|
chiruan/qwen2.5-7b-coder_V2-130steps
|
chiruan
| 2025-06-19T01:26:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:58:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
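A hedged sketch, assuming standard transformers text-generation usage (the repo metadata tags this as a qwen2 text-generation model; the section is otherwise blank):
```python
# Hedged sketch (the card leaves this section blank), assuming a standard
# transformers causal LM checkpoint per the repo metadata.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="chiruan/qwen2.5-7b-coder_V2-130steps",
    device_map="auto",
)
prompt = "Write a Python function that reverses a string."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```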
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Victoriatr07/final_model6_LoRA
|
Victoriatr07
| 2025-06-19T01:24:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T01:23:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChavyvAkvar/AlphaTrade-0.6B-SFT-v0.1
|
ChavyvAkvar
| 2025-06-19T01:23:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B",
"base_model:finetune:unsloth/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T01:22:35Z |
---
base_model: unsloth/Qwen3-0.6B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ChavyvAkvar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luyotw/openfun-ivod-whisper-small-HuangKuoChang-11-185
|
luyotw
| 2025-06-19T01:22:12Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-19T00:59:11Z |
# Fine-tune Information
- Base model: `openai/whisper-small`
- Number of audio clips used: 35047
- Total audio duration: 21.07 hours
- Average audio length: 2.16 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 05:20:34
- Model size: 0.90 GB
---
# Model Card
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step9
|
rosieyzh
| 2025-06-19T01:19:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T01:17:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
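A hedged sketch, assuming the checkpoint loads through standard transformers OLMo support (this section is otherwise blank):
```python
# Hedged sketch (not from the card), assuming standard transformers
# support for OLMo checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```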
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step821
|
rosieyzh
| 2025-06-19T01:17:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T01:15:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step73
|
rosieyzh
| 2025-06-19T01:15:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T01:13:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
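No official snippet is provided; below is a minimal hedged sketch using the standard 🤗 transformers causal-LM API. The repo id is taken from this row's `modelId`, and the prompt is illustrative.
```python
# A minimal sketch, assuming the standard transformers causal-LM API;
# the repo id comes from this row's modelId, the prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step73"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```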
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JishnuKR/H2Qwen-16bit
|
JishnuKR
| 2025-06-19T01:11:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T01:03:24Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** JishnuKR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
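The card ships no usage snippet; below is a minimal hedged sketch with the transformers text-generation pipeline. The repo id comes from this row's `modelId`, and the prompt is illustrative.
```python
# A minimal hedged sketch using the transformers text-generation pipeline;
# the repo id comes from this row's modelId, the prompt is illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="JishnuKR/H2Qwen-16bit", device_map="auto")
print(generator("Explain hydrogen fuel cells in two sentences.", max_new_tokens=64)[0]["generated_text"])
```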
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed18-2025-06-19
|
morturr
| 2025-06-19T01:10:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T01:10:37Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed18-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed18-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
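No usage example is given; here is a minimal hedged sketch for loading this PEFT (LoRA) adapter on top of its base model. Both ids are taken from this card; access to the gated Llama-2 weights is assumed.
```python
# A minimal hedged sketch: load the base model, then attach this adapter.
# Assumes access to the gated meta-llama/Llama-2-7b-hf weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(
    base, "morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed18-2025-06-19"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```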
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step411
|
rosieyzh
| 2025-06-19T01:09:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T01:07:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step291
|
rosieyzh
| 2025-06-19T01:05:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T01:03:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trhgquan/phobert-finetune-freezed-seg-1337
|
trhgquan
| 2025-06-19T01:00:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"vi",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T08:59:10Z |
---
license: gpl-3.0
language:
- vi
metrics:
- accuracy
- f1
base_model:
- vinai/phobert-base
pipeline_tag: text-classification
library_name: transformers
---
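The card body is empty; below is a minimal hedged inference sketch with the transformers text-classification pipeline. The repo id comes from this row; PhoBERT normally expects word-segmented Vietnamese input (the `seg` suffix suggests this variant does too), and the label names are undocumented here.
```python
# A minimal hedged sketch; input should normally be word-segmented Vietnamese
# (e.g., via VnCoreNLP), and the label set is undocumented in this card.
from transformers import pipeline

clf = pipeline("text-classification", model="trhgquan/phobert-finetune-freezed-seg-1337")
print(clf("Bộ phim này rất hay ."))
```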
|
trhgquan/phobert-finetune-from-scratch-seg-69
|
trhgquan
| 2025-06-19T00:58:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"vi",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T08:59:58Z |
---
license: gpl-3.0
language:
- vi
metrics:
- accuracy
- f1
base_model:
- vinai/phobert-base
pipeline_tag: text-classification
library_name: transformers
---
|
trhgquan/phobert-finetune-from-scratch-seg-6969
|
trhgquan
| 2025-06-19T00:57:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"vi",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T09:00:13Z |
---
license: gpl-3.0
language:
- vi
metrics:
- accuracy
- f1
base_model:
- vinai/phobert-base
pipeline_tag: text-classification
library_name: transformers
---
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step146
|
rosieyzh
| 2025-06-19T00:57:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:55:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trhgquan/phobert-finetune-from-scratch-seg-1337
|
trhgquan
| 2025-06-19T00:55:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"vi",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T09:00:23Z |
---
license: gpl-3.0
language:
- vi
metrics:
- accuracy
- f1
base_model:
- vinai/phobert-base
pipeline_tag: text-classification
library_name: transformers
---
|
bluuluu/Qwen2-VL-2B-Instruct-SFT
|
bluuluu
| 2025-06-19T00:53:51Z | 43 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"generated_from_trainer",
"R1-V",
"trl",
"sft",
"conversational",
"dataset:MMInstruction/Clevr_CoGenT_TrainA_R1",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-04-09T08:41:06Z |
---
base_model: Qwen/Qwen2-VL-2B-Instruct
datasets: MMInstruction/Clevr_CoGenT_TrainA_R1
library_name: transformers
model_name: Qwen2-VL-2B-Instruct-SFT
tags:
- generated_from_trainer
- R1-V
- trl
- sft
licence: license
---
# Model Card for Qwen2-VL-2B-Instruct-SFT
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) on the [MMInstruction/Clevr_CoGenT_TrainA_R1](https://huggingface.co/datasets/MMInstruction/Clevr_CoGenT_TrainA_R1) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bluuluu/Qwen2-VL-2B-Instruct-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
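As a generic illustration of the TRL SFT setup, here is a minimal hedged sketch; the real run fine-tuned a vision-language model on CLEVR-style data and would additionally need an image-aware collator, which is not shown.
```python
# A generic hedged SFT sketch with TRL; the actual training used a
# vision-language dataset and needs an image-aware collator (not shown).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # illustrative text dataset
trainer = SFTTrainer(
    model="Qwen/Qwen2-VL-2B-Instruct",  # base model id from this card
    args=SFTConfig(output_dir="qwen2-vl-sft"),
    train_dataset=dataset,
)
trainer.train()
```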
### Framework versions
- TRL: 0.14.0
- Transformers: 4.52.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tensorblock/Llama-3-8B-dutch-GGUF
|
tensorblock
| 2025-06-19T00:52:36Z | 61 | 0 | null |
[
"gguf",
"ORPO",
"llama 3 8B",
"conversational",
"TensorBlock",
"GGUF",
"text-generation",
"nl",
"dataset:BramVanroy/ultra_feedback_dutch",
"base_model:ReBatch/Llama-3-8B-dutch",
"base_model:quantized:ReBatch/Llama-3-8B-dutch",
"license:llama3",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-12T20:31:53Z |
---
license: llama3
base_model: ReBatch/Llama-3-8B-dutch
tags:
- ORPO
- llama 3 8B
- conversational
- TensorBlock
- GGUF
datasets:
- BramVanroy/ultra_feedback_dutch
language:
- nl
pipeline_tag: text-generation
model-index:
- name: ReBatch/Llama-3-8B-dutch
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## ReBatch/Llama-3-8B-dutch - GGUF
This repo contains GGUF format model files for [ReBatch/Llama-3-8B-dutch](https://huggingface.co/ReBatch/Llama-3-8B-dutch).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-dutch-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-8B-dutch-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [Llama-3-8B-dutch-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [Llama-3-8B-dutch-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [Llama-3-8B-dutch-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-8B-dutch-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [Llama-3-8B-dutch-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [Llama-3-8B-dutch-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-8B-dutch-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [Llama-3-8B-dutch-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [Llama-3-8B-dutch-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [Llama-3-8B-dutch-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-3-8B-dutch-GGUF/blob/main/Llama-3-8B-dutch-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Llama-3-8B-dutch-GGUF --include "Llama-3-8B-dutch-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Llama-3-8B-dutch-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
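Once downloaded, the file can be run directly with llama.cpp; a minimal hedged sketch (the prompt and context size are illustrative):
```shell
llama-cli -m MY_LOCAL_DIR/Llama-3-8B-dutch-Q4_K_M.gguf -c 2048 -p "Schrijf een korte alinea over fietsen in Nederland."
```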
|
diabolocom/openai-whisper-medium-LoRA
|
diabolocom
| 2025-06-19T00:46:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T00:46:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
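No snippet is provided; the sketch below treats this repository as a LoRA adapter on `openai/whisper-medium`, which is an assumption based only on the repository name.
```python
# A hedged sketch; treating this repo as a LoRA adapter on openai/whisper-medium
# is an assumption based only on the repository name.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "diabolocom/openai-whisper-medium-LoRA")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```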
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF
|
Lahhhalah
| 2025-06-19T00:45:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"captioning",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:fancyfeast/llama-joycaption-beta-one-hf-llava",
"base_model:quantized:fancyfeast/llama-joycaption-beta-one-hf-llava",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-06-19T00:45:08Z |
---
base_model: fancyfeast/llama-joycaption-beta-one-hf-llava
tags:
- captioning
- llama-cpp
- gguf-my-repo
pipeline_tag: image-text-to-text
library_name: transformers
---
# Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF
This model was converted to GGUF format from [`fancyfeast/llama-joycaption-beta-one-hf-llava`](https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lahhhalah/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -c 2048
```
|
anemll/anemll-Qwen-Qwen3-0.6B-FP16-ctx512_0.3.3
|
anemll
| 2025-06-19T00:41:21Z | 0 | 0 | null |
[
"llama",
"coreml",
"ANE",
"LLaMA",
"Qwen",
"DeepSeek",
"Apple",
"Apple Neural Engine",
"DeepHermes",
"license:mit",
"region:us"
] | null | 2025-06-19T00:35:40Z |
---
license: mit
tags:
- coreml
- ANE
- LLaMA
- Qwen
- DeepSeek
- Apple
- Apple Neural Engine
- DeepHermes
---
# ANEMLL
**ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE).
The goal is to provide a fully open-source pipeline from model conversion to inference for common LLM architectures running on ANE.
This enables seamless integration and on-device inference for low-power applications on edge devices, ensuring maximum privacy and security.
This is critical for autonomous applications, where models run directly on the device without requiring an internet connection.
For more information, visit the [ANEMLL GitHub repository](https://github.com/anemll/anemll).
---
## License
ANEMLL is licensed under the [MIT License](https://opensource.org/license/mit).
The original model may require a separate license depending on the architecture:
- LLaMA models: Based on Meta's LLaMA and may require Meta's license
- Qwen models: Based on Alibaba's Qwen and may require Alibaba's license
- Other models: Check respective original model licenses
This model is converted for CoreML using ANEMLL's open-source conversion pipeline. It supports multiple LLM architectures including LLaMA, Qwen, and DeepSeek variants.
---
## Requirements
- **macOS Sequoia** with Apple Neural Engine and 8GB RAM or more
- **CoreML Tools** and **HuggingFace Transformers** libraries
- **Python 3.9**
`chat.py` provides a sample inference script.
`chat_full.py` provides a sample inference script with history and conversation management.
**Installation**
1. Download the model from Hugging Face:
```bash
# Install required tools
pip install huggingface_hub
# Install Git LFS (Large File Support)
# macOS with Homebrew:
brew install git-lfs
# Or Ubuntu/Debian:
# sudo apt-get install git-lfs
# Initialize Git LFS
git lfs install
# Clone the repository with model files
git clone https://huggingface.co/anemll/anemll-Qwen-Qwen3-0.6B-ctx512_0.3.3
```
2. Extract model files:
```bash
# Navigate to cloned directory
cd anemll-Qwen-Qwen3-0.6B-ctx512_0.3.3
# Pull LFS files (model weights)
git lfs pull
# Extract CoreML model files
find . -type f -name "*.zip" -exec unzip {} \;
```
3. Install dependencies:
```bash
pip install coremltools transformers
```
**Coremltools:**
See the coremltools installation guide at https://coremltools.readme.io/v4.0/docs/installation
**How to Run**
1. Basic chat interface:
```bash
python chat.py --meta ./meta.yaml
```
2. Full conversation mode with history:
```bash
python chat_full.py --meta ./meta.yaml
```
> Note: The first time the model loads, macOS will take some time to place it on the device.
> Subsequent loads will be instantaneous.
> Use Ctrl-D to exit, Ctrl-C to interrupt inference.
**More Info**
Please check the following links for updates:
* [GitHub](https://github.com/anemll)
* [Hugging Face Models](https://huggingface.co/anemll)
* [Twitter/X](https://x.com/anemll)
* [Website](https://anemll.com)
[email protected]
# anemll-Qwen-Qwen3-0.6B-ctx512_0.3.3
This is a CoreML model converted using ANEMLL for Apple Neural Engine inference.
## Available Distributions
### Standard Distribution
- Contains zipped MLMODELC files
- Suitable for macOS and development
### iOS Distribution
- Contains unzipped MLMODELC files
- Ready for iOS deployment
- Includes offline tokenizer support
## Model Information
- Context Length: 512
- Batch Size: 64
- Number of Chunks: 1
## Quick Start
### Test in iOS/macOS App
Try our sample Chat-Bot app on TestFlight:
1. Install TestFlight from App Store
2. Join beta test: [TestFlight Link](https://testflight.apple.com/join/jrQq1D1C)
3. App includes a small demo model pre-installed
4. You can add custom models via HuggingFace URLs
> [!Note]
> - The TestFlight app works on both iOS and macOS
> - Demonstrates proper model integration and provides a reference implementation
> - iOS requires unzipped MLMODELC files and config.json for offline tokenizer
> - macOS supports both zipped and unzipped model formats
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_episode4
|
rosieyzh
| 2025-06-19T00:40:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:38:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
igorktech/skommarkhos-lucie7binstructv1-1-sft-arpo-a21
|
igorktech
| 2025-06-19T00:39:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"cpo",
"arxiv:2401.08417",
"base_model:OpenLLM-France/Lucie-7B-Instruct-v1.1",
"base_model:finetune:OpenLLM-France/Lucie-7B-Instruct-v1.1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T22:57:25Z |
---
base_model: OpenLLM-France/Lucie-7B-Instruct-v1.1
library_name: transformers
model_name: skommarkhos-lucie7binstructv1-1-sft-arpo-a21
tags:
- generated_from_trainer
- trl
- cpo
licence: license
---
# Model Card for skommarkhos-lucie7binstructv1-1-sft-arpo-a21
This model is a fine-tuned version of [OpenLLM-France/Lucie-7B-Instruct-v1.1](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct-v1.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="igorktech/skommarkhos-lucie7binstructv1-1-sft-arpo-a21", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/igorktech01/joker-pun-translation/runs/t2ipuglv)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
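For reference, a minimal hedged sketch of a CPO run with TRL follows; the preference dataset id is illustrative, and the actual training data is not documented in this card.
```python
# A minimal hedged CPO sketch; the dataset id is illustrative, and the
# real run's data/hyperparameters are not documented in this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model_id = "OpenLLM-France/Lucie-7B-Instruct-v1.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # illustrative

trainer = CPOTrainer(
    model=model,
    args=CPOConfig(output_dir="lucie-arpo"),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```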
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_episode3
|
rosieyzh
| 2025-06-19T00:38:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:36:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
benjamintli/bigbird-text2sql-0.5B-merged
|
benjamintli
| 2025-06-19T00:38:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-05T05:39:53Z |
---
base_model: unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** benjamintli
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tensorblock/MiniCPM3-4B-GGUF
|
tensorblock
| 2025-06-19T00:38:01Z | 77 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"zh",
"en",
"base_model:openbmb/MiniCPM3-4B",
"base_model:quantized:openbmb/MiniCPM3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-29T05:26:01Z |
---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: openbmb/MiniCPM3-4B
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## openbmb/MiniCPM3-4B - GGUF
This repo contains GGUF format model files for [openbmb/MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
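For illustration, a small hypothetical helper that fills this ChatML-style template by hand; in practice `tokenizer.apply_chat_template` from `transformers` produces the same layout, so treat this only as a sketch of the format.
```python
# Hypothetical helper that fills the ChatML-style template above;
# normally tokenizer.apply_chat_template does this for you.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```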
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MiniCPM3-4B-Q2_K.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q2_K.gguf) | Q2_K | 1.577 GB | smallest, significant quality loss - not recommended for most purposes |
| [MiniCPM3-4B-Q3_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q3_K_S.gguf) | Q3_K_S | 1.827 GB | very small, high quality loss |
| [MiniCPM3-4B-Q3_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q3_K_M.gguf) | Q3_K_M | 2.022 GB | very small, high quality loss |
| [MiniCPM3-4B-Q3_K_L.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q3_K_L.gguf) | Q3_K_L | 2.194 GB | small, substantial quality loss |
| [MiniCPM3-4B-Q4_0.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q4_0.gguf) | Q4_0 | 2.343 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MiniCPM3-4B-Q4_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q4_K_S.gguf) | Q4_K_S | 2.357 GB | small, greater quality loss |
| [MiniCPM3-4B-Q4_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q4_K_M.gguf) | Q4_K_M | 2.470 GB | medium, balanced quality - recommended |
| [MiniCPM3-4B-Q5_0.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q5_0.gguf) | Q5_0 | 2.829 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MiniCPM3-4B-Q5_K_S.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q5_K_S.gguf) | Q5_K_S | 2.829 GB | large, low quality loss - recommended |
| [MiniCPM3-4B-Q5_K_M.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q5_K_M.gguf) | Q5_K_M | 2.894 GB | large, very low quality loss - recommended |
| [MiniCPM3-4B-Q6_K.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q6_K.gguf) | Q6_K | 3.345 GB | very large, extremely low quality loss |
| [MiniCPM3-4B-Q8_0.gguf](https://huggingface.co/tensorblock/MiniCPM3-4B-GGUF/blob/main/MiniCPM3-4B-Q8_0.gguf) | Q8_0 | 4.331 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MiniCPM3-4B-GGUF --include "MiniCPM3-4B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/MiniCPM3-4B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
KindBrave/Qwen3-Embedding-0.6B-MNN
|
KindBrave
| 2025-06-19T00:37:27Z | 0 | 0 | null |
[
"embedding",
"sentence-similarity",
"arxiv:2506.05176",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"region:us"
] |
sentence-similarity
| 2025-06-19T00:33:14Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B-Base
pipeline_tag: sentence-similarity
tags:
- embedding
---
# Qwen3-Embedding-0.6B-MNN
<p align="center">
    <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>
## Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks **No.1** in the MTEB multilingual leaderboard (as of June 5, 2025, score **70.58**), while the reranking model excels in various text retrieval scenarios.
**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
## Model Overview
**Qwen3-Embedding-0.6B-MNN** has the following features:
- Model Type: Text Embedding
- Supported Languages: 100+ Languages
- Number of Parameters: 0.6B
- Context Length: 32k
- Embedding Dimension: Up to 1024, supports user-defined output dimensions ranging from 32 to 1024
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/), [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
## Qwen3 Embedding Series Model list
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding (see the sketch after these notes).
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
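Since the embedding model supports MRL-style user-defined output dimensions, a common pattern is to truncate the full 1024-dimensional vector and re-normalize. A minimal sketch follows; the target dimension (256) is an arbitrary example within the supported 32 to 1024 range, not a recommendation.
```python
# Sketch of MRL-style dimension reduction: keep the leading `dim`
# components of the full embedding and restore unit norm afterwards.
import numpy as np

def truncate_embedding(vec: np.ndarray, dim: int = 256) -> np.ndarray:
    v = vec[:dim]
    return v / np.linalg.norm(v)  # re-normalize after truncation
```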
## Usage
📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
### llama.cpp
Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for more usage guide.
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp.
In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.
You can run Qwen3 Embedding with one command:
```shell
./build/bin/llama-embedding -m model.gguf -p "<your context here><|endoftext|>" --pooling last --verbose-prompt --embd-normalize 2
```
Or launch a server:
```shell
./build/bin/llama-server -m model.gguf --embedding --pooling last -ub 8192 --verbose-prompt
```
📌 **Tip**: Qwen3 Embedding models use last-token pooling and expect `<|endoftext|>` as the final token, so you need to manually append this token to the end of your input context. In addition, when running `llama-server`, you also need to manually normalize the output embeddings, as `llama-server` currently does not support the `--embd-normalize` option.
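As a rough client-side sketch of the manual steps above — appending `<|endoftext|>` and normalizing — here is one way to query a running `llama-server` and L2-normalize the result; the endpoint path and response shape are assumptions about current llama.cpp behavior and may differ across versions.
```python
# Query a running llama-server for an embedding and L2-normalize it
# client-side; the /embedding endpoint and JSON shape are assumed here.
import numpy as np
import requests

resp = requests.post(
    "http://localhost:8080/embedding",
    json={"content": "What is the capital of China?<|endoftext|>"},
)
emb = np.array(resp.json()["embedding"], dtype=np.float32)
emb = emb / np.linalg.norm(emb)  # manual L2 normalization
print(emb[:8])
```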
## Evaluation
### MTEB (Multilingual)
| Model | Size | Mean (Task) | Mean (Type) | Bitext Mining | Class. | Clust. | Inst. Retri. | Multi. Class. | Pair. Class. | Rerank | Retri. | STS |
|----------------------------------|:-------:|:-------------:|:-------------:|:--------------:|:--------:|:--------:|:--------------:|:---------------:|:--------------:|:--------:|:--------:|:------:|
| NV-Embed-v2 | 7B | 56.29 | 49.58 | 57.84 | 57.29 | 40.80 | 1.04 | 18.63 | 78.94 | 63.82 | 56.72 | 71.10|
| GritLM-7B | 7B | 60.92 | 53.74 | 70.53 | 61.83 | 49.75 | 3.45 | 22.77 | 79.94 | 63.78 | 58.31 | 73.33|
| BGE-M3 | 0.6B | 59.56 | 52.18 | 79.11 | 60.35 | 40.88 | -3.11 | 20.1 | 80.76 | 62.79 | 54.60 | 74.12|
| multilingual-e5-large-instruct | 0.6B | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81|
| gte-Qwen2-1.5B-instruct | 1.5B | 59.45 | 52.69 | 62.51 | 58.32 | 52.05 | 0.74 | 24.02 | 81.58 | 62.58 | 60.78 | 71.61|
| gte-Qwen2-7b-Instruct | 7B | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98|
| text-embedding-3-large | - | 58.93 | 51.41 | 62.17 | 60.27 | 46.89 | -2.68 | 22.03 | 79.17 | 63.89 | 59.27 | 71.68|
| Cohere-embed-multilingual-v3.0 | - | 61.12 | 53.23 | 70.50 | 62.95 | 46.89 | -1.89 | 22.74 | 79.88 | 64.07 | 59.16 | 74.80|
| Gemini Embedding | - | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | **29.16** | 83.63 | 65.58 | 67.71 | 79.40|
| **Qwen3-Embedding-0.6B** | 0.6B | 64.33 | 56.00 | 72.22 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.64 | 76.17|
| **Qwen3-Embedding-4B** | 4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | **11.56** | 26.77 | 85.05 | 65.08 | 69.60 | 80.86|
| **Qwen3-Embedding-8B** | 8B | **70.58** | **61.69** | **80.89** | **74.00** | **57.65** | 10.06 | 28.66 | **86.40** | **65.63** | **70.88** | **81.08** |
> **Note**: For compared models, the scores are retrieved from MTEB online [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) on May 24th, 2025.
### MTEB (Eng v2)
| MTEB English / Models | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retri. | STS | Summ. |
|--------------------------------|:--------:|:------------:|:------------:|:--------:|:--------:|:-------------:|:---------:|:--------:|:-------:|:-------:|
| multilingual-e5-large-instruct | 0.6B | 65.53 | 61.21 | 75.54 | 49.89 | 86.24 | 48.74 | 53.47 | 84.72 | 29.89 |
| NV-Embed-v2 | 7.8B | 69.81 | 65.00 | 87.19 | 47.66 | 88.69 | 49.61 | 62.84 | 83.82 | 35.21 |
| GritLM-7B | 7.2B | 67.07 | 63.22 | 81.25 | 50.82 | 87.29 | 49.59 | 54.95 | 83.03 | 35.65 |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.20 | 63.26 | 85.84 | 53.54 | 87.52 | 49.25 | 50.25 | 82.51 | 33.94 |
| stella_en_1.5B_v5 | 1.5B | 69.43 | 65.32 | 89.38 | 57.06 | 88.02 | 50.19 | 52.42 | 83.27 | 36.91 |
| gte-Qwen2-7B-instruct | 7.6B | 70.72 | 65.77 | 88.52 | 58.97 | 85.9 | 50.47 | 58.09 | 82.69 | 35.74 |
| gemini-embedding-exp-03-07 | - | 73.3 | 67.67 | 90.05 | 59.39 | 87.7 | 48.59 | 64.35 | 85.29 | 38.28 |
| **Qwen3-Embedding-0.6B** | 0.6B | 70.70 | 64.88 | 85.76 | 54.05 | 84.37 | 48.18 | 61.83 | 86.57 | 33.43 |
| **Qwen3-Embedding-4B** | 4B | 74.60 | 68.10 | 89.84 | 57.51 | 87.01 | 50.76 | 68.46 | 88.72 | 34.39 |
| **Qwen3-Embedding-8B** | 8B | 75.22 | 68.71 | 90.43 | 58.57 | 87.52 | 51.56 | 69.44 | 88.58 | 34.83 |
### C-MTEB (MTEB Chinese)
| C-MTEB | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retr. | STS |
|------------------|--------|------------|------------|--------|--------|-------------|---------|-------|-------|
| multilingual-e5-large-instruct | 0.6B | 58.08 | 58.24 | 69.80 | 48.23 | 64.52 | 57.45 | 63.65 | 45.81 |
| bge-multilingual-gemma2 | 9B | 67.64 | 75.31 | 59.30 | 86.67 | 68.28 | 73.73 | 55.19 | - |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.12 | 67.79 | 72.53 | 54.61 | 79.5 | 68.21 | 71.86 | 60.05 |
| gte-Qwen2-7B-instruct | 7.6B | 71.62 | 72.19 | 75.77 | 66.06 | 81.16 | 69.24 | 75.70 | 65.20 |
| ritrieve_zh_v1 | 0.3B | 72.71 | 73.85 | 76.88 | 66.5 | 85.98 | 72.86 | 76.97 | 63.92 |
| **Qwen3-Embedding-0.6B** | 0.6B | 66.33 | 67.45 | 71.40 | 68.74 | 76.42 | 62.58 | 71.03 | 54.52 |
| **Qwen3-Embedding-4B** | 4B | 72.27 | 73.51 | 75.46 | 77.89 | 83.34 | 66.05 | 77.03 | 61.26 |
| **Qwen3-Embedding-8B** | 8B | 73.84 | 75.00 | 76.97 | 80.08 | 84.23 | 66.99 | 78.21 | 63.53 |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen3embedding,
title={Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models},
author={Zhang, Yanzhao and Li, Mingxin and Long, Dingkun and Zhang, Xin and Lin, Huan and Yang, Baosong and Xie, Pengjun and Yang, An and Liu, Dayiheng and Lin, Junyang and Huang, Fei and Zhou, Jingren},
journal={arXiv preprint arXiv:2506.05176},
year={2025}
}
```
|
Nitral-Archive/Salesforce_xgen-small-9B-rebased-v0.2
|
Nitral-Archive
| 2025-06-19T00:35:01Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:Nitral-Archive/Salesforce_xgen-small-9B-rebased-v0.1",
"base_model:merge:Nitral-Archive/Salesforce_xgen-small-9B-rebased-v0.1",
"base_model:Salesforce/xgen-small-9B-instruct-r",
"base_model:merge:Salesforce/xgen-small-9B-instruct-r",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T22:11:03Z |
---
base_model:
- Nitral-AI/Salesforce_xgen-small-9B-rebased-v0.1
- Salesforce/xgen-small-9B-instruct-r
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---
# Phase 2 of the rebase, untested. bf16 model weights.
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Salesforce_xgen-small-9B-rebased-v0.1](https://huggingface.co/Nitral-AI/Salesforce_xgen-small-9B-rebased-v0.1)
* [Salesforce/xgen-small-9B-instruct-r](https://huggingface.co/Salesforce/xgen-small-9B-instruct-r)
### The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Nitral-AI/Salesforce_xgen-small-9B-rebased-v0.1
layer_range: [0, 45]
- model: Salesforce/xgen-small-9B-instruct-r
layer_range: [0, 45]
merge_method: slerp
base_model: Nitral-AI/Salesforce_xgen-small-9B-rebased-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.420
dtype: bfloat16
```
|
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_episode1
|
rosieyzh
| 2025-06-19T00:34:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:32:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Smilyai-labs/Sam-reason-A4
|
Smilyai-labs
| 2025-06-19T00:34:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:31:11Z |
---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B
tags:
- chat
library_name: transformers
---
# Qwen2.5-3B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 3B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step9
|
rosieyzh
| 2025-06-19T00:32:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:30:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step73
|
rosieyzh
| 2025-06-19T00:28:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:26:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step581
|
rosieyzh
| 2025-06-19T00:26:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:24:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shishirahm3d/banglamind-8b-instruct-bnb-4bit-q4km-GGUF
|
shishirahm3d
| 2025-06-19T00:25:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T00:21:21Z |
---
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shishirahm3d
- **License:** apache-2.0
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step411
|
rosieyzh
| 2025-06-19T00:22:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:20:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hardlyworking/RepRemove4B
|
hardlyworking
| 2025-06-19T00:22:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1",
"base_model:hardlyworking/HoldMy4BKTO",
"base_model:finetune:hardlyworking/HoldMy4BKTO",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T20:50:46Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: hardlyworking/HoldMy4BKTO
tags:
- axolotl
- generated_from_trainer
datasets:
- PocketDoc/Dans-Prosemaxx-RepRemover-1
model-index:
- name: RepRemove4B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.11.0.dev0`
```yaml
base_model: hardlyworking/HoldMy4BKTO
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: PocketDoc/Dans-Prosemaxx-RepRemover-1
type: dan-chat-advanced
val_set_size: 0
output_dir: ./outputs/out
dataset_prepared_path: last_run_prepared
shuffle_merged_datasets: true
hub_model_id: hardlyworking/RepRemove4B
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
wandb_project: Xgen4Brep
wandb_entity:
wandb_watch:
wandb_name: Xgen4Brep
wandb_log_model:
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: offload
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
deepspeed:
warmup_ratio: 0.05
saves_per_epoch: 1
debug:
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
pad_token:
```
</details><br>
# RepRemove4B
This model is a fine-tuned version of [hardlyworking/HoldMy4BKTO](https://huggingface.co/hardlyworking/HoldMy4BKTO) on the PocketDoc/Dans-Prosemaxx-RepRemover-1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
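Pending more detail from the author, a minimal usage sketch with the 🤗 Transformers chat pipeline — the prompt and sampling settings below are illustrative assumptions, not tested recommendations:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; device_map="auto" spreads weights across available devices.
pipe = pipeline(
    "text-generation",
    model="hardlyworking/RepRemove4B",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a rain-soaked city."}]

# The pipeline applies the model's chat template before generating.
out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.8)
print(out[0]["generated_text"][-1]["content"])
```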
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 22
- training_steps: 456
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step36
|
rosieyzh
| 2025-06-19T00:20:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:18:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
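No starter code has been provided; a generic sketch, assuming this checkpoint loads as a standard `transformers` causal LM (recent transformers versions support the `olmo` architecture natively):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step36"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain next-token completion; prompt and length are illustrative.
inputs = tok("The study of language models", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```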
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF
|
tensorblock
| 2025-06-19T00:18:49Z | 86 | 1 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Nopm/Opus_WritingStruct",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts",
"dataset:allura-org/Celeste-1.x-data-mixture",
"dataset:cognitivecomputations/dolphin-2.9.3",
"base_model:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2",
"base_model:quantized:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-21T06:00:48Z |
---
library_name: transformers
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- cognitivecomputations/dolphin-2.9.3
base_model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
tags:
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: EVA-Qwen2.5-32B-SFFT-v0.1
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 - GGUF
This repo contains GGUF format model files for [EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [EVA-Qwen2.5-32B-v0.2-Q2_K.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q2_K.gguf) | Q2_K | 12.313 GB | smallest, significant quality loss - not recommended for most purposes |
| [EVA-Qwen2.5-32B-v0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q3_K_S.gguf) | Q3_K_S | 14.392 GB | very small, high quality loss |
| [EVA-Qwen2.5-32B-v0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q3_K_M.gguf) | Q3_K_M | 15.935 GB | very small, high quality loss |
| [EVA-Qwen2.5-32B-v0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q3_K_L.gguf) | Q3_K_L | 17.247 GB | small, substantial quality loss |
| [EVA-Qwen2.5-32B-v0.2-Q4_0.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q4_0.gguf) | Q4_0 | 18.640 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [EVA-Qwen2.5-32B-v0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q4_K_S.gguf) | Q4_K_S | 18.784 GB | small, greater quality loss |
| [EVA-Qwen2.5-32B-v0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q4_K_M.gguf) | Q4_K_M | 19.851 GB | medium, balanced quality - recommended |
| [EVA-Qwen2.5-32B-v0.2-Q5_0.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q5_0.gguf) | Q5_0 | 22.638 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [EVA-Qwen2.5-32B-v0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q5_K_S.gguf) | Q5_K_S | 22.638 GB | large, low quality loss - recommended |
| [EVA-Qwen2.5-32B-v0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q5_K_M.gguf) | Q5_K_M | 23.262 GB | large, very low quality loss - recommended |
| [EVA-Qwen2.5-32B-v0.2-Q6_K.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q6_K.gguf) | Q6_K | 26.886 GB | very large, extremely low quality loss |
| [EVA-Qwen2.5-32B-v0.2-Q8_0.gguf](https://huggingface.co/tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF/blob/main/EVA-Qwen2.5-32B-v0.2-Q8_0.gguf) | Q8_0 | 34.821 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF --include "EVA-Qwen2.5-32B-v0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/EVA-Qwen2.5-32B-v0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
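Once downloaded, a file can also be loaded from Python via the third-party `llama-cpp-python` bindings — a minimal sketch, not part of this repo; the chosen quant, context size, and GPU offload below are illustrative assumptions:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Point model_path at the quant downloaded above.
llm = Llama(
    model_path="MY_LOCAL_DIR/EVA-Qwen2.5-32B-v0.2-Q4_K_M.gguf",
    n_ctx=4096,       # working context window; raise at higher memory cost
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# create_chat_completion applies the ChatML prompt template shown above.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```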
|
kevin510/friday
|
kevin510
| 2025-06-19T00:18:34Z | 125 | 0 |
transformers
|
[
"transformers",
"safetensors",
"friday",
"text-generation",
"vision-language",
"multimodal",
"custom_code",
"bf16",
"conversational",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:liuhaotian/LLaVA-Pretrain",
"base_model:kevin510/fast-vit-hd",
"base_model:finetune:kevin510/fast-vit-hd",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-04-28T22:23:09Z |
---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
base_model:
- microsoft/Phi-4-mini-reasoning
- kevin510/fast-vit-hd
library_name: transformers
tags:
- vision-language
- multimodal
- friday
- custom_code
- bf16
---
# Friday-VLM
Friday-VLM is a multimodal (image + text) LLM fine-tuned on image and text instruction data.
The architecture and config live in this repo, so callers must load the model with
`trust_remote_code=True`.
---
# Model variants
| Repo ID | Precision | File format | Typical VRAM* | Size on disk |
|---------|-----------|-------------|---------------|--------------|
| `kevin510/friday` | **bf16** (full) | `safetensors` | 100 % | 100 % |
| `kevin510/friday-fp4` | **fp4** (bitsandbytes int4) | `safetensors` | ≈ 30 % | ≈ 25 % |
---
# Dependencies
```bash
conda create --name friday python=3.12 -y
conda activate friday
pip install transformers torch torchvision deepspeed accelerate pillow einops timm
```
# Quick start
```python
import torch
from PIL import Image
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.utils import logging
tok = AutoTokenizer.from_pretrained("kevin510/friday", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"kevin510/friday",
trust_remote_code=True,
device_map="auto"
)
model.eval()
prompt = "Describe this image."
user_prompt = f"<|user|><image>\n{prompt}\n<|assistant|>"
inputs = tok(user_prompt, return_tensors="pt").to(model.device)
image = Image.open("my_image.jpg").convert("RGB")
with torch.no_grad():
out = model.generate(
**inputs,
max_new_tokens=256,
do_sample=False,
images=[image]
)
print(tok.decode(out[0], skip_special_tokens=False))
```
# Architecture at a glance
```
FastViT-HD ─▶ 3072-d patch embeddings ─▶ S2 6144-d patch embeddings ─▶ 2-layer MLP vision-adapter (6144 → 3072)

(vision tokens, 3072 d) ─┐
                         ├─► Φ-4-mini-reasoning (2.7 B params, hidden = 3072)
<text tokens, 3072 d> ───┘       │
                                 │ (standard self-attention only;
                                 │  language tower is frozen at finetune)
```
# Limitations & Responsible AI
Friday-VLM may hallucinate objects, invent facts, or reproduce societal biases.
All variants share the same behaviour profile; quantisation does not filter or sanitise model outputs. Users must apply their own content-safety layer before deployment.
# Citation
```bibtex
@misc{friday2025,
title = {Friday VLM: Efficient Instruction-Tuned Vision–Language Modelling},
author = {Your Name et al.},
year = {2025},
url = {https://huggingface.co/kevin510/friday}
}
```
|
tensorblock/sn29_merged_v4-GGUF
|
tensorblock
| 2025-06-19T00:18:17Z | 13 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:luaqi/sn29_merged_v4",
"base_model:quantized:luaqi/sn29_merged_v4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-21T02:21:20Z |
---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: luaqi/sn29_merged_v4
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## luaqi/sn29_merged_v4 - GGUF
This repo contains GGUF format model files for [luaqi/sn29_merged_v4](https://huggingface.co/luaqi/sn29_merged_v4).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [sn29_merged_v4-Q2_K.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q2_K.gguf) | Q2_K | 2.923 GB | smallest, significant quality loss - not recommended for most purposes |
| [sn29_merged_v4-Q3_K_S.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q3_K_S.gguf) | Q3_K_S | 3.340 GB | very small, high quality loss |
| [sn29_merged_v4-Q3_K_M.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q3_K_M.gguf) | Q3_K_M | 3.626 GB | very small, high quality loss |
| [sn29_merged_v4-Q3_K_L.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q3_K_L.gguf) | Q3_K_L | 3.796 GB | small, substantial quality loss |
| [sn29_merged_v4-Q4_0.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q4_0.gguf) | Q4_0 | 3.983 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sn29_merged_v4-Q4_K_S.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q4_K_S.gguf) | Q4_K_S | 4.200 GB | small, greater quality loss |
| [sn29_merged_v4-Q4_K_M.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q4_K_M.gguf) | Q4_K_M | 4.507 GB | medium, balanced quality - recommended |
| [sn29_merged_v4-Q5_0.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q5_0.gguf) | Q5_0 | 4.792 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sn29_merged_v4-Q5_K_S.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q5_K_S.gguf) | Q5_K_S | 4.894 GB | large, low quality loss - recommended |
| [sn29_merged_v4-Q5_K_M.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q5_K_M.gguf) | Q5_K_M | 5.156 GB | large, very low quality loss - recommended |
| [sn29_merged_v4-Q6_K.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q6_K.gguf) | Q6_K | 6.047 GB | very large, extremely low quality loss |
| [sn29_merged_v4-Q8_0.gguf](https://huggingface.co/tensorblock/sn29_merged_v4-GGUF/blob/main/sn29_merged_v4-Q8_0.gguf) | Q8_0 | 7.319 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/sn29_merged_v4-GGUF --include "sn29_merged_v4-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/sn29_merged_v4-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
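The downloaded file can likewise be driven from Python through the third-party `llama-cpp-python` bindings — a sketch under the assumption that the ChatML template above applies; the file name and settings are illustrative:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="MY_LOCAL_DIR/sn29_merged_v4-Q4_K_M.gguf", n_ctx=2048)

# Fill the template from the "Prompt template" section by hand.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSay hello in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```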
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step25
|
rosieyzh
| 2025-06-19T00:15:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:13:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
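No starter code has been provided; a generic sketch, assuming this checkpoint loads as a standard `transformers` causal LM (recent transformers versions support the `olmo` architecture natively):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step25"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain next-token completion; prompt and length are illustrative.
inputs = tok("Deep learning systems", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```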
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step206
|
rosieyzh
| 2025-06-19T00:13:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:11:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
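No starter code has been provided; a generic sketch, assuming this checkpoint loads as a standard `transformers` causal LM (recent transformers versions support the `olmo` architecture natively):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step206"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain next-token completion; prompt and length are illustrative.
inputs = tok("The history of open language models", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```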
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tensorblock/internlm2_5-7b-chat-1m-GGUF
|
tensorblock
| 2025-06-19T00:13:12Z | 40 | 0 | null |
[
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"base_model:internlm/internlm2_5-7b-chat-1m",
"base_model:quantized:internlm/internlm2_5-7b-chat-1m",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-19T08:55:17Z |
---
pipeline_tag: text-generation
license: other
tags:
- TensorBlock
- GGUF
base_model: internlm/internlm2_5-7b-chat-1m
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## internlm/internlm2_5-7b-chat-1m - GGUF
This repo contains GGUF format model files for [internlm/internlm2_5-7b-chat-1m](https://huggingface.co/internlm/internlm2_5-7b-chat-1m).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<s><|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [internlm2_5-7b-chat-1m-Q2_K.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q2_K.gguf) | Q2_K | 2.799 GB | smallest, significant quality loss - not recommended for most purposes |
| [internlm2_5-7b-chat-1m-Q3_K_S.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q3_K_S.gguf) | Q3_K_S | 3.237 GB | very small, high quality loss |
| [internlm2_5-7b-chat-1m-Q3_K_M.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q3_K_M.gguf) | Q3_K_M | 3.567 GB | very small, high quality loss |
| [internlm2_5-7b-chat-1m-Q3_K_L.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q3_K_L.gguf) | Q3_K_L | 3.850 GB | small, substantial quality loss |
| [internlm2_5-7b-chat-1m-Q4_0.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q4_0.gguf) | Q4_0 | 4.147 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [internlm2_5-7b-chat-1m-Q4_K_S.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q4_K_S.gguf) | Q4_K_S | 4.177 GB | small, greater quality loss |
| [internlm2_5-7b-chat-1m-Q4_K_M.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q4_K_M.gguf) | Q4_K_M | 4.389 GB | medium, balanced quality - recommended |
| [internlm2_5-7b-chat-1m-Q5_0.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q5_0.gguf) | Q5_0 | 5.004 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [internlm2_5-7b-chat-1m-Q5_K_S.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q5_K_S.gguf) | Q5_K_S | 5.004 GB | large, low quality loss - recommended |
| [internlm2_5-7b-chat-1m-Q5_K_M.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q5_K_M.gguf) | Q5_K_M | 5.129 GB | large, very low quality loss - recommended |
| [internlm2_5-7b-chat-1m-Q6_K.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q6_K.gguf) | Q6_K | 5.914 GB | very large, extremely low quality loss |
| [internlm2_5-7b-chat-1m-Q8_0.gguf](https://huggingface.co/tensorblock/internlm2_5-7b-chat-1m-GGUF/blob/main/internlm2_5-7b-chat-1m-Q8_0.gguf) | Q8_0 | 7.659 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/internlm2_5-7b-chat-1m-GGUF --include "internlm2_5-7b-chat-1m-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/internlm2_5-7b-chat-1m-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
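As with other GGUF releases, the file can be loaded from Python via the third-party `llama-cpp-python` bindings — a minimal sketch; the chosen quant and settings are illustrative:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="MY_LOCAL_DIR/internlm2_5-7b-chat-1m-Q4_K_M.gguf",
    n_ctx=8192,  # the model targets very long contexts; raise as memory allows
)

# create_chat_completion applies the chat template shown above.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a 1M-token context enables."}]
)
print(out["choices"][0]["message"]["content"])
```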
|
kevin510/friday-4bit
|
kevin510
| 2025-06-19T00:10:20Z | 81 | 0 |
transformers
|
[
"transformers",
"safetensors",
"friday",
"text-generation",
"vision-language",
"multimodal",
"custom_code",
"4bit",
"quantization",
"bitsandbytes",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:liuhaotian/LLaVA-Pretrain",
"base_model:kevin510/fast-vit-hd",
"base_model:quantized:kevin510/fast-vit-hd",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-06T00:07:01Z |
---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
base_model:
- microsoft/Phi-4-mini-reasoning
- kevin510/fast-vit-hd
library_name: transformers
tags:
- vision-language
- multimodal
- friday
- custom_code
- 4bit
- quantization
- bitsandbytes
---
# Friday-VLM
Friday-VLM is a multimodal (image + text) LLM fine-tuned on image and text instruction data.
The architecture and config live in this repo, so callers must load the model with
`trust_remote_code=True`.
---
# Model variants
| Repo ID | Precision | File format | Typical VRAM* | Size on disk |
|---------|-----------|-------------|---------------|--------------|
| `kevin510/friday` | **bf16** (full) | `safetensors` | 100 % | 100 % |
| `kevin510/friday-fp4` | **fp4** (bitsandbytes int4) | `safetensors` | ≈ 30 % | ≈ 25 % |
---
# Dependencies
```bash
conda create --name friday python=3.12 -y
conda activate friday
pip install transformers torch torchvision deepspeed accelerate pillow einops timm bitsandbytes
```
# Quick start
```python
import torch
from PIL import Image
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.utils import logging
tok = AutoTokenizer.from_pretrained("kevin510/friday", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"kevin510/friday-4bit",
trust_remote_code=True,
load_in_4bit=True,
device_map="auto"
)
model.eval()
prompt = "Describe this image."
user_prompt = f"<|user|><image>\n{prompt}\n<|assistant|>"
inputs = tok(user_prompt, return_tensors="pt").to(model.device)
image = Image.open("my_image.jpg").convert("RGB")
with torch.no_grad():
out = model.generate(
**inputs,
max_new_tokens=256,
do_sample=False,
images=[image]
)
print(tok.decode(out[0], skip_special_tokens=False))
```
# Architecture at a glance
```
FastViT-HD ─▶ 3072-d patch embeddings ─▶ S2 6144-d patch embeddings ─▶ 2-layer MLP vision-adapter (6144 → 3072)

(vision tokens, 3072 d) ─┐
                         ├─► Φ-4-mini-reasoning (2.7 B params, hidden = 3072)
<text tokens, 3072 d> ───┘       │
                                 │ (standard self-attention only;
                                 │  language tower is frozen at finetune)
```
# Limitations & Responsible AI
Friday-VLM may hallucinate objects, invent facts, or reproduce societal biases.
All variants share the same behaviour profile; quantisation does not filter or sanitise model outputs. Users must apply their own content-safety layer before deployment.
# Citation
```bibtex
@misc{friday2025,
title = {Friday VLM: Efficient Instruction-Tuned Vision–Language Modelling},
author = {Your Name et al.},
year = {2025},
url = {https://huggingface.co/kevin510/friday}
}
```
|
chiruan/qwen2.5-7b-coder_V2-80steps
|
chiruan
| 2025-06-19T00:09:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T23:44:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
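No starter code has been provided; a generic sketch, assuming a standard Qwen2.5-style chat checkpoint (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chiruan/qwen2.5-7b-coder_V2-80steps"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```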
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step146
|
rosieyzh
| 2025-06-19T00:09:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:07:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
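No starter code has been provided; a generic sketch, assuming this checkpoint loads as a standard `transformers` causal LM (recent transformers versions support the `olmo` architecture natively):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step146"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain next-token completion; prompt and length are illustrative.
inputs = tok("Training curricula for small models", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```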
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step13
|
rosieyzh
| 2025-06-19T00:07:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T00:05:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
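No starter code has been provided; a generic sketch, assuming this checkpoint loads as a standard `transformers` causal LM (recent transformers versions support the `olmo` architecture natively):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step13"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain next-token completion; prompt and length are illustrative.
inputs = tok("Early training checkpoints often", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```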
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Fulbert001/fulbertia
|
Fulbert001
| 2025-06-19T00:06:08Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T23:14:13Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: fulbertai
---
# Fulbertia
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `fulbertai` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "fulbertai",
"lora_weights": "https://huggingface.co/Fulbert001/fulbertia/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Fulbert001/fulbertia', weight_name='lora.safetensors')
image = pipeline('fulbertai').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
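As a brief, hedged illustration of those options, here is a sketch using the diffusers adapter APIs (the adapter name and the 0.8 scale below are illustrative choices, not values from this card):

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'Fulbert001/fulbertia', weight_name='lora.safetensors', adapter_name='fulbertia'
)

# Apply the LoRA at less than full strength
pipeline.set_adapters(['fulbertia'], adapter_weights=[0.8])

# Or fuse it into the base weights for slightly faster repeated inference
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('fulbertai').images[0]
```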
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Fulbert001/fulbertia/discussions) to add images that show off what you’ve made with this LoRA.
|
ippanorc/animetic_light
|
ippanorc
| 2025-06-19T00:01:34Z | 0 | 5 | null |
[
"text-to-image",
"base_model:lllyasviel/FramePackI2V_HY",
"base_model:finetune:lllyasviel/FramePackI2V_HY",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-06-17T10:51:20Z |
---
license: apache-2.0
base_model:
- lllyasviel/FramePackI2V_HY
pipeline_tag: text-to-image
---
# Animetic Light
[<img src="./assets/images/main_image.webp" alt="main image" width="400">](./assets/images/main_image.webp)
[](README_ja.md)
[](README.md)
An experimental model for background generation and relighting targeting anime-style images.
This is a LoRA compatible with FramePack's 1-frame inference.
For photographic relighting, IC-Light V2 is recommended.
- [IC-Light V2 (Flux-based IC-Light models) · lllyasviel IC-Light · Discussion #98](https://github.com/lllyasviel/IC-Light/discussions/98)
- [IC-Light V2-Vary · lllyasviel IC-Light · Discussion #109](https://github.com/lllyasviel/IC-Light/discussions/109)
IC-Light V2-Vary is available on Hugging Face Spaces, while IC-Light V2 can be used via API on platforms like fal.ai.
- [fal-ai/iclight-v2](https://fal.ai/models/fal-ai/iclight-v2)
## Features
- Generates backgrounds based on prompts and performs relighting while preserving the character region.
## Generation Examples
[<img src="./assets/images/sample.webp" alt="animatic light sample" width="500">](./assets/images/sample.webp)
## How to Use
- The recommended image resolution is approximately 1024x1024 total pixels.
- Prepare an image with a simple, flat background.
Even with an image that already has a background, roughly filling that background area with a single color may still work.
[<img src="./assets/images/erased_bg_sample.webp" alt="erase bg sample" width="300">](./assets/images/erased_bg_sample.webp)
- Creating prompts with Gemini Flash 2.5 is recommended.
A prompt length of 700 characters or more (including spaces and line breaks) is recommended.
A helper prompt for Gemini Flash 2.5 that assists with prompt generation is available [here](./prompt.txt).
Given an image and a brief description, it will output a complete prompt.
- Example 1: <Image of a dragon\> "A dragon flying while breathing blue fire. The background is a volcano."
- Example 2: <Image of a dragon\> "Volcano"
- Add the following tags to the beginning of your prompt:
`illustration style`, (optional: `ambient lighting`, `dim lighting`)
- Note: Unsupported image styles (e.g., figure-like) may result in noise or residual backgrounds.
### ComfyUI Workflow
The workflow file is [here](./workflows/workflow_animetic_light.json)
A sample image is [here](./sample_images/dragon.webp)
Please use [xhiroga/ComfyUI-FramePackWrapper_PlusOne](https://github.com/xhiroga/ComfyUI-FramePackWrapper_PlusOne).
For model installation instructions, please refer to the documentation for your respective inference environment.
Also, please place `animetic_light.safetensors` in the `ComfyUI/models/loras` directory.
## Resources
### FramePack
- [GitHub - lllyasviel/FramePack: Lets make video diffusion practical!](https://github.com/lllyasviel/FramePack)
### Inference Environments
- [GitHub - xhiroga/ComfyUI-FramePackWrapper_PlusOne](https://github.com/xhiroga/ComfyUI-FramePackWrapper_PlusOne)
- [GitHub - kijai/ComfyUI-FramePackWrapper](https://github.com/kijai/ComfyUI-FramePackWrapper)
- [GitHub - git-ai-code/FramePack-eichi](https://github.com/git-ai-code/FramePack-eichi)
### Training Environments
- [GitHub - kohya-ss/musubi-tuner](https://github.com/kohya-ss/musubi-tuner)
- [musubi-tuner/docs/framepack_1f.md at main · kohya-ss/musubi-tuner](https://github.com/kohya-ss/musubi-tuner/blob/main/docs/framepack_1f.md)
---
[](https://x.com/ippanorc)
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_episode7
|
rosieyzh
| 2025-06-18T23:57:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T23:55:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
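In the meantime, here is a minimal, assumption-based sketch using this checkpoint's repo id (it assumes a transformers release with native OLMo support):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rosieyzh/OLMo-1B-as_fm3_tg_omi2_episode7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain completion; this checkpoint's intended prompting style is not documented.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```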
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rosieyzh/OLMo-1B-as_fm3_tg_omi2_episode6
|
rosieyzh
| 2025-06-18T23:54:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T23:52:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Darkhn/L3.3-70B-Animus-V2
|
Darkhn
| 2025-06-18T23:50:46Z | 0 | 1 | null |
[
"safetensors",
"llama",
"llama-3.3",
"finetune",
"roleplay",
"chat",
"wings-of-fire",
"dataset:Darkhn/WOF_QA_V2",
"dataset:Darkhn/WOF_Pretraining",
"dataset:Darkhn/WOF_V3_Combined_Dataset",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"region:us"
] | null | 2025-06-18T21:19:29Z |
---
license: llama3.3
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- llama-3.3
- finetune
- roleplay
- chat
- wings-of-fire
datasets:
- Darkhn/WOF_QA_V2
- Darkhn/WOF_Pretraining
- Darkhn/WOF_V3_Combined_Dataset
---
# Model Name - L3.3-70B-Animus-V2
<img src="GCED3P230E6T114CS8HVEWR0V0.jpeg" alt="Wings_of_Fire" width="700"/>
## Character Card & Lore Book
For the best roleplaying experience, it is highly recommended to use the provided character card and lore book. These files help guide the model's persona and provide rich, in-universe context.
[**Download the Character Card and Lore Book here**](https://huggingface.co/Darkhn/L3.3-70B-Animus-V2/tree/main/character%20card)
## Model Description
This is a fine-tuned version of `meta-llama/Llama-3.3-70B-Instruct` specialized for roleplaying and instruction-following within the *Wings of Fire* universe. This version represents a significant upgrade in data quality, roleplaying capability, and base model architecture.
The model was first adapted on a 3-million-token dataset extracted from the *Wings of Fire* book series to build a strong foundation of domain knowledge. It was then fine-tuned for 1 epoch on an expanded and cleaned dataset of conversational and roleplay examples.
The goal of this model is to provide a high-quality, immersive, and lore-accurate conversational experience. It can adopt character personas, answer questions about the world, engage in creative storytelling, portray multiple characters at once, and handle more mature themes from the series.
## Training Details
### Training Hardware
The model was fine-tuned on a single NVIDIA H100 GPU.
### Training Procedure
A QLoRA (Quantized Low-Rank Adaptation) approach was used for efficient fine-tuning, with an optimized process configured using Axolotl.
### Training Data
The training process involved two main stages:
1. **Domain Adaptation (Pre-training):** The base model was adapted to the *Wings of Fire* universe using the `Darkhn/WOF_Pretraining` dataset, containing **3 million tokens** compiled directly from the book series. This step saturated the model with the specific lore, characters, and writing style of the source material.
2. **Instruction & Chat Fine-tuning:** The model was fine-tuned for **1 epoch** on a mixed dataset of **5,000 examples**:
* **Roleplay Scenarios (4,200 examples):** From `Darkhn/WOF_V3_Combined_Dataset`. This new dataset features high-quality, multi-turn roleplay. It was specifically curated to teach the model advanced skills like portraying **multiple characters simultaneously** and handling the **more mature or 'darker' themes** (approx. 30% of examples) present in the book series. The data was cleaned to remove formatting artifacts like asterisks.
* **QA & Assistant (800 examples):** From `Darkhn/WOF_QA_V2`. These are instruction-response pairs focused on answering lore questions and following commands within the context of the *Wings of Fire* world.
## Intended Use & Limitations
* **Intended Use:** This model is intended for creative and roleplaying purposes within the *Wings of Fire* universe. It is designed for fans of the series and is not a general-purpose chatbot.
* **Limitations & Quirks:**
* Performance on tasks outside of its training domain (general knowledge, coding, etc.) is not guaranteed and will likely be poor.
* The model may "hallucinate" or generate plausible but non-canonical information.
* **Content:** The roleplay training data includes more mature and darker themes from the *Wings of Fire* series, such as character death, conflict, and moral ambiguity. The model is capable of generating content reflecting these themes. It can generate gratuitous or explicit content; as always, it is up to the user what they do with it.
* **Formatting:** The training data was cleaned to remove formatting artifacts like asterisks (`*...*`) used for single-word emphasis. The model should now produce cleaner, more narrative-style prose compared to previous versions.
* **Safety:** This model has not undergone additional safety alignment beyond what was included in its base Llama 3.3 model. Standard responsible AI practices should be followed.
## Recommended Sampler Settings
For optimal performance that balances creativity and coherence, the following default sampler settings are recommended; a minimal sketch applying them follows the list.
* **Temperature:** 0.8-1.1
* **Min_P:** 0.02
* **DRY Sampler:**
* **Multiplier:** 0.8
* **Allowed Length:** 4
* **Base:** 1.75
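Below is a minimal, assumption-based sketch of applying these settings with 🤗 transformers. Note that `min_p` requires a recent transformers release, and the DRY sampler is a feature of frontends such as koboldcpp or SillyTavern rather than of `generate`, so it is omitted here.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Darkhn/L3.3-70B-Animus-V2"  # 70B weights: expect multi-GPU or heavy offloading
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Describe a sunrise over the Sky Kingdom."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.9,  # within the recommended 0.8-1.1 range
    min_p=0.02,       # recommended Min_P
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```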
## Acknowledgements
* Credit to Meta for the powerful Llama 3.3 architecture.
* Credit to Evan Armstrong for [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) to generate the dataset.
|
tensorblock/bigyi-15b-GGUF
|
tensorblock
| 2025-06-18T23:44:11Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"TensorBlock",
"GGUF",
"base_model:abacusai/bigyi-15b",
"base_model:quantized:abacusai/bigyi-15b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-11-12T03:53:16Z |
---
base_model: abacusai/bigyi-15b
library_name: transformers
tags:
- mergekit
- merge
- TensorBlock
- GGUF
license: other
license_name: yi-license
license_link: LICENSE
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## abacusai/bigyi-15b - GGUF
This repo contains GGUF format model files for [abacusai/bigyi-15b](https://huggingface.co/abacusai/bigyi-15b).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [bigyi-15b-Q2_K.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q2_K.gguf) | Q2_K | 5.256 GB | smallest, significant quality loss - not recommended for most purposes |
| [bigyi-15b-Q3_K_S.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q3_K_S.gguf) | Q3_K_S | 6.125 GB | very small, high quality loss |
| [bigyi-15b-Q3_K_M.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q3_K_M.gguf) | Q3_K_M | 6.816 GB | very small, high quality loss |
| [bigyi-15b-Q3_K_L.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q3_K_L.gguf) | Q3_K_L | 7.415 GB | small, substantial quality loss |
| [bigyi-15b-Q4_0.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q4_0.gguf) | Q4_0 | 7.955 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [bigyi-15b-Q4_K_S.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q4_K_S.gguf) | Q4_K_S | 8.009 GB | small, greater quality loss |
| [bigyi-15b-Q4_K_M.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q4_K_M.gguf) | Q4_K_M | 8.431 GB | medium, balanced quality - recommended |
| [bigyi-15b-Q5_0.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q5_0.gguf) | Q5_0 | 9.678 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [bigyi-15b-Q5_K_S.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q5_K_S.gguf) | Q5_K_S | 9.678 GB | large, low quality loss - recommended |
| [bigyi-15b-Q5_K_M.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q5_K_M.gguf) | Q5_K_M | 9.923 GB | large, very low quality loss - recommended |
| [bigyi-15b-Q6_K.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q6_K.gguf) | Q6_K | 11.508 GB | very large, extremely low quality loss |
| [bigyi-15b-Q8_0.gguf](https://huggingface.co/tensorblock/bigyi-15b-GGUF/blob/main/bigyi-15b-Q8_0.gguf) | Q8_0 | 14.904 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/bigyi-15b-GGUF --include "bigyi-15b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/bigyi-15b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
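Once downloaded, here is a minimal sketch of running the file locally with llama.cpp's `llama-cli` (assuming the binary is built; bigyi-15b is a base model, so plain completion is shown rather than a chat template):

```shell
./llama-cli -m MY_LOCAL_DIR/bigyi-15b-Q4_K_M.gguf \
  -p "Once upon a time" \
  -n 128
```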
|
tensorblock/Qwen2.5-32B-Instruct-GGUF
|
tensorblock
| 2025-06-18T23:43:47Z | 145 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-12T02:08:47Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- chat
- TensorBlock
- GGUF
library_name: transformers
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## Qwen/Qwen2.5-32B-Instruct - GGUF
This repo contains GGUF format model files for [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit ec7f3ac](https://github.com/ggerganov/llama.cpp/commit/ec7f3ac9ab33e46b136eb5ab6a76c4d81f57c7f1).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2.5-32B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q2_K.gguf) | Q2_K | 12.313 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2.5-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 14.392 GB | very small, high quality loss |
| [Qwen2.5-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 15.935 GB | very small, high quality loss |
| [Qwen2.5-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 17.247 GB | small, substantial quality loss |
| [Qwen2.5-32B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_0.gguf) | Q4_0 | 18.640 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2.5-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 18.784 GB | small, greater quality loss |
| [Qwen2.5-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 19.851 GB | medium, balanced quality - recommended |
| [Qwen2.5-32B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_0.gguf) | Q5_0 | 22.638 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 22.638 GB | large, low quality loss - recommended |
| [Qwen2.5-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 23.262 GB | large, very low quality loss - recommended |
| [Qwen2.5-32B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q6_K.gguf) | Q6_K | 26.886 GB | very large, extremely low quality loss |
| [Qwen2.5-32B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q8_0.gguf) | Q8_0 | 34.821 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Qwen2.5-32B-Instruct-GGUF --include "Qwen2.5-32B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Qwen2.5-32B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
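Once downloaded, here is a hedged sketch of local chat inference with llama.cpp's `llama-cli` (assuming the binary is built); conversation mode applies the chat template embedded in the GGUF metadata:

```shell
./llama-cli -m MY_LOCAL_DIR/Qwen2.5-32B-Instruct-Q4_K_M.gguf -cnv \
  --temp 0.7 -c 4096
```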
|
morturr/Mistral-7B-v0.1-dadjokes-seed-42-2025-06-18
|
morturr
| 2025-06-18T23:39:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T23:39:19Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-dadjokes-seed-42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-dadjokes-seed-42-2025-06-18
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
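Since the card does not yet include usage code, here is a minimal, assumption-based sketch for loading this adapter on its base model with PEFT:

```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(
    base, "morturr/Mistral-7B-v0.1-dadjokes-seed-42-2025-06-18"
)
```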
|
nnilayy/dreamer-valence-binary-classification-Kfold-5
|
nnilayy
| 2025-06-18T23:37:16Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T23:37:14Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
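A hedged sketch of loading such a checkpoint: `from_pretrained` must be called on the original model class that was pushed, which this card does not name, so the class below is hypothetical.

```py
# Hypothetical class name; substitute the actual nn.Module subclass
# (inheriting PyTorchModelHubMixin) that produced this checkpoint.
from my_project import DreamerValenceClassifier  # assumption

model = DreamerValenceClassifier.from_pretrained(
    "nnilayy/dreamer-valence-binary-classification-Kfold-5"
)
```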
|
tensorblock/LLaMA3-iterative-DPO-final-GGUF
|
tensorblock
| 2025-06-18T23:36:24Z | 27 | 0 | null |
[
"gguf",
"TensorBlock",
"GGUF",
"base_model:RLHFlow/LLaMA3-iterative-DPO-final",
"base_model:quantized:RLHFlow/LLaMA3-iterative-DPO-final",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-10T06:16:41Z |
---
license: llama3
tags:
- TensorBlock
- GGUF
base_model: RLHFlow/LLaMA3-iterative-DPO-final
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## RLHFlow/LLaMA3-iterative-DPO-final - GGUF
This repo contains GGUF format model files for [RLHFlow/LLaMA3-iterative-DPO-final](https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [LLaMA3-iterative-DPO-final-Q2_K.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
| [LLaMA3-iterative-DPO-final-Q3_K_S.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
| [LLaMA3-iterative-DPO-final-Q3_K_M.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
| [LLaMA3-iterative-DPO-final-Q3_K_L.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
| [LLaMA3-iterative-DPO-final-Q4_0.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [LLaMA3-iterative-DPO-final-Q4_K_S.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
| [LLaMA3-iterative-DPO-final-Q4_K_M.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
| [LLaMA3-iterative-DPO-final-Q5_0.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [LLaMA3-iterative-DPO-final-Q5_K_S.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
| [LLaMA3-iterative-DPO-final-Q5_K_M.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
| [LLaMA3-iterative-DPO-final-Q6_K.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
| [LLaMA3-iterative-DPO-final-Q8_0.gguf](https://huggingface.co/tensorblock/LLaMA3-iterative-DPO-final-GGUF/blob/main/LLaMA3-iterative-DPO-final-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/LLaMA3-iterative-DPO-final-GGUF --include "LLaMA3-iterative-DPO-final-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/LLaMA3-iterative-DPO-final-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
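Once downloaded, here is a hedged sketch of serving the model with llama.cpp's OpenAI-compatible `llama-server` (assuming the binary is built):

```shell
./llama-server -m MY_LOCAL_DIR/LLaMA3-iterative-DPO-final-Q4_K_M.gguf --port 8080 -c 4096

# Then, from another shell:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```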
|
deokoon/tutorial_merged_model
|
deokoon
| 2025-06-18T23:29:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T23:23:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
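In the meantime, here is a minimal, assumption-based sketch using this repo's id (the tags indicate a conversational Llama-architecture model, so a chat template is assumed):

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deokoon/tutorial_merged_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello! Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```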
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb2-seed7-2025-06-19
|
morturr
| 2025-06-18T23:26:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T23:26:21Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb2-seed7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb2-seed7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
bartowski/arcee-ai_Caller-GGUF
|
bartowski
| 2025-06-18T23:23:49Z | 0 | 1 | null |
[
"gguf",
"text-generation",
"base_model:arcee-ai/Caller",
"base_model:quantized:arcee-ai/Caller",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-06-18T16:46:12Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: arcee-ai/Caller
license: apache-2.0
base_model_relation: quantized
library_name: transformers
---
## Llamacpp imatrix Quantizations of Caller by arcee-ai
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5697">b5697</a> for quantization.
Original model: https://huggingface.co/arcee-ai/Caller
All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp-based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Caller-bf16.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/tree/main/arcee-ai_Caller-bf16) | bf16 | 65.54GB | true | Full BF16 weights. |
| [Caller-Q8_0.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q8_0.gguf) | Q8_0 | 34.82GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Caller-Q6_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q6_K_L.gguf) | Q6_K_L | 27.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Caller-Q6_K.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q6_K.gguf) | Q6_K | 26.89GB | false | Very high quality, near perfect, *recommended*. |
| [Caller-Q5_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q5_K_L.gguf) | Q5_K_L | 23.74GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Caller-Q5_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q5_K_M.gguf) | Q5_K_M | 23.26GB | false | High quality, *recommended*. |
| [Caller-Q5_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q5_K_S.gguf) | Q5_K_S | 22.64GB | false | High quality, *recommended*. |
| [Caller-Q4_1.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q4_1.gguf) | Q4_1 | 20.64GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Caller-Q4_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q4_K_L.gguf) | Q4_K_L | 20.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Caller-Q4_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q4_K_M.gguf) | Q4_K_M | 19.85GB | false | Good quality, default size for most use cases, *recommended*. |
| [Caller-Q4_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q4_K_S.gguf) | Q4_K_S | 18.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Caller-Q4_0.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q4_0.gguf) | Q4_0 | 18.71GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Caller-IQ4_NL.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ4_NL.gguf) | IQ4_NL | 18.68GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Caller-Q3_K_XL.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q3_K_XL.gguf) | Q3_K_XL | 17.93GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Caller-IQ4_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ4_XS.gguf) | IQ4_XS | 17.69GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Caller-Q3_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q3_K_L.gguf) | Q3_K_L | 17.25GB | false | Lower quality but usable, good for low RAM availability. |
| [Caller-Q3_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q3_K_M.gguf) | Q3_K_M | 15.94GB | false | Low quality. |
| [Caller-IQ3_M.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ3_M.gguf) | IQ3_M | 14.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Caller-Q3_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q3_K_S.gguf) | Q3_K_S | 14.39GB | false | Low quality, not recommended. |
| [Caller-IQ3_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ3_XS.gguf) | IQ3_XS | 13.71GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Caller-Q2_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q2_K_L.gguf) | Q2_K_L | 13.07GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Caller-IQ3_XXS.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ3_XXS.gguf) | IQ3_XXS | 12.84GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Caller-Q2_K.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-Q2_K.gguf) | Q2_K | 12.31GB | false | Very low quality but surprisingly usable. |
| [Caller-IQ2_M.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ2_M.gguf) | IQ2_M | 11.26GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Caller-IQ2_S.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ2_S.gguf) | IQ2_S | 10.39GB | false | Low quality, uses SOTA techniques to be usable. |
| [Caller-IQ2_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ2_XS.gguf) | IQ2_XS | 9.96GB | false | Low quality, uses SOTA techniques to be usable. |
| [Caller-IQ2_XXS.gguf](https://huggingface.co/bartowski/arcee-ai_Caller-GGUF/blob/main/arcee-ai_Caller-IQ2_XXS.gguf) | IQ2_XXS | 9.03GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/arcee-ai_Caller-GGUF --include "arcee-ai_Caller-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/arcee-ai_Caller-GGUF --include "arcee-ai_Caller-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (arcee-ai_Caller-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights, detailed in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do so automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541) which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
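As a rough illustration of this sizing rule, here is a small Python sketch (the hardware numbers are assumptions; substitute your own):
```python
# Rough helper for the 1-2GB headroom rule described above.
# All values are illustrative; replace them with your actual hardware specs.
vram_gb = 24.0         # e.g. a 24GB GPU
system_ram_gb = 32.0

max_size_for_speed = vram_gb - 2.0                    # whole model in VRAM
max_size_for_quality = vram_gb + system_ram_gb - 2.0  # spill into system RAM

print(f"For speed, pick a quant under {max_size_for_speed:.0f}GB")
print(f"For max quality, pick a quant under {max_size_for_quality:.0f}GB")
```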
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide on.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
miniHui/Geo-SFT
|
miniHui
| 2025-06-18T23:22:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-18T22:51:20Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the cities_sft_16k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF
|
mradermacher
| 2025-06-18T23:22:45Z | 322 | 1 |
transformers
|
[
"transformers",
"gguf",
"qwen2.5",
"cybersecurity",
"ethicalhacking",
"informationsecurity",
"pentest",
"en",
"base_model:AlicanKiraz0/Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity",
"base_model:quantized:AlicanKiraz0/Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-12T10:28:45Z |
---
base_model: AlicanKiraz0/Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- qwen2.5
- cybersecurity
- ethicalhacking
- informationsecurity
- pentest
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AlicanKiraz0/Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
muskch032/Weiver-U1-4B-GGUF
|
muskch032
| 2025-06-18T23:22:38Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:muskch032/Weiver-U1-4B",
"base_model:quantized:muskch032/Weiver-U1-4B",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T17:40:25Z |
---
base_model:
- muskch032/Weiver-U1-4B
pipeline_tag: text-generation
quantized_by: muskch032
license: other
---
## License
This model is licensed under a **custom Research-Only License** created by the Weiver-U1 team.
- 📘 Non-commercial research use only
- 🚫 No redistribution allowed
- 📄 See [LICENSE](./LICENSE) for full terms
|
mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF
|
mradermacher
| 2025-06-18T23:22:17Z | 911 | 1 |
transformers
|
[
"transformers",
"gguf",
"qwen2.5",
"cybersecurity",
"ethicalhacking",
"informationsecurity",
"pentest",
"en",
"base_model:AlicanKiraz0/Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity",
"base_model:quantized:AlicanKiraz0/Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-12T12:03:48Z |
---
base_model: AlicanKiraz0/Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- qwen2.5
- cybersecurity
- ethicalhacking
- informationsecurity
- pentest
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/AlicanKiraz0/Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-i1-GGUF/resolve/main/SenecaLLM_x_Qwen2.5-7B-CyberSecurity.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mesolitica/Malaysian-Qwen2.5-14B-Reasoning-GRPO
|
mesolitica
| 2025-06-18T23:21:58Z | 59 | 1 | null |
[
"safetensors",
"qwen2",
"ms",
"en",
"dataset:mesolitica/Malaysian-Reasoning",
"base_model:mesolitica/Malaysian-Qwen2.5-14B-Reasoning-SFT",
"base_model:finetune:mesolitica/Malaysian-Qwen2.5-14B-Reasoning-SFT",
"region:us"
] | null | 2025-06-01T05:45:12Z |
---
language:
- ms
- en
datasets:
- mesolitica/Malaysian-Reasoning
base_model:
- mesolitica/Malaysian-Qwen2.5-14B-Reasoning-SFT
---
# Malaysian Qwen 2.5 14B Instruct Reasoning GRPO
Online reinforcement learning using GRPO with full-parameter training on the warmup reasoning SFT model https://huggingface.co/mesolitica/Malaysian-Qwen2.5-14B-Reasoning-SFT, using a highly curated Malaysian Reasoning dataset.
## Improvement
1. Multitask reasoning; each datapoint has been replicated to 4 generations.
2. Actual online reinforcement learning.
## Better performance
To get better performance, use system prompt `You are going to enter reasoning mode. First, you try to think step-by-step in Malay. After that, put your final answer within $\\boxed{}$.`
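For example, applied with the 🤗 `transformers` chat template (a minimal sketch; the user question and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mesolitica/Malaysian-Qwen2.5-14B-Reasoning-GRPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are going to enter reasoning mode. First, you try to think step-by-step in Malay. After that, put your final answer within $\\boxed{}$."},
    {"role": "user", "content": "Berapakah nilai 12 * 7?"},  # illustrative question
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```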
## Training session
Finetuned on [combine/combined-malaysian-reasoning.jsonl](https://huggingface.co/datasets/mesolitica/Malaysian-SFT/blob/main/combine/combined-malaysian-reasoning.jsonl), which is the train set from [mesolitica/Malaysian-Reasoning](https://huggingface.co/datasets/mesolitica/Malaysian-Reasoning).
## How we train
1. GRPO full parameters.
2. WandB at https://wandb.ai/huseinzol05/fpf-Malaysian-Qwen2.5-14B-Reasoning-SFT-GRPO
## Checkpoints
1. Epoch 1.0, revision [cc1032dfe961a56a3e33e36f03c37ed09b33c7fe](https://huggingface.co/mesolitica/Malaysian-Qwen2.5-14B-Reasoning-GRPO/commit/cc1032dfe961a56a3e33e36f03c37ed09b33c7fe)
2. Epoch 2.0, revision [90896edeb1eb18cb48ac682ad606d4ec51172941](https://huggingface.co/mesolitica/Malaysian-Qwen2.5-14B-Reasoning-GRPO/commit/90896edeb1eb18cb48ac682ad606d4ec51172941)
## Source code
Source code at https://github.com/mesolitica/malaya/blob/master/session/qwen2.5/14b-grpo-fsdp.sh
## Benchmark
### Dialect Translation
All the benchmarks are generated using vLLM; evaluation is based on sacrebleu CHRF max@5.
Source code for evaluation at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5/evaluate-dialect
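For reference, CHRF can be computed with `sacrebleu` roughly as follows (a sketch with made-up sentences; max@5 keeps the best score over 5 sampled generations):
```python
from sacrebleu.metrics import CHRF

chrf = CHRF()
# Made-up data: score each sampled translation against the reference
# and keep the maximum (only 2 of the 5 samples are shown here).
candidates = ["Saya nak pergi ke pasar.", "Aku hendak ke pasar."]
reference = "Saya hendak pergi ke pasar."

best = max(chrf.sentence_score(c, [reference]).score for c in candidates)
print(best)
```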
Dialect to standard Malay,
```
From: johor To: malay, score: 58.84114972328088
From: kedah To: malay, score: 61.23535640853852
From: pahang To: malay, score: 60.538184656921736
From: negeri sembilan To: malay, score: 59.33677942673728
From: kelantan To: malay, score: 53.67899007513317
From: penang To: malay, score: 64.65390412500909
From: melaka To: malay, score: 58.63391894024569
average: 59.55975476512377
```
Standard Malay to dialect,
```
From: malay To: johor, score: 55.69851104502618
From: malay To: kedah, score: 56.537698809297844
From: malay To: pahang, score: 61.46337868712478
From: malay To: negeri sembilan, score: 52.483104592534914
From: malay To: kelantan, score: 45.44384811848678
From: malay To: penang, score: 68.91583154150995
From: malay To: melaka, score: 72.5073072144931
average: 59.00709714406764
```
### MalayMMLU
Source code for evaluation at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5/evaluate-malaymmlu
Evaluation based on Accuracy@1,
```
STEM 79.00122799836267
Language 78.76908396946564
Social science 70.88753975137323
Others 73.23099064523866
Humanities 76.22298065984073
average 75.62236460485619
```
Evaluation based on Accuracy@5,
```
STEM 78.87024150634467
Language 79.04580152671755
Social science 70.88464874241109
Others 73.29335572079636
Humanities 76.37315130830488
average 75.69343976091491
```
## Special thanks
Special thanks to https://www.sns.com.my and Nvidia for an 8x H100 node!
|
nnilayy/deap-dominance-binary-classification-Kfold-5
|
nnilayy
| 2025-06-18T23:21:58Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T23:21:57Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
sanchit42/qwen3-0.6B-instruct-29reports-lora256-reason
|
sanchit42
| 2025-06-18T23:21:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T23:20:22Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Spestly/Ares-4B-Q4_K_M-GGUF
|
Spestly
| 2025-06-18T23:20:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Spestly/Ares-4B",
"base_model:quantized:Spestly/Ares-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-18T23:20:18Z |
---
base_model: Spestly/Ares-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# Spestly/Ares-4B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Spestly/Ares-4B`](https://huggingface.co/Spestly/Ares-4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Ares-4B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Spestly/Ares-4B-Q4_K_M-GGUF --hf-file ares-4b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Spestly/Ares-4B-Q4_K_M-GGUF --hf-file ares-4b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Spestly/Ares-4B-Q4_K_M-GGUF --hf-file ares-4b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Spestly/Ares-4B-Q4_K_M-GGUF --hf-file ares-4b-q4_k_m.gguf -c 2048
```
|
yalhessi/lemexp-task1-v2-lemma_object_full-deepseek-coder-1.3b-base-ddp-8lr-v2
|
yalhessi
| 2025-06-18T23:15:58Z | 134 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"region:us"
] | null | 2025-06-01T13:41:09Z |
---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v2-lemma_object_full-deepseek-coder-1.3b-base-ddp-8lr-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lemexp-task1-v2-lemma_object_full-deepseek-coder-1.3b-base-ddp-8lr-v2
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.4519 | 0.2 | 3094 | 0.4528 |
| 0.4192 | 0.4 | 6188 | 0.4162 |
| 0.4051 | 0.6 | 9282 | 0.4043 |
| 0.3928 | 0.8 | 12376 | 0.3904 |
| 0.3846 | 1.0 | 15470 | 0.3827 |
| 0.3797 | 1.2 | 18564 | 0.3772 |
| 0.3744 | 1.4 | 21658 | 0.3697 |
| 0.3697 | 1.6 | 24752 | 0.3640 |
| 0.3643 | 1.8 | 27846 | 0.3624 |
| 0.3614 | 2.0 | 30940 | 0.3526 |
| 0.3546 | 2.2 | 34034 | 0.3512 |
| 0.3503 | 2.4 | 37128 | 0.3487 |
| 0.345 | 2.6 | 40222 | 0.3421 |
| 0.3449 | 2.8 | 43316 | 0.3431 |
| 0.3421 | 3.0 | 46410 | 0.3432 |
| 0.335 | 3.2 | 49504 | 0.3359 |
| 0.3351 | 3.4 | 52598 | 0.3336 |
| 0.33 | 3.6 | 55692 | 0.3340 |
| 0.3283 | 3.8 | 58786 | 0.3282 |
| 0.3266 | 4.0 | 61880 | 0.3166 |
| 0.317 | 4.2 | 64974 | 0.3149 |
| 0.3122 | 4.4 | 68068 | 0.3149 |
| 0.313 | 4.6 | 71162 | 0.3147 |
| 0.3043 | 4.8 | 74256 | 0.3130 |
| 0.3019 | 5.0 | 77350 | 0.3036 |
| 0.2952 | 5.2 | 80444 | 0.3000 |
| 0.2996 | 5.4 | 83538 | 0.3003 |
| 0.2957 | 5.6 | 86632 | 0.2993 |
| 0.2935 | 5.8 | 89726 | 0.3047 |
| 0.2885 | 6.0 | 92820 | 0.2928 |
| 0.2755 | 6.2 | 95914 | 0.2915 |
| 0.2763 | 6.4 | 99008 | 0.2875 |
| 0.2755 | 6.6 | 102102 | 0.2855 |
| 0.2811 | 6.8 | 105196 | 0.2812 |
| 0.2704 | 7.0 | 108290 | 0.2796 |
| 0.26 | 7.2 | 111384 | 0.2776 |
| 0.2564 | 7.4 | 114478 | 0.2691 |
| 0.2613 | 7.6 | 117572 | 0.2702 |
| 0.2568 | 7.8 | 120666 | 0.2684 |
| 0.2579 | 8.0 | 123760 | 0.2643 |
| 0.2422 | 8.2 | 126854 | 0.2624 |
| 0.243 | 8.4 | 129948 | 0.2619 |
| 0.2421 | 8.6 | 133042 | 0.2583 |
| 0.2455 | 8.8 | 136136 | 0.2575 |
| 0.2428 | 9.0 | 139230 | 0.2511 |
| 0.2286 | 9.2 | 142324 | 0.2478 |
| 0.227 | 9.4 | 145418 | 0.2507 |
| 0.2246 | 9.6 | 148512 | 0.2474 |
| 0.2273 | 9.8 | 151606 | 0.2452 |
| 0.2211 | 10.0 | 154700 | 0.2432 |
| 0.2117 | 10.2 | 157794 | 0.2434 |
| 0.2098 | 10.4 | 160888 | 0.2377 |
| 0.2092 | 10.6 | 163982 | 0.2376 |
| 0.2073 | 10.8 | 167076 | 0.2355 |
| 0.2051 | 11.0 | 170170 | 0.2303 |
| 0.1966 | 11.2 | 173264 | 0.2321 |
| 0.1923 | 11.4 | 176358 | 0.2294 |
| 0.1913 | 11.6 | 179452 | 0.2275 |
| 0.189 | 11.8 | 182546 | 0.2267 |
| 0.1924 | 12.0 | 185640 | 0.2261 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
justuswill/UQDM
|
justuswill
| 2025-06-18T23:14:42Z | 0 | 0 | null |
[
"compression",
"diffusion",
"dataset:uoft-cs/cifar10",
"dataset:student/ImageNet-64",
"license:mit",
"region:us"
] | null | 2025-06-18T21:52:37Z |
---
tags:
- compression
- diffusion
license: mit
datasets:
- uoft-cs/cifar10
- student/ImageNet-64
metrics:
- bpps
- psnr
---
# Progressive Compression with Universally Quantized Diffusion Models
Official implementation of our ICLR 2025 paper [Progressive Compression with Universally Quantized Diffusion Models](https://www.justuswill.com/uqdm/) by Yibo Yang, Justus Will, and Stephan Mandt.
## TLDR
Our new form of diffusion model, UQDM, enables practical progressive compression with an unconditional diffusion model - avoiding the computational intractability of Gaussian channel simulation by using universal quantization.
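The core trick, universal (dithered) quantization, fits in a few lines (an illustrative sketch, not the repository's actual code): with a dither shared between sender and receiver, `round(x + u) - u` behaves like `x` plus independent uniform noise, which is what makes the noise channel cheap to simulate.
```python
import numpy as np

rng = np.random.default_rng(0)  # a shared seed stands in for the shared dither
x = rng.normal(size=5)

u = rng.uniform(-0.5, 0.5, size=x.shape)  # dither known to both sides
q = np.round(x + u)   # integers: the only thing actually transmitted
x_hat = q - u         # reconstruction error is Uniform(-0.5, 0.5), independent of x
print(x, x_hat)
```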
## Setup
```
git clone https://github.com/mandt-lab/uqdm.git
cd uqdm
conda env create -f environment.yml
conda activate uqdm
```
For working with ImageNet64, download the npz dataset files (Train(64x64) part1, Train(64x64) part2, Val(64x64)) from the [official website](https://image-net.org/download-images.php) and place them in `./data/imagenet64`. Our implementation removes the duplicate test images listed in `./data/imagenet64/removed.npy` during loading.
## Usage
Load pretrained models by placing the `config.json` and `checkpoint.pt` in a shared folder and load them for example via
```python
from uqdm import load_checkpoint, load_data
model = load_checkpoint('checkpoints/uqdm-tiny')
train_iter, eval_iter = load_data('ImageNet64', model.config.data)
```
To train or evaluate call respectively via
```python
model.trainer(train_iter, eval_iter)
model.evaluate(eval_iter)
```
To save the compressed representation of an image and to reconstruct an image/images from their compressed representations, use
```python
image = next(iter(eval_iter))
compressed = model.compress(image)
reconstructions = model.decompress(compressed)
```
## Citation
```bibtex
@article{yang2025universal,
title={Progressive Compression with Universally Quantized Diffusion Models},
author={Yibo Yang and Justus Will and Stephan Mandt},
journal = {International Conference on Learning Representations},
year={2025}
}
```
|
RafaM97/Llama_LoraM
|
RafaM97
| 2025-06-18T23:12:05Z | 0 | 0 | null |
[
"safetensors",
"llama",
"es",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-18T21:43:55Z |
---
license: mit
language:
- es
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# 🤖 Interius LLM — Strategic Marketing Agent
This model has been fine-tuned with **QLoRA** on `LLaMA 3.1` to act as an intelligent advisor for advertising strategy. It is part of the **Interius** platform, a system that optimizes digital campaigns for educational institutions in LATAM.
## 📦 Training Dataset
Trained on an instruction dataset of more than 300 Alpaca-style examples with the following format (an illustrative datapoint follows the list):
- `instruction`: what the user asks
- `input`: additional context
- `output`: structured response covering copywriting, budget allocation, targeting, and more
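A made-up example of a datapoint in this format (illustrative only, not taken from the actual dataset):
```python
# Hypothetical Alpaca-style datapoint; field contents are invented for illustration.
example = {
    "instruction": "¿Cómo distribuir $50,000 entre Google y Meta para una campaña educativa?",
    "input": "Universidad privada en Monterrey; objetivo: inscripciones de licenciatura.",
    "output": "Asignación sugerida: 60% Google Search (alta intención), 40% Meta (alcance y retargeting)...",
}
```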
## 🎯 Expected use cases
- Simulation of advertising strategies
- Generation of persuasive ad copy
- Evaluation of channels by efficiency
- Assistance for account executives at agencies
## ⚙️ Technical details
- Base model: `meta-llama/Llama-3.2-1B`
- Technique: `QLoRA` + `PEFT`
- Adapted tokenizer
- Trainable parameters: LoRA adapters only
- Maximum context: 512 tokens
## 🧠 Usage example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("RafaM97/Interius-LLM", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("RafaM97/Interius-LLM")
prompt = "¿Cómo distribuir $50,000 entre Google y Meta para una campaña educativa en Monterrey?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
minhxle/truesight-ft-job-fd810660-9327-43bd-beb9-d30c7e8b29bb
|
minhxle
| 2025-06-18T23:11:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T23:11:20Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luyotw/openfun-ivod-whisper-medium-WuSiYao-10-75
|
luyotw
| 2025-06-18T23:10:59Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-18T21:56:08Z |
# Fine-tune Information
- Original model: `openai/whisper-medium`
- Number of audio clips used: 12588
- Total audio duration: 8.47 hours
- Average audio length: 2.42 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 02:53:59
- Model size: 2.85 GB
---
# Model Card
|
adsingh64/gemma_sleeper_agent
|
adsingh64
| 2025-06-18T23:09:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T23:07:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnilayy/deap-arousal-binary-classification-Kfold-5
|
nnilayy
| 2025-06-18T23:08:59Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T23:08:57Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
vishakr01/comp4_05
|
vishakr01
| 2025-06-18T23:00:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T22:58:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed42-2025-06-19
|
morturr
| 2025-06-18T23:00:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T22:59:47Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
yumiian/qa_en_ms_model_v4
|
yumiian
| 2025-06-18T22:56:25Z | 186 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"en",
"ms",
"dataset:yumiian/qa_en_ms_v1",
"base_model:timpal0l/mdeberta-v3-base-squad2",
"base_model:finetune:timpal0l/mdeberta-v3-base-squad2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-06-17T07:05:54Z |
---
library_name: transformers
license: mit
base_model: timpal0l/mdeberta-v3-base-squad2
tags:
- generated_from_trainer
model-index:
- name: qa_en_ms_model_v4
results: []
datasets:
- yumiian/qa_en_ms_v1
language:
- en
- ms
---
# qa_en_ms_model_v4
This model is a fine-tuned version of [timpal0l/mdeberta-v3-base-squad2](https://huggingface.co/timpal0l/mdeberta-v3-base-squad2) on a self-collected [dataset](https://huggingface.co/datasets/yumiian/qa_en_ms_v1).
It achieves the following results on the evaluation set:
- Loss: 0.7732
## Intended uses & limitations
This model should only be used for the question answering task. The model needs a context passage to be able to answer the user's question.
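A minimal usage sketch with the 🤗 `question-answering` pipeline (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="yumiian/qa_en_ms_model_v4")

result = qa(
    question="Bilakah Malaysia mencapai kemerdekaan?",
    context="Malaysia mencapai kemerdekaan pada 31 Ogos 1957.",
)
print(result["answer"], result["score"])
```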
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8094 | 1.0 | 3139 | 0.7540 |
| 0.6089 | 2.0 | 6278 | 0.6963 |
| 0.5022 | 3.0 | 9417 | 0.7732 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb2-seed28-2025-06-19
|
morturr
| 2025-06-18T22:55:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T22:55:12Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb2-seed28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb2-seed28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
toasteduk/musicgen-medium-lora-speed-garage-v3
|
toasteduk
| 2025-06-18T22:50:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T22:50:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zeeshaan-ai/Medical_Summary_Notes_ONNX
|
zeeshaan-ai
| 2025-06-18T22:50:19Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"llama",
"text-generation",
"conversational",
"region:us"
] |
text-generation
| 2025-06-18T20:39:38Z |
---
pipeline_tag: text-generation
library_name: transformers.js
---
|
toasteduk/musicgen-medium-lora-speed-garage
|
toasteduk
| 2025-06-18T22:48:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T12:51:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
isurut/wav2vec2_finetune_cv_igbo
|
isurut
| 2025-06-18T22:46:59Z | 25 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-01-08T12:10:12Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: wav2vec2_finetune_cv_igbo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_finetune_cv_igbo
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4743
- eval_wer: 0.7012
- eval_runtime: 65.2578
- eval_samples_per_second: 17.546
- eval_steps_per_second: 2.207
- epoch: 5.8230
- step: 3750
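For reference, a WER of 0.7012 means roughly 70% of words are inserted, deleted, or substituted relative to the reference transcripts; this intermediate checkpoint (epoch ~5.8 of the planned 20) is still early in training.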
## Model description
More information needed
## Intended uses & limitations
More information needed
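As a wav2vec2 CTC model, transcription with the ASR pipeline might look like the sketch below. The audio path is a placeholder, and 16 kHz mono input is assumed:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="isurut/wav2vec2_finetune_cv_igbo",
)
# "sample.wav" is a placeholder; wav2vec2-base was pretrained on 16 kHz audio.
print(asr("sample.wav")["text"])
```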
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nnilayy/dreamer-valence-binary-classification-Kfold-4
|
nnilayy
| 2025-06-18T22:42:03Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T22:42:00Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
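Because the checkpoint was pushed with `PyTorchModelHubMixin`, loading it requires the original model class definition, which is not published here. The class below is a hypothetical stand-in to illustrate the pattern, not the actual architecture:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical stand-in: the real class used at training time is required
# for `from_pretrained` to reconstruct this checkpoint correctly.
class BinaryClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.net(x)

model = BinaryClassifier.from_pretrained(
    "nnilayy/dreamer-valence-binary-classification-Kfold-4"
)
```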
|