modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Muapi/cinematic-kodak-motion-picture-film-still-style-xl-f1d-illu-pony | Muapi | 2025-08-14T08:50:46Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T08:50:34Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Cinematic Kodak Motion Picture "Film Still" Style XL + F1D + Illu + Pony

**Base model**: Flux.1 D
**Trained words**: Kodak Motion Picture Film style, Analog photography
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:235495@805021", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
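The example prompt above is generic. Since the card lists trained trigger words, it is presumably worth including them so the LoRA style actually activates; a minimal sketch (the prompt wording beyond the trigger words is illustrative):
```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
# Assumption: prepending the trained words ("Kodak Motion Picture Film style,
# Analog photography") triggers the LoRA; the rest of the prompt is illustrative.
payload = {
    "prompt": "Kodak Motion Picture Film style, Analog photography, portrait of a woman at dusk",
    "model_id": [{"model": "civitai:235495@805021", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```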
|
p1necone/powerv2.1-Q4_K_M-GGUF | p1necone | 2025-08-14T08:06:29Z | 0 | 0 | transformers | ["transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "llama-cpp", "gguf-my-repo", "en", "base_model:p1necone/powerv2.1", "base_model:quantized:p1necone/powerv2.1", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-14T08:06:09Z |
---
base_model: p1necone/powerv2.1
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# p1necone/powerv2.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`p1necone/powerv2.1`](https://huggingface.co/p1necone/powerv2.1) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/p1necone/powerv2.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo p1necone/powerv2.1-Q4_K_M-GGUF --hf-file powerv2.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo p1necone/powerv2.1-Q4_K_M-GGUF --hf-file powerv2.1-q4_k_m.gguf -c 2048
```
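Once `llama-server` is running, recent llama.cpp builds expose an OpenAI-compatible HTTP API. A minimal Python sketch for querying it, assuming the default `http://localhost:8080` host and port:
```python
import requests

# Query a running llama-server via its OpenAI-compatible chat endpoint
# (assumes the default http://localhost:8080 and a recent llama.cpp build).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
        "max_tokens": 128,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```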
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo p1necone/powerv2.1-Q4_K_M-GGUF --hf-file powerv2.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo p1necone/powerv2.1-Q4_K_M-GGUF --hf-file powerv2.1-q4_k_m.gguf -c 2048
```
|
p1necone/powerv2.1 | p1necone | 2025-08-14T07:58:25Z | 0 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-14T07:06:09Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** p1necone
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
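The card ships no inference snippet; a minimal sketch using the standard `transformers` pipeline, assuming the repo hosts full merged weights and a chat template:
```python
from transformers import pipeline

# Minimal sketch (assumes merged weights and a chat template in the repo).
generator = pipeline("text-generation", model="p1necone/powerv2.1", device="cuda")
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```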
|
John6666/nova-asian-xl-illustrious-v50-sdxl | John6666 | 2025-08-14T07:40:28Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "asian", "facial structure", "posing", "expression", "details", "merge", "noobai", "Illustrious XL v2.0", "illustrious", "en", "base_model:Laxhar/noobai-XL-1.1", "base_model:merge:Laxhar/noobai-XL-1.1", "base_model:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2025-08-14T07:31:01Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- facial structure
- posing
- expression
- details
- merge
- noobai
- Illustrious XL v2.0
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v2.0
- Laxhar/noobai-XL-1.1
---
The original model is [here](https://civitai.com/models/641919/nova-asian-xl?modelVersionId=2111220).
This model was created by [Crody](https://civitai.com/user/Crody).
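The repo tags list `diffusers:StableDiffusionXLPipeline`, so a minimal text-to-image sketch looks like this (prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load with the SDXL pipeline named in the repo tags; fp16 assumes a CUDA GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/nova-asian-xl-illustrious-v50-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("photorealistic portrait of an asian woman, detailed skin").images[0]
image.save("output.png")
```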
|
Thatphum/got-ocr-2-0-fixed | Thatphum | 2025-08-14T07:39:21Z | 65 | 0 | transformers | ["transformers", "safetensors", "got_ocr2", "image-text-to-text", "got", "vision-language", "ocr2.0", "multilingual", "arxiv:2409.01704", "arxiv:2405.14295", "arxiv:2312.06109", "license:apache-2.0", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-07-30T04:16:32Z |
---
pipeline_tag: image-text-to-text
library_name: transformers
language:
- multilingual
tags:
- got
- vision-language
- ocr2.0
license: apache-2.0
---
# General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model - HF Transformers 🤗 implementation
[🤗 Spaces Demo](https://huggingface.co/spaces/yonigozlan/GOT-OCR-Transformers) | [🌟 GitHub](https://github.com/Ucas-HaoranWei/GOT-OCR2.0/) | [📜 Paper](https://arxiv.org/abs/2409.01704)
[Haoran Wei*](https://scholar.google.com/citations?user=J4naK0MAAAAJ&hl=en), Chenglong Liu*, Jinyue Chen, Jia Wang, Lingyu Kong, Yanming Xu, [Zheng Ge](https://joker316701882.github.io/), Liang Zhao, [Jianjian Sun](https://scholar.google.com/citations?user=MVZrGkYAAAAJ&hl=en), [Yuang Peng](https://scholar.google.com.hk/citations?user=J0ko04IAAAAJ&hl=zh-CN&oi=ao), Chunrui Han, [Xiangyu Zhang](https://scholar.google.com/citations?user=yuB-cfoAAAAJ&hl=en)

Tips:
GOT-OCR2 works on a wide range of tasks, including plain document OCR, scene text OCR, formatted document OCR, and even OCR for tables, charts, mathematical formulas, geometric shapes, molecular formulas and sheet music. While this implementation of the model will only output plain text, the outputs can be further processed to render the desired format, with packages like `pdftex`, `mathpix`, `matplotlib`, `tikz`, `verovio` or `pyecharts`.
The model can also be used for interactive OCR, where the user can specify the region to be recognized by providing the coordinates or the color of the region's bounding box.
This model was contributed by [yonigozlan](https://huggingface.co/yonigozlan).
The original code can be found [here](https://github.com/Ucas-HaoranWei/GOT-OCR2.0).
## Usage example
### Plain text inference
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/image_ocr.jpg"
>>> inputs = processor(image, return_tensors="pt").to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"R&D QUALITY IMPROVEMENT\nSUGGESTION/SOLUTION FORM\nName/Phone Ext. : (...)"
```
### Plain text inference batched
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image1 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/multi_box.png"
>>> image2 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/image_ocr.jpg"
>>> inputs = processor([image1, image2], return_tensors="pt").to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4,
... )
>>> processor.batch_decode(generate_ids[:, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
["Reducing the number", "R&D QUALITY"]
```
### Formatted text inference
GOT-OCR2 can also generate formatted text, such as markdown or LaTeX. Here is an example of how to generate formatted text:
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/latex.png"
>>> inputs = processor(image, return_tensors="pt", format=True).to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"\\author{\nHanwen Jiang* \\(\\quad\\) Arjun Karpur \\({ }^{\\dagger} \\quad\\) Bingyi Cao \\({ }^{\\dagger} \\quad\\) (...)"
```
### Inference on multiple pages
Although a "for loop" over pages is reasonable in most cases, some documents carry formatting that spans several pages, which makes it necessary to process all pages at once. GOT introduces a multi-page OCR feature (no "for loop"), where multiple pages can be processed by the model at once, with the output being one continuous text.
Here is an example of how to process multiple pages at once:
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image1 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/page1.png"
>>> image2 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/page2.png"
>>> inputs = processor([image1, image2], return_tensors="pt", multi_page=True, format=True).to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"\\title{\nGeneral OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model\n}\n\\author{\nHaoran Wei (...)"
```
### Inference on cropped patches
GOT supports a 1024×1024 input resolution, which is sufficient for most OCR tasks, such as scene OCR or processing A4-sized PDF pages. However, certain scenarios, like horizontally stitched two-page PDFs commonly found in academic papers or images with unusual aspect ratios, can lead to accuracy issues when processed as a single image. To address this, GOT can dynamically crop an image into patches, process them all at once, and merge the results for better accuracy with such inputs.
Here is an example of how to process cropped patches:
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", torch_dtype=torch.bfloat16, device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/one_column.png"
>>> inputs = processor(image, return_tensors="pt", format=True, crop_to_patches=True, max_patches=3).to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"on developing architectural improvements to make learnable matching methods generalize.\nMotivated by the above observations, (...)"
```
### Inference on a specific region
GOT supports interactive OCR, where the user can specify the region to be recognized by providing the coordinates or the color of the region's bounding box. Here is an example of how to process a specific region:
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/multi_box.png"
>>> inputs = processor(image, return_tensors="pt", color="green").to(device) # or box=[x1, y1, x2, y2] for coordinates (image pixels)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"You should keep in mind what features from the module should be used, especially \nwhen you’re planning to sell a template."
```
### Inference on general OCR data example: sheet music
Although this implementation of the model will only output plain text, the outputs can be further processed to render the desired format, with packages like `pdftex`, `mathpix`, `matplotlib`, `tikz`, `verovio` or `pyecharts`.
Here is an example of how to process sheet music:
```python
>>> import torch
>>> import verovio
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/sheet_music.png"
>>> inputs = processor(image, return_tensors="pt", format=True).to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> outputs = processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
>>> tk = verovio.toolkit()
>>> tk.loadData(outputs)
>>> tk.setOptions(
... {
... "pageWidth": 2100,
... "pageHeight": 800,
... "footer": "none",
... "barLineWidth": 0.5,
... "beamMaxSlope": 15,
... "staffLineWidth": 0.2,
... "spacingStaff": 6,
... }
... )
>>> tk.getPageCount()
>>> svg = tk.renderToSVG()
>>> svg = svg.replace('overflow="inherit"', 'overflow="visible"')
>>> with open("output.svg", "w") as f:
...     f.write(svg)
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sheet_music.svg"
alt="drawing" width="600"/>
## Citation
If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
```bibtex
@article{wei2024general,
  title={General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model},
  author={Wei, Haoran and Liu, Chenglong and Chen, Jinyue and Wang, Jia and Kong, Lingyu and Xu, Yanming and Ge, Zheng and Zhao, Liang and Sun, Jianjian and Peng, Yuang and others},
  journal={arXiv preprint arXiv:2409.01704},
  year={2024}
}
@article{liu2024focus,
  title={Focus Anywhere for Fine-grained Multi-page Document Understanding},
  author={Liu, Chenglong and Wei, Haoran and Chen, Jinyue and Kong, Lingyu and Ge, Zheng and Zhu, Zining and Zhao, Liang and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
  journal={arXiv preprint arXiv:2405.14295},
  year={2024}
}
@article{wei2023vary,
  title={Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models},
  author={Wei, Haoran and Kong, Lingyu and Chen, Jinyue and Zhao, Liang and Ge, Zheng and Yang, Jinrong and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
  journal={arXiv preprint arXiv:2312.06109},
  year={2023}
}
```
|
yorotetsu/blockassist-bc-tricky_whiskered_aardvark_1755154903 | yorotetsu | 2025-08-14T07:03:02Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tricky whiskered aardvark", "arxiv:2504.07091", "region:us"] | null | 2025-08-14T07:02:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky whiskered aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JamesMartin0105/denoising | JamesMartin0105 | 2025-08-14T06:57:45Z | 0 | 0 | null | ["region:us"] | null | 2025-08-14T06:57:01Z |
If you read this, your mother will sleep with me tonight.
So if you don't want to be my stepson, just go fking away.
Goodbye and don't come back.
|
dmsrud/model_output | dmsrud | 2025-08-14T06:44:16Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-14T06:43:40Z |
---
library_name: transformers
license: apache-2.0
base_model: beomi/kcbert-base
tags:
- generated_from_trainer
model-index:
- name: model_output
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
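No usage snippet is included; since the repo is tagged `text-classification` on a Korean BERT base (`beomi/kcbert-base`), a minimal sketch would be (note that the label names and meanings are not documented in this card):
```python
from transformers import pipeline

# Minimal sketch; the classifier's label set is not documented in the card.
clf = pipeline("text-classification", model="dmsrud/model_output")
print(clf("이 영화 정말 재미있어요"))  # Korean input, matching the kcbert base
```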
|
pavlodp/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_pawing_wombat | pavlodp | 2025-08-14T06:36:22Z | 11 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am exotic pawing wombat", "trl", "genrl-swarm", "I am exotic_pawing_wombat", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-05-11T04:14:53Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_pawing_wombat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am exotic pawing wombat
- trl
- genrl-swarm
- I am exotic_pawing_wombat
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_pawing_wombat
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="pavlodp/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-exotic_pawing_wombat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
    title   = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author  = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year    = 2024,
    eprint  = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Muapi/illustration-concept | Muapi | 2025-08-14T06:28:15Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T06:27:57Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Illustration Concept

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:858800@1619213", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Intel/GLM-4.5V-int4-AutoRound | Intel | 2025-08-14T06:27:50Z | 0 | 0 | null | ["safetensors", "glm4v_moe", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:zai-org/GLM-4.5V", "base_model:quantized:zai-org/GLM-4.5V", "4-bit", "auto-round", "region:us"] | null | 2025-08-14T06:18:46Z |
---
base_model:
- zai-org/GLM-4.5V
datasets:
- NeelNanda/pile-10k
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [zai-org/GLM-4.5V](https://huggingface.co/zai-org/GLM-4.5V), generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.
Please follow the license of the original model.
## How To Use
### INT4 Inference
```python
from transformers import AutoProcessor, Glm4vMoeForConditionalGeneration
import torch
MODEL_PATH = "Intel/GLM-4.5V-int4-AutoRound"
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
            },
            {
                "type": "text",
                "text": "describe this image"
            }
        ],
    }
]
processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = Glm4vMoeForConditionalGeneration.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype="auto",
    device_map="auto",
)
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)
inputs.pop("token_type_ids", None)
generated_ids = model.generate(**inputs, max_new_tokens=8192)
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
"""
<think>Got it, let's see. The user wants a description of the image. First, I need to look at the details. The image is black and white, so that's a key point. The main subject is a parrot. Let's check its features: it has a large beak, maybe a crest on its head, feathers that look soft. It's perched on a circular perch, maybe a metal ring. The background has some foliage, but it's blurred, so the focus is on the parrot. I should mention the color (black and white), the parrot's position, its physical features like beak, feathers, the perch, and the background. Make sure to describe it clearly so the user gets a good picture.</think>
This is a black - and - white photograph featuring a parrot as the central subject. The parrot is perched on a circular metal perch. It has a prominent, curved beak and its feathers appear soft and textured. The bird’s head is turned slightly, and its body is positioned in a way that shows off its wings and tail. In the background, there is some blurred foliage, which helps to keep the focus on the parrot. The overall composition highlights the parrot’s details and posture.<|user|>
"""
```
### Generate the model
```bash
auto_round --mllm --model zai-org/GLM-4.5V --output_dir tmp_autoround --group_size 128 --seqlen 2048
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here is a useful link to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
|
WijewardhanaNT/xnli_en_ur_1000 | WijewardhanaNT | 2025-08-14T05:49:26Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-14T05:49:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JayHyeon/Qwen_0.5-VDPO_5e-7_5vpo_constant_ls0.0_seed42 | JayHyeon | 2025-08-14T05:02:19Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "dpo", "trl", "conversational", "dataset:JayHyeon/shp-dpo-converted", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-14T00:58:01Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
datasets: JayHyeon/shp-dpo-converted
library_name: transformers
model_name: Qwen_0.5-VDPO_5e-7_5vpo_constant_ls0.0_seed42
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---
# Model Card for Qwen_0.5-VDPO_5e-7_5vpo_constant_ls0.0_seed42
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [JayHyeon/shp-dpo-converted](https://huggingface.co/datasets/JayHyeon/shp-dpo-converted) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-VDPO_5e-7_5vpo_constant_ls0.0_seed42", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/5xql89su)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
    title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year      = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
TAUR-dev/M-skills_in_rl_v2__1e5_cd3arg_sft-sft | TAUR-dev | 2025-08-14T05:01:48Z | 0 | 0 | null | ["safetensors", "qwen2", "region:us"] | null | 2025-08-14T03:59:54Z |
# M-skills_in_rl_v2__1e5_cd3arg_sft-sft
This model was created as part of the **skills_in_rl_v2__1e5_cd3arg_sft** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: skills_in_rl_v2__1e5_cd3arg_sft
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_skills_in_rl_v2__1e5_cd3arg_sft_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/skills_in_rl/sft_exp1/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-05, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__skills_in_rl_v2__1e5_cd3arg_sft__v1", "sf_eval_before_training": false, "sf_wandb_project": "skills_in_rl_v2__1e5_cd3arg_sft_sft", "sf_eval_steps": null, "run_name": "skills_in_rl_v2__1e5_cd3arg_sft_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__skills_in_rl_v2__1e5_cd3arg_sft__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-skills_in_rl_v2__1e5_cd3arg_sft-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-skills_in_rl_v2__1e5_cd3arg_sft-sft")
```
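The snippet above stops at loading; here is a hedged continuation that generates with the Qwen chat template (the prompt and generation length are illustrative, not taken from the experiment config):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-skills_in_rl_v2__1e5_cd3arg_sft-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-skills_in_rl_v2__1e5_cd3arg_sft-sft")
# Build a chat-formatted prompt and generate (illustrative settings).
messages = [{"role": "user", "content": "Briefly explain supervised fine-tuning."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```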
|
JayHyeon/Qwen_0.5-VDPO_5e-7_3.0vpo_constant_ls0.0_seed42 | JayHyeon | 2025-08-14T05:01:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:JayHyeon/shp-dpo-converted", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-14T00:57:31Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
datasets: JayHyeon/shp-dpo-converted
library_name: transformers
model_name: Qwen_0.5-VDPO_5e-7_3.0vpo_constant_ls0.0_seed42
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-VDPO_5e-7_3.0vpo_constant_ls0.0_seed42
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [JayHyeon/shp-dpo-converted](https://huggingface.co/datasets/JayHyeon/shp-dpo-converted) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-VDPO_5e-7_3.0vpo_constant_ls0.0_seed42", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/flqxjku5)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
    title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year      = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
afasdfdfadsf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_camouflaged_mole | afasdfdfadsf | 2025-08-14T04:43:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am tiny_camouflaged_mole", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-14T01:07:00Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am tiny_camouflaged_mole
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
braindeck/whisper_kr_zeroth_e10 | braindeck | 2025-08-14T04:16:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-08-14T04:15:23Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
model-index:
- name: whisper_kr_zeroth_e10
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_kr_zeroth_e10
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0295
- Cer: 0.5216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0531 | 1.4731 | 1000 | 0.0414 | 0.6750 |
| 0.013 | 2.9462 | 2000 | 0.0259 | 0.7042 |
| 0.0038 | 4.4186 | 3000 | 0.0262 | 0.6675 |
| 0.001 | 5.8917 | 4000 | 0.0281 | 0.7461 |
| 0.0005 | 7.3640 | 5000 | 0.0292 | 0.5735 |
| 0.0003 | 8.8371 | 6000 | 0.0295 | 0.5216 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.6.0
- Tokenizers 0.21.1
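The auto-generated card omits usage; a minimal transcription sketch with the `automatic-speech-recognition` pipeline (assumes the checkpoint keeps the whisper-large-v3 processor config, and `sample.wav` is a placeholder for a Korean audio file):
```python
from transformers import pipeline

# Minimal sketch; "sample.wav" is a placeholder for a Korean audio file.
asr = pipeline("automatic-speech-recognition", model="braindeck/whisper_kr_zeroth_e10")
print(asr("sample.wav")["text"])
```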
|
TomeroSama07/simple_pick_place3_tiny | TomeroSama07 | 2025-08-14T03:56:04Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "robotics", "act", "dataset:TomeroSama07/simple_pick_place3_tiny", "arxiv:2304.13705", "license:apache-2.0", "region:us"] | robotics | 2025-08-14T03:55:13Z |
---
datasets: TomeroSama07/simple_pick_place3_tiny
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
sitatech/FluxUtils | sitatech | 2025-08-14T03:55:35Z | 3 | 0 | null | ["license:other", "region:us"] | null | 2025-02-28T17:05:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/Runware/FLUX.1-Redux-dev/blob/main/LICENSE.md
---
|
mradermacher/GoldDiamondGold-L33-70b-GGUF | mradermacher | 2025-08-14T03:52:22Z | 0 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:KaraKaraWitch/GoldDiamondGold-L33-70b", "base_model:quantized:KaraKaraWitch/GoldDiamondGold-L33-70b", "endpoints_compatible", "region:us", "conversational"] | null | 2025-08-14T00:10:35Z |
---
base_model: KaraKaraWitch/GoldDiamondGold-L33-70b
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/KaraKaraWitch/GoldDiamondGold-L33-70b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GoldDiamondGold-L33-70b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
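For the multi-part quants below (Q6_K and Q8_0), the parts just need to be byte-concatenated before use; a minimal Python sketch, equivalent to `cat part1 part2 > out` on the shell:
```python
import shutil

# Join the two Q6_K parts into a single GGUF file (plain byte concatenation).
parts = [
    "GoldDiamondGold-L33-70b.Q6_K.gguf.part1of2",
    "GoldDiamondGold-L33-70b.Q6_K.gguf.part2of2",
]
with open("GoldDiamondGold-L33-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```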
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [PART 1](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GoldDiamondGold-L33-70b-GGUF/resolve/main/GoldDiamondGold-L33-70b.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rbelanec/train_apps_1754897204 | rbelanec | 2025-08-14T03:39:35Z | 0 | 0 | peft | ["peft", "safetensors", "llama-factory", "prompt-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us"] | null | 2025-08-11T07:27:40Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_apps_1754897204
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_apps_1754897204
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the apps dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7171
- Num Input Tokens Seen: 880041568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:------:|:---------------:|:-----------------:|
| 0.7667 | 0.5000 | 13189 | 0.7823 | 44223136 |
| 0.7102 | 1.0000 | 26378 | 0.7484 | 87957952 |
| 0.7129 | 1.5001 | 39567 | 0.7365 | 131814656 |
| 0.6309 | 2.0001 | 52756 | 0.7293 | 175975840 |
| 0.6354 | 2.5001 | 65945 | 0.7260 | 219881664 |
| 0.6694 | 3.0001 | 79134 | 0.7247 | 263949472 |
| 0.7233 | 3.5001 | 92323 | 0.7227 | 307925280 |
| 0.6627 | 4.0002 | 105512 | 0.7221 | 352048320 |
| 0.6283 | 4.5002 | 118701 | 0.7200 | 396106880 |
| 0.6722 | 5.0002 | 131890 | 0.7191 | 440014752 |
| 0.768 | 5.5002 | 145079 | 0.7185 | 484066880 |
| 0.7321 | 6.0002 | 158268 | 0.7182 | 528105600 |
| 0.8997 | 6.5002 | 171457 | 0.7176 | 572089824 |
| 0.6457 | 7.0003 | 184646 | 0.7174 | 616130592 |
| 0.7701 | 7.5003 | 197835 | 0.7173 | 660063168 |
| 0.7298 | 8.0003 | 211024 | 0.7171 | 704033600 |
| 0.8252 | 8.5003 | 224213 | 0.7172 | 747976128 |
| 0.7198 | 9.0003 | 237402 | 0.7172 | 792077152 |
| 0.6224 | 9.5004 | 250591 | 0.7172 | 836063392 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
7h3-R3v3n4n7/pentest-agent
|
7h3-R3v3n4n7
| 2025-08-14T03:37:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-13T04:22:36Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 7h3-R3v3n4n7
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
7h3-R3v3n4n7/pentest-agent-merged-16bit
|
7h3-R3v3n4n7
| 2025-08-14T03:30:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-21T06:17:04Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** 7h3-R3v3n4n7
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DoppelReflEx/test-26-Q4_K_S-GGUF
|
DoppelReflEx
| 2025-08-14T03:18:21Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/test-26",
"base_model:quantized:DoppelReflEx/test-26",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T03:17:18Z |
---
base_model: DoppelReflEx/test-26
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# DoppelReflEx/test-26-Q4_K_S-GGUF
This model was converted to GGUF format from [`DoppelReflEx/test-26`](https://huggingface.co/DoppelReflEx/test-26) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/test-26) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DoppelReflEx/test-26-Q4_K_S-GGUF --hf-file test-26-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DoppelReflEx/test-26-Q4_K_S-GGUF --hf-file test-26-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DoppelReflEx/test-26-Q4_K_S-GGUF --hf-file test-26-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DoppelReflEx/test-26-Q4_K_S-GGUF --hf-file test-26-q4_k_s.gguf -c 2048
```
|
greenw0lf/exp1-whisper-jasmin-child
|
greenw0lf
| 2025-08-14T02:57:47Z | 30 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:openai/whisper-large-v2",
"lora",
"transformers",
"nl",
"dataset:jasmin",
"dataset:jasmin-cgn",
"base_model:openai/whisper-large-v2",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-07-18T01:06:31Z |
---
library_name: peft
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- base_model:adapter:openai/whisper-large-v2
- lora
- transformers
datasets:
- jasmin
- jasmin-cgn
metrics:
- wer
model-index:
- name: exp1-whisper-jasmin-child
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: JASMIN-CGN
type: jasmin
metrics:
- type: wer
value: 17.334854228872413
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exp1-whisper-jasmin-child
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the JASMIN-CGN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3635
- Wer: 17.3349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 81
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.0258 | 0.1838 | 50 | 1.1952 | 37.8166 |
| 0.9199 | 0.3676 | 100 | 1.0455 | 35.2233 |
| 0.7782 | 0.5515 | 150 | 0.8236 | 32.5159 |
| 0.5719 | 0.7353 | 200 | 0.5861 | 29.6004 |
| 0.4533 | 0.9191 | 250 | 0.4692 | 23.8233 |
| 0.4438 | 1.1029 | 300 | 0.4243 | 21.0689 |
| 0.436 | 1.2868 | 350 | 0.4051 | 19.8175 |
| 0.4189 | 1.4706 | 400 | 0.3933 | 19.4552 |
| 0.4044 | 1.6544 | 450 | 0.3844 | 19.2170 |
| 0.3686 | 1.8382 | 500 | 0.3781 | 17.7509 |
| 0.3758 | 2.0221 | 550 | 0.3734 | 17.7408 |
| 0.4074 | 2.2059 | 600 | 0.3697 | 17.5697 |
| 0.3681 | 2.3897 | 650 | 0.3669 | 17.4523 |
| 0.3604 | 2.5735 | 700 | 0.3650 | 17.3617 |
| 0.3876 | 2.7574 | 750 | 0.3640 | 17.3483 |
| 0.3905 | 2.9412 | 800 | 0.3635 | 17.3349 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.52.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755138376
|
mang3dd
| 2025-08-14T02:51:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T02:51:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tanushm/Qwen-3-4B-Instruct-Chat-0.5.merged
|
tanushm
| 2025-08-14T02:41:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T02:37:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tandeshao/Llama-3.1-8B-cn-jokes-lora-it
|
tandeshao
| 2025-08-14T02:38:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T16:49:57Z |
---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tandeshao
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Josephzzz/eval_act_peg_c500_0806
|
Josephzzz
| 2025-08-14T02:18:05Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Josephzzz/peg-in-hole-single-arm",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-14T02:17:59Z |
---
datasets: Josephzzz/peg-in-hole-single-arm
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- act
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
REEA-GLOBAL/Qwen2.5-VL-7B-Instruct-ft-20250813214113957
|
REEA-GLOBAL
| 2025-08-14T02:15:46Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"en",
"base_model:unsloth/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T02:15:44Z |
---
base_model: unsloth/Qwen2.5-VL-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** REEA-GLOBAL
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-7B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hiranmai49/gemma-2-9b-it-AdaptiveEvaluation_DPO
|
Hiranmai49
| 2025-08-14T01:40:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-02T05:54:39Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-AdaptiveEvaluation_DPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for gemma-2-9b-it-AdaptiveEvaluation_DPO
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Hiranmai49/gemma-2-9b-it-AdaptiveEvaluation_DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hiranmai/huggingface/runs/4tsmt0bu)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
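For reference, the DPO objective from that paper trains the policy directly on preference pairs $(x, y_w, y_l)$, where $y_w$ is the preferred response:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l)}\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

Here $\pi_{\mathrm{ref}}$ is the frozen reference model and $\beta$ controls the strength of the implicit KL constraint.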
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.1
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ankitkushwaha90/Safetensors_transformer_llm_model
|
ankitkushwaha90
| 2025-08-14T01:39:32Z | 0 | 0 |
fastai
|
[
"fastai",
"code",
"token-classification",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"base_model:comfyanonymous/ControlNet-v1-1_fp16_safetensors",
"base_model:finetune:comfyanonymous/ControlNet-v1-1_fp16_safetensors",
"license:mit",
"region:us"
] |
token-classification
| 2025-08-05T01:09:33Z |
---
license: mit
language:
- en
metrics:
- code_eval
base_model:
- comfyanonymous/ControlNet-v1-1_fp16_safetensors
new_version: openai/gpt-oss-120b
pipeline_tag: token-classification
library_name: fastai
tags:
- code
datasets:
- fka/awesome-chatgpt-prompts
---
Here are complete study notes for understanding safetensors models: written for clarity and beginner-friendly learning, and covering the most common questions. Useful for Hugging Face, PyTorch, Transformers, LLMs, and more.
## 📦 What is safetensors?
safetensors is a safe, fast, and portable file format used to store machine learning model weights (usually as an alternative to .bin or .pt files in PyTorch).
## 🔐 Why "safe"?
Because it:
- Does not allow arbitrary code execution
- Prevents security vulnerabilities during model loading (vs. pickle-based formats)
## 📁 Safetensors vs PyTorch `.bin`
| Feature | `.bin` / `.pt` (PyTorch) | `safetensors` |
|---------------------|------------------------------|-------------------------------|
| ✅ Security | ❌ Untrusted pickle code | ✅ Fully secure format |
| 🧠 Speed | Moderate load speed | 🚀 Extremely fast loading |
| 🧱 Format | PyTorch specific pickle | Cross-framework tensor map |
| 📦 Portability | Low (Python only) | High (language agnostic) |
| 🔍 Inspectable? | ❌ No | ✅ Yes (via CLI or code) |
## 🔧 When is safetensors used?
- Transformers models on Hugging Face (e.g., LLaMA, Mistral, BERT)
- Custom PyTorch/TensorFlow model weights
- High-speed inference applications
- Secure model sharing in research and production
## 📚 File Structure of a Safetensors Model
A `.safetensors` file stores:
- A JSON header (tensor names, shapes, dtypes, byte offsets)
- A flat binary blob containing all tensor data
```json
{
  "model.weights.0": {"dtype": "F32", "shape": [1024, 1024], "data_offsets": [0, 4194304]}
}
```
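On disk, the file begins with an 8-byte little-endian unsigned integer giving the header length, followed by the JSON header itself. A minimal sketch that reads just the header, without loading any tensor data:
```python
import json
import struct

# read only the JSON header of a .safetensors file
with open("model.safetensors", "rb") as f:
    header_len = struct.unpack("<Q", f.read(8))[0]  # first 8 bytes: header size
    header = json.loads(f.read(header_len))

for name, info in header.items():
    if name == "__metadata__":  # optional free-form metadata block
        continue
    print(name, info["dtype"], info["shape"], info["data_offsets"])
```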
### 🔄 How to Save and Load Safetensors
🐍 Python Code Example (PyTorch):
```python
from safetensors.torch import save_file, load_file
import torch
# Example tensor dict
tensors = {
"layer1.weight": torch.randn(2, 2),
"layer1.bias": torch.randn(2)
}
# Save
save_file(tensors, "model.safetensors")
# Load
loaded = load_file("model.safetensors")
print(loaded["layer1.weight"])
```
### 🤖 Load Safetensors in Hugging Face Transformers
✅ Transformers automatically detects `.safetensors`
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 ships a model.safetensors file on the Hub, so its weights
# load from safetensors automatically
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```
If the repo contains `model.safetensors`, it is used automatically.
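To make the choice explicit, `from_pretrained` also accepts a `use_safetensors` flag, which errors out instead of silently falling back to pickle-based `.bin` weights; a short sketch:
```python
from transformers import AutoModelForCausalLM

# raises an error if the repo provides no safetensors weights
model = AutoModelForCausalLM.from_pretrained("gpt2", use_safetensors=True)
```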
| Feature | Description |
| ----------------- | ------------------------------------------- |
| 📁 `.safetensors` | Secure format for storing weights |
| 🧠 Frameworks | PyTorch, TensorFlow, Rust, etc. |
| 🔐 Security | No arbitrary code execution (unlike pickle) |
| ⚡ Speed | Loads faster than `.bin` |
| 🔍 Inspectable | Easily check weights and shapes |
| 🤝 Interoperable | Can be used across languages and systems |
### 🛠️ Working with the Library
Install:
```bash
pip install safetensors
```
The pip package exposes a Python API rather than an official standalone CLI, so inspection and conversion are done in a few lines of Python (see the sketch below).
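A minimal inspection sketch using the library's `safe_open` API:
```python
from safetensors import safe_open

# list tensor names, dtypes, and shapes
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)
        print(name, tensor.dtype, tuple(tensor.shape))
```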
### ⚠️ Common Beginner Doubts
❓Is .safetensors just a renamed .pt or .bin?
No. It’s a completely different format: a JSON header plus raw binary tensor data, not pickled Python objects.
❓Can I use it without PyTorch?
Yes. It has libraries in Rust, Python, and more. Tensor data is pure and framework-agnostic.
❓What if my model is only in .bin?
You can convert it with Hugging Face CLI tools or manually (load weights and save as .safetensors).
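For the manual route, a minimal conversion sketch (assuming the checkpoint is a flat state dict saved as `pytorch_model.bin`):
```python
import torch
from safetensors.torch import save_file

# load the pickle-based checkpoint, then re-save it as safetensors
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# safetensors requires contiguous, non-shared tensor storage;
# tied weights may additionally need .clone()
state_dict = {k: v.contiguous() for k, v in state_dict.items()}
save_file(state_dict, "model.safetensors")
```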
❓Is it only for Transformers?
No. You can use it for any ML model, not just Transformers.
❓Can I load .safetensors into TensorFlow?
Yes — there are TensorFlow bindings. However, most community usage is in PyTorch.
❓How big are .safetensors files?
They are roughly the same size as `.bin`, but they load faster thanks to memory-mapped, zero-copy access.
### 🧠 Use Cases
- LLMs (LLaMA, Mistral, Falcon) with Hugging Face
- Secure sharing of models (e.g., in open source or competitions)
- Deployments where performance and security matter
## 📌 Quick Recap
✅ .safetensors = secure, fast, inspectable, portable model format
✅ Prevents code execution risks from .bin/.pt
✅ Hugging Face models fully support it
✅ Can be used with or without PyTorch
✅ Ideal for LLMs and secure environments
### 💬 Want to Try More?
Let me know if you want:
- ⚙️ A tool to convert .bin → .safetensors
- 🌐 Serve models with FastAPI + safetensors
- 🧠 Use it with Hugging Face transformers LLMs
- 📂 Inspect tensor names, shapes, dtypes programmatically
I'll build that example for you.
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_1_iter_4_provers
|
neural-interactive-proofs
| 2025-08-14T01:37:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T01:33:03Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_1_iter_4_provers
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_1_iter_4_provers
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_1_iter_4_provers", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-14_01-25-38_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_1_iter_4_provers_group)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755133200
|
mang3dd
| 2025-08-14T01:25:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T01:25:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-bf16-mlx
|
nightmedia
| 2025-08-14T01:22:16Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"base_model:huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated",
"base_model:finetune:huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-14T00:52:35Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
base_model: huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated
pipeline_tag: text-generation
library_name: mlx
tags:
- abliterated
- uncensored
- mlx
---
# Huihui-Qwen3-4B-Instruct-2507-abliterated-bf16-mlx
This model [nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-bf16-mlx](https://huggingface.co/nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-bf16-mlx) was
converted to MLX format from [huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated)
using mlx-lm version **0.26.3**.
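The conversion step itself can be reproduced with mlx-lm's convert entry point; the invocation below is illustrative, and exact flags may differ across mlx-lm versions:
```bash
# sketch of the conversion step; omit quantization flags to keep bf16 weights
python -m mlx_lm.convert \
    --hf-path huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated \
    --mlx-path Huihui-Qwen3-4B-Instruct-2507-abliterated-bf16-mlx
```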
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-bf16-mlx")
prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
aochongoliverli/Qwen2.5-3B-math8k-distill-AM-Distill-Qwen-32B-16k-5epochs-2e-5lr-step100
|
aochongoliverli
| 2025-08-14T01:08:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T01:04:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FrinzTheCoder/trainer_output
|
FrinzTheCoder
| 2025-08-14T00:45:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T17:03:30Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B
tags:
- generated_from_trainer
model-index:
- name: trainer_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.55.1
- Pytorch 2.7.1+cu118
- Datasets 4.0.0
- Tokenizers 0.21.2
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_8_all_37_0.0001_9600_1
|
winnieyangwannan
| 2025-08-14T00:25:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T00:24:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755129495
|
rvipitkirubbe
| 2025-08-14T00:24:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T00:24:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_8_all_37_0.0001_7680_1
|
winnieyangwannan
| 2025-08-14T00:22:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T00:20:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dharabhara/water_checker
|
Dharabhara
| 2025-08-14T00:17:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-14T00:15:59Z |
import gradio as gr
import random
custom_css = """
body {
background: linear-gradient(-45deg, #a1c4fd, #c2e9fb, #fbc2eb, #a6c1ee);
background-size: 400% 400%;
animation: gradientShift 20s ease infinite;
font-family: 'Segoe UI', sans-serif;
overflow-x: hidden;
color: #003344;
}
@keyframes gradientShift {
0% {background-position: 0% 50%;}
50% {background-position: 100% 50%;}
100% {background-position: 0% 50%;}
}
h1.glow {
font-size: 3.2em;
font-weight: 900;
color: #005f99;
text-align: center;
animation: glowPulse 3s ease-in-out infinite;
margin-bottom: 30px;
}
@keyframes glowPulse {
0%, 100% {
text-shadow:
0 0 5px #00bfff,
0 0 15px #00bfff,
0 0 20px #00bfff,
0 0 40px #00bfff;
}
50% {
text-shadow:
0 0 15px #0077cc,
0 0 25px #0077cc,
0 0 35px #0077cc,
0 0 55px #0077cc;
}
}
#start-screen {
height: 80vh;
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
}
/* Neon Start button */
#start-button {
font-size: 2.2em;
font-weight: 900;
color: #00f0ff;
background: linear-gradient(45deg, #00cfff, #00f0ff, #00b0ff, #00f0ff);
background-size: 300% 300%;
border: none;
border-radius: 20px;
padding: 25px 70px;
cursor: pointer;
box-shadow:
0 0 10px #00e0ff,
0 0 20px #00e0ff,
0 0 40px #00e0ff,
inset 0 0 8px #00aaff;
animation: neonPulse 3s ease-in-out infinite;
transition: transform 0.3s ease, box-shadow 0.3s ease;
text-transform: uppercase;
letter-spacing: 2px;
user-select: none;
}
#start-button:hover {
transform: scale(1.15) rotate(-2deg);
box-shadow:
0 0 15px #00ffff,
0 0 30px #00ffff,
0 0 60px #00ffff,
inset 0 0 12px #00ccff;
}
@keyframes neonPulse {
0%, 100% {
background-position: 0% 50%;
box-shadow:
0 0 10px #00e0ff,
0 0 20px #00e0ff,
0 0 40px #00e0ff,
inset 0 0 8px #00aaff;
}
50% {
background-position: 100% 50%;
box-shadow:
0 0 20px #00ffff,
0 0 40px #00ffff,
0 0 70px #00ffff,
inset 0 0 12px #00ccff;
}
}
/* Tab Titles */
.gr-tabs .gr-tabs-header {
font-weight: 700;
font-size: 1.3em;
color: #004466;
border-bottom: 3px solid #00bfff;
padding-bottom: 6px;
margin-bottom: 15px;
}
.gr-tabs .gr-tabs-header > div.gr-tabs-header-item:hover {
color: #00bfff !important;
cursor: pointer;
}
/* Tab panel background and border */
.gr-tabs .gr-tabs-panel {
background: rgba(255, 255, 255, 0.85);
border-radius: 15px;
padding: 25px;
box-shadow: 0 8px 25px rgba(0, 191, 255, 0.15);
transition: background-color 0.4s ease;
}
/* Headings inside tabs */
h2.tab-title {
color: #007acc;
font-weight: 800;
font-size: 2em;
margin-bottom: 15px;
text-shadow: 0 0 8px #00bfff66;
animation: fadeInDown 1s ease forwards;
}
@keyframes fadeInDown {
from {
opacity: 0;
transform: translateY(-15px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
/* Buttons in tab content */
button {
transition: 0.3s ease-in-out;
font-weight: bold;
background: linear-gradient(135deg, #0099ff 0%, #00ccff 100%);
border-radius: 12px;
color: white !important;
box-shadow: 0 6px 15px rgba(0, 204, 255, 0.5);
border: none;
padding: 12px 25px;
font-size: 1.1em;
}
button:hover {
transform: scale(1.05);
background: linear-gradient(135deg, #00ccff 0%, #0099ff 100%);
box-shadow: 0 8px 25px rgba(0, 204, 255, 0.8);
color: white !important;
cursor: pointer;
}
/* Number inputs */
input[type=number] {
transition: box-shadow 0.3s ease, border-color 0.3s ease;
border: 1.5px solid #ccc;
border-radius: 8px;
padding: 7px 12px;
font-size: 1em;
width: 100%;
}
input[type=number]:focus {
outline: none;
border-color: #00bfff;
box-shadow: 0 0 10px #00bfff80;
}
/* Output box styling */
.output-html > div {
animation: glow 2.5s infinite;
transition: transform 0.3s ease;
box-shadow: 0 0 15px rgba(0, 191, 255, 0.3);
border-radius: 14px;
background: #f0faff;
padding: 20px;
font-size: 1.15em;
color: #003344;
}
@keyframes glow {
0% { box-shadow: 0 0 10px rgba(0,191,255,0.3); }
50% { box-shadow: 0 0 25px rgba(0,191,255,0.7); }
100% { box-shadow: 0 0 10px rgba(0,191,255,0.3); }
}
/* Usage labels */
.usage-label {
display: inline-block;
animation: pulseGlow 3s ease-in-out infinite;
font-weight: bold;
margin-bottom: 12px;
padding: 7px 15px;
border-radius: 15px;
font-size: 1.3em;
user-select: none;
}
.usage-label.safe {
background-color: #28a745;
color: white;
box-shadow: 0 0 15px #28a745cc;
}
.usage-label.medium {
background-color: #ffc107;
color: black;
box-shadow: 0 0 15px #ffc107cc;
}
.usage-label.dangerous {
background-color: #dc3545;
color: white;
box-shadow: 0 0 15px #dc3545cc;
}
@keyframes pulseGlow {
0%, 100% {
text-shadow: 0 0 5px rgba(0,0,0,0.1);
}
50% {
text-shadow: 0 0 12px rgba(0,0,0,0.3);
}
}
/* Markdown styling for usage and standards tabs */
.gr-markdown p, .gr-markdown ul, .gr-markdown li {
font-size: 1.05em;
line-height: 1.5em;
color: #004466;
margin-bottom: 10px;
}
.gr-markdown ul {
padding-left: 20px;
}
.gr-markdown li {
margin-bottom: 6px;
}
"""
def classify_water(pH, turbidity, hardness, solids, sulfate,
                   conductivity, fluoride, nitrate,
                   coliform_count, e_coli,
                   chloride, calcium, magnesium, sodium,
                   temperature, iron, manganese, phosphate, zinc, lead):
    # SAFE requires every checked parameter to fall within drinking-water
    # limits (see the Standards tab). Note: chloride, calcium, magnesium and
    # sodium are collected for display but not checked, and a few thresholds
    # here differ slightly from the Standards tab (e.g. manganese 0.05 vs 0.1 mg/L).
    safe = True
    if not (6.5 <= pH <= 8.5): safe = False
    if turbidity > 5: safe = False
    if hardness > 200: safe = False
    if solids > 500: safe = False
    if sulfate > 250: safe = False
    if conductivity > 500: safe = False
    if fluoride > 1.5 or fluoride < 0.6: safe = False
    if nitrate > 45: safe = False
    if coliform_count > 0 or e_coli > 0: safe = False
    if temperature < 0 or temperature > 40: safe = False
    if iron > 0.3: safe = False
    if manganese > 0.05: safe = False
    if phosphate > 0.1: safe = False
    if zinc > 5: safe = False
    if lead > 0.01: safe = False
    # MEDIUM applies when the water is not SAFE but contamination is not
    # extreme (used by the elif branch below).
    medium = True
    if coliform_count > 500 or e_coli > 100: medium = False
    if solids > 2000: medium = False
    if turbidity > 50: medium = False
if safe:
classification_html = f"""
<div class="usage-label safe">SAFE ✅</div>
<p><b>Meaning:</b> Water is safe for drinking, cooking, bathing, and all household uses.</p>
<ul>
<li>✔ Drinking</li>
<li>✔ Cooking</li>
<li>✔ Bathing</li>
<li>✔ Washing clothes/utensils</li>
<li>✔ Irrigation</li>
</ul>
"""
elif medium:
classification_html = f"""
<div class="usage-label medium">MEDIUM ⚠</div>
<p><b>Meaning:</b> Water is <u>not recommended for drinking</u> but may be used for other household uses with caution.</p>
<ul>
<li>❌ Not for drinking</li>
<li>✔ Bathing (if no open wounds)</li>
<li>✔ Washing clothes/utensils (TDS < 1000 mg/L, Turbidity < 10 NTU)</li>
<li>✔ Irrigation (TDS < 2000 mg/L, EC < 2000 µS/cm, Nitrate < 50 mg/L)</li>
</ul>
"""
else:
classification_html = f"""
<div class="usage-label dangerous">DANGEROUS ❌</div>
<p><b>Meaning:</b> Water is unsafe for any use without proper treatment.</p>
<ul>
<li>❌ Do NOT drink or cook</li>
<li>❌ Avoid contact with skin and eyes</li>
<li>❌ Do NOT use for washing food items</li>
<li>❌ Do NOT use for irrigation if pathogens present</li>
</ul>
"""
return classification_html
def fetch_live_data():
    """Simulate one 'live' sensor reading: random values within plausible ranges."""
    return (round(random.uniform(6.5, 8.5), 2),   # pH
            round(random.uniform(0, 5), 2),       # turbidity
            random.randint(50, 200),              # hardness
            random.randint(200, 500),             # solids
            random.randint(50, 250),              # sulfate
            random.randint(100, 500),             # conductivity
            round(random.uniform(0.5, 1.5), 2),   # fluoride
            random.randint(10, 45),               # nitrate
            random.randint(0, 10),                # coliform_count
            random.randint(0, 5),                 # e_coli
            random.randint(100, 250),             # chloride
            random.randint(20, 75),               # calcium
            random.randint(10, 30),               # magnesium
            random.randint(50, 200),              # sodium
            random.randint(5, 40),                # temperature
            round(random.uniform(0, 0.5), 2),     # iron
            round(random.uniform(0, 0.1), 2),     # manganese
            round(random.uniform(0, 0.2), 2),     # phosphate
            random.randint(0, 10),                # zinc
            round(random.uniform(0, 0.05), 3))    # lead
usage_text = """
# 🚰 Water Usage Guide

### SAFE
- ✔ Drinking
- ✔ Cooking
- ✔ Bathing
- ✔ Washing clothes/utensils
- ✔ Irrigation

### MEDIUM
- ❌ Not for drinking
- ✔ Bathing (if no open wounds)
- ✔ Washing clothes/utensils (TDS < 1000 mg/L, Turbidity < 10 NTU)
- ✔ Irrigation (TDS < 2000 mg/L, EC < 2000 µS/cm, Nitrate < 50 mg/L)

### DANGEROUS
- ❌ Do NOT drink or cook
- ❌ Avoid contact with skin and eyes
- ❌ Do NOT use for washing food items
- ❌ Do NOT use for irrigation if pathogens present
"""
standards_text = """
# 💧 Water Quality Standards (WHO + BIS IS 10500:2012)
*Drinking Water Standards:*
- pH: 6.5 – 8.5
- Turbidity: ≤ 1 NTU (desirable), max 5 NTU
- TDS (Total Dissolved Solids): ≤ 500 mg/L (max 2000 mg/L)
- Hardness: ≤ 200 mg/L (max 600 mg/L)
- Sulphate: ≤ 200 mg/L (max 400 mg/L)
- Conductivity: ≤ 500 µS/cm (max 2000 µS/cm)
- Fluoride: 0.6 – 1.5 mg/L
- Nitrate: ≤ 45 mg/L
- Total Coliform: 0 CFU/100mL
- E. coli: 0 CFU/100mL
---
### Additional Usage Guidelines
- Iron: ≤ 0.3 mg/L
- Manganese: ≤ 0.1 mg/L
- Phosphate: ≤ 0.1 mg/L
- Zinc: ≤ 5 mg/L
- Lead: ≤ 0.01 mg/L
"""
with gr.Blocks(css=custom_css) as demo:
    # Start screen
    with gr.Column(elem_id="start-screen") as start_screen:
        gr.Markdown("<h1 class='glow'>Water Quality Classifier</h1>")
        start_btn = gr.Button("Start", elem_id="start-button")
    # Main app container (initially hidden)
    main_app = gr.Tabs(visible=False)
    with main_app:
        with gr.Tab("Classification"):
            gr.Markdown("<h2 class='tab-title'>Classify Water Quality</h2>")
            with gr.Row():
                pH = gr.Number(label="pH", value=7.0, precision=2)
                turbidity = gr.Number(label="Turbidity (NTU)", value=1.0, precision=2)
                hardness = gr.Number(label="Hardness (mg/L)", value=150)
                solids = gr.Number(label="Solids (mg/L)", value=300)
            with gr.Row():
                sulfate = gr.Number(label="Sulfate (mg/L)", value=150)
                conductivity = gr.Number(label="Conductivity (µS/cm)", value=300)
                fluoride = gr.Number(label="Fluoride (mg/L)", value=1.0)
                nitrate = gr.Number(label="Nitrate (mg/L)", value=20)
            with gr.Row():
                coliform_count = gr.Number(label="Total Coliform (CFU/100mL)", value=0)
                e_coli = gr.Number(label="E. coli (CFU/100mL)", value=0)
                chloride = gr.Number(label="Chloride (mg/L)", value=150)
                calcium = gr.Number(label="Calcium (mg/L)", value=50)
            with gr.Row():
                magnesium = gr.Number(label="Magnesium (mg/L)", value=20)
                sodium = gr.Number(label="Sodium (mg/L)", value=100)
                temperature = gr.Number(label="Temperature (°C)", value=25)
            with gr.Row():
                iron = gr.Number(label="Iron (mg/L)", value=0.1)
                manganese = gr.Number(label="Manganese (mg/L)", value=0.02)
                phosphate = gr.Number(label="Phosphate (mg/L)", value=0.05)
                zinc = gr.Number(label="Zinc (mg/L)", value=1)
                lead = gr.Number(label="Lead (mg/L)", value=0.005)
            classify_btn = gr.Button("Classify Water")
            fetch_live_btn = gr.Button("Fetch Live Data")
            output_html = gr.HTML()
            # Shared input list; fetch_live_data returns values in this exact order.
            inputs = [pH, turbidity, hardness, solids, sulfate,
                      conductivity, fluoride, nitrate,
                      coliform_count, e_coli,
                      chloride, calcium, magnesium, sodium,
                      temperature, iron, manganese, phosphate, zinc, lead]
            classify_btn.click(fn=classify_water, inputs=inputs, outputs=output_html)
            fetch_live_btn.click(fn=fetch_live_data, inputs=None, outputs=inputs)
        with gr.Tab("Water Usage Guide"):
            gr.Markdown("<h2 class='tab-title'>Water Usage Guide</h2>")
            gr.Markdown(usage_text)
        with gr.Tab("Water Quality Standards"):
            gr.Markdown("<h2 class='tab-title'>Water Quality Standards</h2>")
            gr.Markdown(standards_text)
    # Hide the whole start screen (not just the button) and reveal the main tabs.
    def start_app():
        return gr.update(visible=False), gr.update(visible=True)
    start_btn.click(fn=start_app, inputs=None, outputs=[start_screen, main_app])

demo.launch()
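# Quick sanity check (direct call, bypassing the UI; assumes the module is
# imported rather than launched): the default input values above all fall
# within the SAFE limits, so classify_water returns the SAFE card.
# html = classify_water(7.0, 1.0, 150, 300, 150, 300, 1.0, 20, 0, 0,
#                       150, 50, 20, 100, 25, 0.1, 0.02, 0.05, 1, 0.005)
# assert "SAFE" in html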
|
Tano13/WAN_dr34mj0b.safetensors
|
Tano13
| 2025-08-14T00:09:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/wai-ani-nsfw-ponyxl-v7-sdxl",
"base_model:adapter:John6666/wai-ani-nsfw-ponyxl-v7-sdxl",
"region:us"
] |
text-to-image
| 2025-08-14T00:08:12Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
    url: images/181aee0825b04ee58aeea2f49dab2f52.jpeg
  text: '-'
base_model: John6666/wai-ani-nsfw-ponyxl-v7-sdxl
instance_prompt: dr34mj0b
---
# WAN_dr34mj0b.safetensors
<Gallery />
## Trigger words
You should use `dr34mj0b` to trigger the image generation.
## Download model
[Download](/Tano13/WAN_dr34mj0b.safetensors/tree/main) them in the Files & versions tab.
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755127684
|
rvipitkirubbe
| 2025-08-13T23:54:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T23:54:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/velvet-s-mythic-fantasy-styles-flux-pony-illustrious
|
Muapi
| 2025-08-13T23:44:27Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-13T23:44:10Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Velvet's Mythic Fantasy Styles | Flux + Pony + illustrious

**Base model**: Flux.1 D
**Trained words**: D1gitalL1nes
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:599757@1957771", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
aduarte1/pythia410-summary_rm_15_3e
|
aduarte1
| 2025-08-13T23:29:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T23:29:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-q8-hi-mlx
|
nightmedia
| 2025-08-13T23:29:04Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"base_model:huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-13T23:11:42Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
base_model: huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated
pipeline_tag: text-generation
library_name: mlx
tags:
- abliterated
- uncensored
- mlx
---
# Huihui-Qwen3-4B-Instruct-2507-abliterated-q8-hi-mlx
This model [Huihui-Qwen3-4B-Instruct-2507-abliterated-q8-hi-mlx](https://huggingface.co/nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-q8-hi-mlx) was
converted to MLX format from [huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-q8-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ChatterjeeLab/PepMLM-650M
|
ChatterjeeLab
| 2025-08-13T23:10:57Z | 174 | 21 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"fill-mask",
"arxiv:2310.03842",
"doi:10.57967/hf/5858",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-27T18:47:24Z |
---
license: mit
extra_gated_fields:
  Name: text
  Company: text
  Country: country
  Specific date: date_picker
  I want to use this model for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  I agree to include the authors of the code (Tianlai Chen and Pranam Chatterjee) as authors on manuscripts with data from designed peptides: checkbox
  I agree to share generated sequences and associated data with authors before publishing: checkbox
  I agree not to file patents on any sequences generated by this model: checkbox
  I agree to use this model for non-commercial use ONLY: checkbox
---
**PepMLM: Target Sequence-Conditioned Generation of Peptide Binders via Masked Language Modeling**

In this work, we introduce **PepMLM**, a purely target sequence-conditioned *de novo* generator of linear peptide binders.
By employing a novel masking strategy that uniquely positions cognate peptide sequences at the terminus of target protein sequences,
PepMLM tasks the state-of-the-art ESM-2 pLM to fully reconstruct the binder region,
achieving low perplexities matching or improving upon previously-validated peptide-protein sequence pairs.
After successful *in silico* benchmarking with AlphaFold-Multimer, we experimentally verify PepMLM’s efficacy via fusion of model-derived peptides to E3 ubiquitin ligase domains, demonstrating endogenous degradation of target substrates in cellular models.
In total, PepMLM enables the generative design of candidate binders to any target protein, without the requirement of target structure, empowering downstream programmable proteome editing applications.
- Demo: HuggingFace Space Demo [Link](https://huggingface.co/spaces/TianlaiChen/PepMLM) [temporarily unavailable]
- Colab Notebook: [Link](https://colab.research.google.com/drive/1u0i-LBog_lvQ5YRKs7QLKh_RtI-tV8qM?usp=sharing)
- Preprint: [Link](https://arxiv.org/abs/2310.03842)
- Nature Biotechnology: [Link](https://www.nature.com/articles/s41587-025-02761-2)
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("TianlaiChen/PepMLM-650M")
model = AutoModelForMaskedLM.from_pretrained("TianlaiChen/PepMLM-650M")
```
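As a rough sketch of the masking strategy described above, one can append mask tokens to the end of a target protein sequence and let the model reconstruct the binder region. This is only an illustration of the idea, not the authors' decoding procedure (see the Colab notebook for that); the target sequence and binder length below are arbitrary placeholders, and `tokenizer`/`model` come from the snippet above.
```
# Illustrative only: greedy fill-in of a terminus-masked binder region.
import torch

target_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder target sequence
binder_len = 12                                   # placeholder binder length

masked = target_seq + tokenizer.mask_token * binder_len
inputs = tokenizer(masked, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Take the most likely residue at each masked position.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
binder_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(binder_ids).replace(" ", ""))
```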

|
AmirMohseni/grpo-gemma-3-4b
|
AmirMohseni
| 2025-08-13T23:07:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T10:51:10Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: grpo-gemma-3-4b
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for grpo-gemma-3-4b
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmirMohseni/grpo-gemma-3-4b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/amirmohseni-maastricht-university/grpo-math-training/runs/jv2xfd1t)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.1
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755124505
|
mang3dd
| 2025-08-13T23:00:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T23:00:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DeathGodlike/PatriSlush-DarkRPMax-12B_EXL3
|
DeathGodlike
| 2025-08-13T22:37:43Z | 0 | 0 |
safetensors
|
[
"safetensors",
"exl3",
"4-bit",
"6-bit",
"8-bit",
"text-generation",
"base_model:pot99rta/PatriSlush-DarkRPMax-12B",
"base_model:quantized:pot99rta/PatriSlush-DarkRPMax-12B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T22:37:41Z |
---
license: apache-2.0
base_model:
- pot99rta/PatriSlush-DarkRPMax-12B
base_model_relation: quantized
pipeline_tag: text-generation
library_name: safetensors
tags:
- exl3
- 4-bit
- 6-bit
- 8-bit
---
## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/PatriSlush-DarkRPMax-12B_EXL3/tree/H8-4.0BPW) | [H8-6.0BPW](https://huggingface.co/DeathGodlike/PatriSlush-DarkRPMax-12B_EXL3/tree/H8-6.0BPW) | [H8-8.0BPW](https://huggingface.co/DeathGodlike/PatriSlush-DarkRPMax-12B_EXL3/tree/H8-8.0BPW) ]
# Original model: [PatriSlush-DarkRPMax-12B](https://huggingface.co/pot99rta/PatriSlush-DarkRPMax-12B) by [pot99rta](https://huggingface.co/pot99rta)
|
vandanarangaswamy/peft_dpo_pairrm
|
vandanarangaswamy
| 2025-08-13T22:33:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-08-13T22:32:57Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755121812
|
elmenbillion
| 2025-08-13T22:15:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T22:15:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SalmonAI123/whisper-small-vi-vlsp
|
SalmonAI123
| 2025-08-13T22:09:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-13T05:00:30Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-vi-vlsp
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-vi-vlsp
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0808
- Wer: 14.2857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1227 | 0.2216 | 500 | 0.1274 | 21.2294 |
| 0.1044 | 0.4433 | 1000 | 0.1056 | 19.4552 |
| 0.0888 | 0.6649 | 1500 | 0.0954 | 16.9265 |
| 0.0877 | 0.8865 | 2000 | 0.0894 | 15.7455 |
| 0.0592 | 1.1082 | 2500 | 0.0859 | 15.1250 |
| 0.0541 | 1.3298 | 3000 | 0.0836 | 15.6218 |
| 0.0542 | 1.5514 | 3500 | 0.0820 | 16.8766 |
| 0.0574 | 1.7730 | 4000 | 0.0808 | 14.2857 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
alexgenovese/FLUX.1-Kontext-dev-Smashed
|
alexgenovese
| 2025-08-13T22:05:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"pruna-ai",
"region:us"
] | null | 2025-08-13T21:59:21Z |
---
library_name: diffusers
tags:
- pruna-ai
---
# Model Card for alexgenovese/FLUX.1-Kontext-dev-Smashed
This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
It is a compressed ("smashed") version of [Flux Kontext Dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev).
## Usage
First things first, you need to install the pruna library:
```bash
pip install pruna
```
You can [use the diffusers library to load the model](https://huggingface.co/alexgenovese/FLUX.1-Kontext-dev-Smashed?library=diffusers), but this might not include all optimizations by default.
To ensure that all optimizations are applied, use the pruna library to load the model using the following code:
```python
from pruna import PrunaModel
loaded_model = PrunaModel.from_hub(
"alexgenovese/FLUX.1-Kontext-dev-smashed"
)
```
After loading the model, you can use the inference methods of the original model. Take a look at the [documentation](https://pruna.readthedocs.io/en/latest/index.html) for more usage information.
## Smash Configuration
The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.
## 🌍 Follow Me
[](https://twitter.com/alexgenovese)
[](https://github.com/alexgenovese)
|
g-assismoraes/Qwen3-4B-Base-fpi-alpha2.0-var-ep5
|
g-assismoraes
| 2025-08-13T21:50:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T21:46:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
g-assismoraes/Qwen3-4B-Base-fpi-alpha1.6-var-ep5
|
g-assismoraes
| 2025-08-13T21:46:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T21:42:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
g-assismoraes/Qwen3-4B-Base-fpi-alpha1.0-var-ep5
|
g-assismoraes
| 2025-08-13T21:41:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T21:38:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DebadityaMalakar/ArchaicLLM-v0
|
DebadityaMalakar
| 2025-08-13T21:29:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T18:29:55Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B-Instruct-2507
library_name: transformers
---
|
allenai/olmOCR-7B-0825
|
allenai
| 2025-08-13T21:22:24Z | 0 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"en",
"dataset:allenai/olmOCR-mix-0225",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-13T20:54:32Z |
---
language:
- en
license: apache-2.0
datasets:
- allenai/olmOCR-mix-0225
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
---
<img alt="olmOCR Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmocr/olmocr.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
# olmOCR-7B-0825
This is a release of the olmOCR model that's fine-tuned from Qwen2.5-VL-7B-Instruct using the
[olmOCR-mix-0225](https://huggingface.co/datasets/allenai/olmOCR-mix-0225) dataset.
Quick links:
- 📃 [Paper](https://olmocr.allenai.org/papers/olmocr.pdf)
- 🤗 [Dataset](https://huggingface.co/datasets/allenai/olmOCR-mix-0225)
- 🛠️ [Code](https://github.com/allenai/olmocr)
- 🎮 [Demo](https://olmocr.allenai.org/)
The best way to use this model is via the [olmOCR toolkit](https://github.com/allenai/olmocr).
The toolkit comes with an efficient inference setup via sglang that can handle millions of documents
at scale.
## Usage
This model expects as input a single document image, rendered such that the longest dimension is 1288 pixels.
The prompt must then contain the additional metadata from the document, and the easiest way to generate this
is to use the methods provided by the [olmOCR toolkit](https://github.com/allenai/olmocr).
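For quick experimentation outside the toolkit, the checkpoint can also be loaded as a plain Qwen2.5-VL model with transformers. The sketch below is a simplification under that assumption: it uses a generic prompt and a pre-rendered `page.png` placeholder instead of the metadata-rich prompt and PDF rendering that the toolkit builds for you, so expect lower quality than the full pipeline.
```python
# Minimal sketch (not the full olmOCR pipeline): load the checkpoint as a
# Qwen2.5-VL model and run it on one pre-rendered page image.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image

processor = AutoProcessor.from_pretrained("allenai/olmOCR-7B-0825")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "allenai/olmOCR-7B-0825", torch_dtype="auto", device_map="auto"
)

page = Image.open("page.png")  # placeholder: render the page yourself, longest side 1288 px
messages = [{"role": "user", "content": [
    {"type": "image"},
    # Placeholder prompt; the real prompt should carry the document metadata
    # that the olmOCR toolkit builds for each page.
    {"type": "text", "text": "Transcribe the text on this page."},
]}]
text = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(text=[text], images=[page], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```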
## License and use
olmOCR is licensed under the Apache 2.0 license.
olmOCR is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
|
AngelinaZanardi/bge-m3-edu-scorer-lr3e5-bs32-swe
|
AngelinaZanardi
| 2025-08-13T21:12:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T14:51:56Z |
---
library_name: transformers
license: mit
base_model: BAAI/bge-m3
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: bge-m3-edu-scorer-lr3e5-bs32-swe
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bge-m3-edu-scorer-lr3e5-bs32-swe
This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7609
- Precision: 0.4097
- Recall: 0.3946
- F1 Macro: 0.3894
- Accuracy: 0.4890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Macro | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| No log | 0 | 0 | 4.3577 | 0.0901 | 0.1673 | 0.0815 | 0.3090 |
| 0.9246 | 0.6793 | 1000 | 0.8882 | 0.4543 | 0.3637 | 0.3614 | 0.4195 |
| 0.908 | 1.3587 | 2000 | 0.8582 | 0.4426 | 0.3657 | 0.3634 | 0.4308 |
| 0.9112 | 2.0380 | 3000 | 0.8497 | 0.4632 | 0.3761 | 0.3777 | 0.4379 |
| 0.869 | 2.7174 | 4000 | 0.8498 | 0.4627 | 0.3799 | 0.3816 | 0.4338 |
| 0.8834 | 3.3967 | 5000 | 0.8571 | 0.4422 | 0.3826 | 0.3800 | 0.4284 |
| 0.8369 | 4.0761 | 6000 | 0.8412 | 0.4466 | 0.3857 | 0.3847 | 0.4377 |
| 0.8543 | 4.7554 | 7000 | 0.8446 | 0.4469 | 0.3902 | 0.3861 | 0.4306 |
| 0.8693 | 5.4348 | 8000 | 0.8176 | 0.4516 | 0.3926 | 0.3930 | 0.4502 |
| 0.8446 | 6.1141 | 9000 | 0.8108 | 0.4510 | 0.3920 | 0.3932 | 0.4504 |
| 0.8509 | 6.7935 | 10000 | 0.8066 | 0.4605 | 0.3939 | 0.3960 | 0.4488 |
| 0.8289 | 7.4728 | 11000 | 0.8037 | 0.4630 | 0.3942 | 0.3966 | 0.4502 |
| 0.8339 | 8.1522 | 12000 | 0.8106 | 0.4520 | 0.3967 | 0.3958 | 0.4449 |
| 0.8295 | 8.8315 | 13000 | 0.7998 | 0.4526 | 0.3982 | 0.3974 | 0.4512 |
| 0.8188 | 9.5109 | 14000 | 0.7853 | 0.4487 | 0.3889 | 0.3907 | 0.4552 |
| 0.8334 | 10.1902 | 15000 | 0.7906 | 0.4506 | 0.3989 | 0.3994 | 0.4552 |
| 0.834 | 10.8696 | 16000 | 0.7917 | 0.4495 | 0.3915 | 0.3914 | 0.4486 |
| 0.7841 | 11.5489 | 17000 | 0.7799 | 0.4637 | 0.4069 | 0.4099 | 0.4643 |
| 0.8051 | 12.2283 | 18000 | 0.7859 | 0.4580 | 0.4028 | 0.4046 | 0.4574 |
| 0.8113 | 12.9076 | 19000 | 0.7683 | 0.4544 | 0.3946 | 0.3977 | 0.4659 |
| 0.8058 | 13.5870 | 20000 | 0.7745 | 0.4702 | 0.4053 | 0.4097 | 0.4625 |
| 0.7872 | 14.2663 | 21000 | 0.7702 | 0.4605 | 0.4021 | 0.4072 | 0.4671 |
| 0.7692 | 14.9457 | 22000 | 0.7689 | 0.4590 | 0.4027 | 0.4066 | 0.4669 |
| 0.8065 | 15.625 | 23000 | 0.7730 | 0.4668 | 0.4056 | 0.4087 | 0.4621 |
| 0.7927 | 16.3043 | 24000 | 0.7692 | 0.4681 | 0.4096 | 0.4137 | 0.4691 |
| 0.8039 | 16.9837 | 25000 | 0.7687 | 0.4601 | 0.4025 | 0.4061 | 0.4633 |
| 0.7847 | 17.6630 | 26000 | 0.7622 | 0.4640 | 0.4040 | 0.4089 | 0.4693 |
| 0.7791 | 18.3424 | 27000 | 0.7675 | 0.4643 | 0.4058 | 0.4089 | 0.4663 |
| 0.7499 | 19.0217 | 28000 | 0.7655 | 0.4661 | 0.4047 | 0.4092 | 0.4669 |
| 0.782 | 19.7011 | 29000 | 0.7655 | 0.4639 | 0.4045 | 0.4083 | 0.4659 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
MariChristmass/comicsanime
|
MariChristmass
| 2025-08-13T21:07:19Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T21:06:36Z |
---
license: apache-2.0
---
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755117641
|
mang3dd
| 2025-08-13T21:06:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T21:06:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andrewtim-mats/indep_canary_doctok_cp5000
|
andrewtim-mats
| 2025-08-13T21:05:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:nvidia/Llama-3_3-Nemotron-Super-49B-v1",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:nvidia/Llama-3_3-Nemotron-Super-49B-v1",
"region:us"
] |
text-generation
| 2025-08-13T21:03:22Z |
---
base_model: nvidia/Llama-3_3-Nemotron-Super-49B-v1
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:nvidia/Llama-3_3-Nemotron-Super-49B-v1
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
nursimakgul/backend-gpt2-finetuned-v2
|
nursimakgul
| 2025-08-13T21:00:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T20:58:12Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: backend-gpt2-finetuned-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# backend-gpt2-finetuned-v2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
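These settings correspond roughly to the following `TrainingArguments` (a sketch; `output_dir` and the dataset/Trainer wiring are assumptions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="backend-gpt2-finetuned-v2",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
)
```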
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
xfu20/BEMGPT_tp1
|
xfu20
| 2025-08-13T20:47:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T20:46:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
powermove72/granite-3.3-2b-finetome
|
powermove72
| 2025-08-13T20:45:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"granite",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T20:41:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
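A minimal text-generation sketch, assuming standard causal-LM usage for this Granite checkpoint:
```python
from transformers import pipeline

# Generic text-generation pipeline; device_map="auto" requires accelerate.
pipe = pipeline("text-generation", model="powermove72/granite-3.3-2b-finetome", device_map="auto")
print(pipe("Explain gradient descent in one sentence.", max_new_tokens=64)[0]["generated_text"])
```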
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vasilii1/blockassist-bc-alert_pouncing_gull_1755116926
|
Vasilii1
| 2025-08-13T20:44:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert pouncing gull",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T20:44:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert pouncing gull
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
PuristanLabs1/Urdu_TurnDetection
|
PuristanLabs1
| 2025-08-13T20:42:48Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"model-index",
"region:us"
] |
text-classification
| 2025-08-13T20:40:28Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: برائیڈل فیشن ویک ہر سال منعقد ہوتا ہے۔
- text: وہ کچھ دیر خاموش رہا پھر
- text: عمر شریف نے کامیڈی کو نئی جہت دی۔
- text: میں اس وقت بہت مصروف
- text: جب تک تم اپنا سبق
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 1.0
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'پاکستان کا قومی پھول چنبیلی ہے۔'</li><li>'نہاری لاہور کی خاص سوغات ہے۔'</li><li>'وقت کسی کا انتظار نہیں کرتا۔'</li></ul> |
| 0 | <ul><li>'اس خیال سے کہ'</li><li>'اس نے مجھے دعوت دی'</li><li>'اس نے ایک اور کوشش'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("PuristanLabs1/Urdu_TurnDetection")
# Run inference
preds = model("جب تک تم اپنا سبق")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 2 | 6.0774 | 13 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 1016 |
| 1 | 1064 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
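Expressed in code, these correspond roughly to the following SetFit `TrainingArguments` (a sketch; the dataset and `Trainer` wiring are omitted):
```python
from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

args = TrainingArguments(
    batch_size=(32, 32),              # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)
```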
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.3724 | - |
| 0.0192 | 50 | 0.3204 | - |
| 0.0385 | 100 | 0.2491 | - |
| 0.0577 | 150 | 0.1363 | - |
| 0.0769 | 200 | 0.0216 | - |
| 0.0962 | 250 | 0.0049 | - |
| 0.1154 | 300 | 0.0019 | - |
| 0.1346 | 350 | 0.0006 | - |
| 0.1538 | 400 | 0.0005 | - |
| 0.1731 | 450 | 0.0002 | - |
| 0.1923 | 500 | 0.0002 | - |
| 0.2115 | 550 | 0.0001 | - |
| 0.2308 | 600 | 0.0001 | - |
| 0.25 | 650 | 0.0001 | - |
| 0.2692 | 700 | 0.0001 | - |
| 0.2885 | 750 | 0.0001 | - |
| 0.3077 | 800 | 0.0002 | - |
| 0.3269 | 850 | 0.0002 | - |
| 0.3462 | 900 | 0.0001 | - |
| 0.3654 | 950 | 0.0001 | - |
| 0.3846 | 1000 | 0.0001 | - |
| 0.4038 | 1050 | 0.0001 | - |
| 0.4231 | 1100 | 0.0001 | - |
| 0.4423 | 1150 | 0.0001 | - |
| 0.4615 | 1200 | 0.0 | - |
| 0.4808 | 1250 | 0.0 | - |
| 0.5 | 1300 | 0.0 | - |
| 0.5192 | 1350 | 0.0 | - |
| 0.5385 | 1400 | 0.0 | - |
| 0.5577 | 1450 | 0.0 | - |
| 0.5769 | 1500 | 0.0 | - |
| 0.5962 | 1550 | 0.0 | - |
| 0.6154 | 1600 | 0.0 | - |
| 0.6346 | 1650 | 0.0 | - |
| 0.6538 | 1700 | 0.0 | - |
| 0.6731 | 1750 | 0.0 | - |
| 0.6923 | 1800 | 0.0 | - |
| 0.7115 | 1850 | 0.0 | - |
| 0.7308 | 1900 | 0.0 | - |
| 0.75 | 1950 | 0.0 | - |
| 0.7692 | 2000 | 0.0 | - |
| 0.7885 | 2050 | 0.0 | - |
| 0.8077 | 2100 | 0.0 | - |
| 0.8269 | 2150 | 0.0 | - |
| 0.8462 | 2200 | 0.0 | - |
| 0.8654 | 2250 | 0.0 | - |
| 0.8846 | 2300 | 0.0 | - |
| 0.9038 | 2350 | 0.0 | - |
| 0.9231 | 2400 | 0.0 | - |
| 0.9423 | 2450 | 0.0 | - |
| 0.9615 | 2500 | 0.0 | - |
| 0.9808 | 2550 | 0.0 | - |
| 1.0 | 2600 | 0.0 | - |
| 1.0192 | 2650 | 0.0 | - |
| 1.0385 | 2700 | 0.0 | - |
| 1.0577 | 2750 | 0.0 | - |
| 1.0769 | 2800 | 0.0 | - |
| 1.0962 | 2850 | 0.0 | - |
| 1.1154 | 2900 | 0.0 | - |
| 1.1346 | 2950 | 0.0 | - |
| 1.1538 | 3000 | 0.0 | - |
| 1.1731 | 3050 | 0.0 | - |
| 1.1923 | 3100 | 0.0 | - |
| 1.2115 | 3150 | 0.0 | - |
| 1.2308 | 3200 | 0.0 | - |
| 1.25 | 3250 | 0.0 | - |
| 1.2692 | 3300 | 0.0 | - |
| 1.2885 | 3350 | 0.0 | - |
| 1.3077 | 3400 | 0.0 | - |
| 1.3269 | 3450 | 0.0 | - |
| 1.3462 | 3500 | 0.0 | - |
| 1.3654 | 3550 | 0.0 | - |
| 1.3846 | 3600 | 0.0 | - |
| 1.4038 | 3650 | 0.0 | - |
| 1.4231 | 3700 | 0.0 | - |
| 1.4423 | 3750 | 0.0 | - |
| 1.4615 | 3800 | 0.0 | - |
| 1.4808 | 3850 | 0.0 | - |
| 1.5 | 3900 | 0.0 | - |
| 1.5192 | 3950 | 0.0 | - |
| 1.5385 | 4000 | 0.0 | - |
| 1.5577 | 4050 | 0.0 | - |
| 1.5769 | 4100 | 0.0 | - |
| 1.5962 | 4150 | 0.0 | - |
| 1.6154 | 4200 | 0.0 | - |
| 1.6346 | 4250 | 0.0 | - |
| 1.6538 | 4300 | 0.0 | - |
| 1.6731 | 4350 | 0.0 | - |
| 1.6923 | 4400 | 0.0 | - |
| 1.7115 | 4450 | 0.0 | - |
| 1.7308 | 4500 | 0.0 | - |
| 1.75 | 4550 | 0.0 | - |
| 1.7692 | 4600 | 0.0 | - |
| 1.7885 | 4650 | 0.0 | - |
| 1.8077 | 4700 | 0.0 | - |
| 1.8269 | 4750 | 0.0 | - |
| 1.8462 | 4800 | 0.0 | - |
| 1.8654 | 4850 | 0.0 | - |
| 1.8846 | 4900 | 0.0 | - |
| 1.9038 | 4950 | 0.0 | - |
| 1.9231 | 5000 | 0.0 | - |
| 1.9423 | 5050 | 0.0 | - |
| 1.9615 | 5100 | 0.0 | - |
| 1.9808 | 5150 | 0.0 | - |
| 2.0 | 5200 | 0.0 | - |
### Framework Versions
- Python: 3.11.13
- SetFit: 1.1.3
- Sentence Transformers: 5.1.0
- Transformers: 4.55.0
- PyTorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Solana-Bot-trading/bloom-solana-bot
|
Solana-Bot-trading
| 2025-08-13T20:39:39Z | 0 | 0 | null |
[
"Bloom bot",
"solana",
"telegram",
"trading",
"bloom",
"region:us"
] | null | 2025-08-13T20:38:55Z |
---
title: Bloom Solana Bot
description: >-
Bloom Solana Bot is a lightning-fast trading bot built for the Solana blockchain.
  It runs directly inside Telegram, so you don't need to install anything complicated.
tags:
- Bloom bot
- solana
- telegram
- trading
- bloom
---
<h1>Bloom Solana Bot: The Ultimate Telegram Trading Tool</h1>
<p><a rel="nofollow" href="https://pbs.twimg.com/profile_images/1853964541261983748/oSitKQIe_400x400.jpg"><img alt="Bloom Bot logo" src="https://pbs.twimg.com/profile_images/1853964541261983748/oSitKQIe_400x400.jpg"></a></p>
<div align="center">
<a href="https://coinlab.saliwell.com/offer.php?offer=bloom">
<img src="https://img.shields.io/badge/-➡️_Start_trading_now! ⬅️_-245FF?style=for-the-badge&logo=solana&logoColor=white&fontSize=40" width="500">
</a>
</div>
<p>
Bloom Solana Bot is a lightning-fast trading bot built for the Solana blockchain. It runs directly inside <b>Telegram</b>, so you don't need to install anything complicated. Just open Telegram, search for the bot, and you're ready to start trading Solana tokens. No browser tabs, no clunky apps. Just simple, fast crypto trading on your phone or computer.
</p>
<p>
And it's not just for pros. Bloom is designed for everyone: beginners, degens, and serious traders. The interface is simple, but the features are powerful. You get all the tools you need to trade, automate, and manage your crypto portfolio, right from your Telegram chat.
</p>
<h2>Key Features of Bloom Solana Bot</h2>
<ul>
<li><b>Instant Telegram Trading</b></li>
<li><b>Sniping New Tokens</b></li>
<li><b>AFK Automated Trading</b></li>
<li><b>Copy Trading</b></li>
<li><b>Multi-Chain Support</b></li>
<li><b>Advanced Limit Orders</b></li>
<li><b>Security and Speed</b></li>
<li><b>Chrome Extension Integration</b></li>
</ul>
<h3>Trade Directly in Telegram</h3>
<div align="center">
<a href="https://coinlab.saliwell.com/offer.php?offer=bloom">
<img src="https://img.shields.io/badge/-➡️_Start_trading_now! ⬅️_-245FF?style=for-the-badge&logo=solana&logoColor=white&fontSize=40" width="500">
</a>
</div>
<p>
No need to leave your favorite messaging app. Bloom Solana Bot is fully integrated with Telegram. Open the bot, connect your wallet, and you can buy or sell Solana tokens instantly. Everything happens in chat. Want to buy a new memecoin? Paste the contract address and tap buy. Want to sell? Same process. No fuss.
</p>
<h3>Sniping: Be First, Not Last</h3>
<p>
Crypto moves fast. New tokens launch every hour. With Bloom's <b>sniping</b> feature, you can grab new tokens the second they hit the market. Set up a snipe with the contract address, choose your amount, and Bloom buys as soon as liquidity drops. This gives you a real shot at the lowest price before the hype kicks in. For volatile meme coins, you can set slippage up to 50% or higher. Sometimes you only get seconds. Bloom is built for speed.
</p>
<h3>AFK Mode: Trade While You Sleep</h3>
<p>
Set your rules, go AFK, and let Bloom do the work. You can tell the bot to buy tokens under a certain market cap, or sell when a profit target is hit. Want to stop losses before they get big? Add a stop-loss rule. Bloom's <b>AFK mode</b> runs 24/7. It checks prices, buys dips, and sells pumps, even when you're offline.
</p>
<h3>Copy Trading: Ride the Winners</h3>
<div align="center">
<a href="https://coinlab.saliwell.com/offer.php?offer=bloom">
<img src="https://img.shields.io/badge/-➡️_Start_trading_now! ⬅️_-245FF?style=for-the-badge&logo=solana&logoColor=white&fontSize=40" width="500">
</a>
</div>
<p>
Not sure what to buy? Let someone else lead. Bloom's <b>copy trading</b> lets you follow top wallets. When they buy, you buy. When they sell, you sell. The bot copies trades instantly, even landing in the same Solana block as the original wallet. It's like having a pro trader in your pocket.
</p>
<h3>Multi-Chain and Multi-Wallet</h3>
<p>
Bloom isn't just for Solana. You can trade on Ethereum, BSC, Base, and more, all from the same bot. Have multiple wallets? Manage them all in one place. Switch between wallets with a tap. Track your positions across every chain. No more logging in and out.
</p>
<h3>Limit Orders and Advanced Settings</h3>
<p>
Want more control? Set <b>limit orders</b> to buy or sell at exact prices. Use advanced settings to fine-tune your trades. Want to split tokens across wallets? Bloom does that too. Need to bridge assets between chains? There's an in-bot bridge, powered by deBridge. The bot even supports OCR, so you can scan images for token addresses and trade them automatically.
</p>
<h3>Speed and Security</h3>
<p>
Bloom is fast. Trades execute in sub-millisecond time, so you can catch the best prices before anyone else. The infrastructure is rebuilt for stability and speed. There's anti-MEV protection to keep your trades safe from bots trying to front-run you. No downtime, even during wild market swings.
</p>
<h3>Chrome Extension and Notifications</h3>
<p>
Bloom's Chrome extension brings even more power. Get faster trades, richer customization, and instant wallet integration. Set notifications for price moves, wallet activity, or new launches. Never miss an opportunity.
</p>
<h2>How to Get Started</h2>
<ul>
<li>Open Telegram and search for Bloom Solana Bot.</li>
<li>Start the bot and create a new Solana wallet (keep your private key safe!).</li>
<li>Deposit SOL to your new wallet for trading and fees.</li>
<li>Use commands or menus to start trading, sniping, or automating your strategy.</li>
</ul>
<p>
You can start small. Test with 0.5 SOL or less. Try sniping a new token. Set up AFK mode to buy tokens under $50,000 market cap. Or just do a quick buy and sell. It's all in the chat.
</p>
<h2>Why Traders Love Bloom</h2>
<ul>
<li>It's insanely fast. Sub-millisecond detection and execution.</li>
<li>No complicated setup. Just Telegram and your wallet.</li>
<li>Copy trading, sniping, AFK automation: Bloom does it all.</li>
<li>Multi-chain, multi-wallet, all-in-one.</li>
<li>Safe. Built-in anti-MEV, instant bridging, and secure wallet management.</li>
<li>Active community. Monthly trading tournaments and referral rewards.</li>
</ul>
<p>
So, if you want to catch the next Solana moonshot or just automate your trading, Bloom is the tool. It's fast, flexible, and built for today's crypto markets.
</p>
<div align="center">
<a href="https://coinlab.saliwell.com/offer.php?offer=bloom">
<img src="https://img.shields.io/badge/-➡️_Start_trading_now! ⬅️_-245FF?style=for-the-badge&logo=solana&logoColor=white&fontSize=40" width="500">
</a>
</div>
<p>
<b>Keywords:</b> bloom, solana, bloom bot, telegram, trading
</p>
|
chainway9/blockassist-bc-untamed_quick_eel_1755116003
|
chainway9
| 2025-08-13T20:38:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T20:38:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GFIO/35bw_40_9505
|
GFIO
| 2025-08-13T20:30:38Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-13T20:29:05Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
acroth/Mistral-7B-Instruct-finetuned-CUI-4bit
|
acroth
| 2025-08-13T20:24:27Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"mistral-common",
"text-generation",
"conversational",
"base_model:acroth/Mistral-7B-Instruct-finetuned-CUI",
"base_model:quantized:acroth/Mistral-7B-Instruct-finetuned-CUI",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-13T19:47:29Z |
---
library_name: mlx
license: apache-2.0
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/fr/terms/">Privacy Policy</a>.
tags:
- mistral-common
- mlx
pipeline_tag: text-generation
base_model: acroth/Mistral-7B-Instruct-finetuned-CUI
---
# acroth/Mistral-7B-Instruct-finetuned-CUI-4bit
This model [acroth/Mistral-7B-Instruct-finetuned-CUI-4bit](https://huggingface.co/acroth/Mistral-7B-Instruct-finetuned-CUI-4bit) was
converted to MLX format from [acroth/Mistral-7B-Instruct-finetuned-CUI](https://huggingface.co/acroth/Mistral-7B-Instruct-finetuned-CUI)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("acroth/Mistral-7B-Instruct-finetuned-CUI-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1755116479
|
andr0m4da
| 2025-08-13T20:23:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T20:22:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JoelMah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-foraging_fanged_rabbit
|
JoelMah
| 2025-08-13T20:19:44Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am foraging_fanged_rabbit",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T21:37:10Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am foraging_fanged_rabbit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JoelMah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_climbing_bison
|
JoelMah
| 2025-08-13T20:18:38Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am fleecy_climbing_bison",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T21:37:47Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am fleecy_climbing_bison
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JuKumar/blockassist-bc-exotic_purring_shark_1755114693
|
JuKumar
| 2025-08-13T20:18:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic purring shark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T20:16:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic purring shark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jmartin233/ppo-LunarLander-v2
|
jmartin233
| 2025-08-13T20:18:15Z | 20 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T21:14:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -915.66 +/- 273.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the standard SB3 Hub export convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the usual "<algo>-<env>.zip" SB3 Hub naming.
checkpoint = load_from_hub(repo_id="jmartin233/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
neutrino12/tensorstax-sft-mixed-91500-sft-plan-lr2e-6-484
|
neutrino12
| 2025-08-13T20:17:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T20:14:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
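A minimal chat-style sketch, assuming this checkpoint follows the standard Qwen2 chat template and causal-LM interface:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "neutrino12/tensorstax-sft-mixed-91500-sft-plan-lr2e-6-484"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```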
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheStageAI/Elastic-DeepSeek-R1-Distill-Qwen-14B
|
TheStageAI
| 2025-08-13T20:07:21Z | 25 | 2 | null |
[
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-29T18:09:04Z |
---
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: DeepSeek-R1-Distill-Qwen-14B. Fastest and most flexible models for self-serving.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
* Provide the best models and service for self-hosting.
> Note that the actual quality degradation varies from model to model; an S model, for instance, may show as little as 0.5% degradation.

-----
## Inference
To run inference with our models, just replace the `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# Currently we require your HF token, since we use the original
# weights for some of the layers, as well as the model configuration.
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
hf_token = ''
device = torch.device("cuda")
# Create tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(
model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
token=hf_token,
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id
# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
{
"role": "system",
"content": "You are a search bot, answer on user text queries."
},
{
"role": "user",
"content": prompt
}
]
chat_prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs.to(device)
with torch.inference_mode():
generate_ids = model.generate(**inputs, max_length=500)
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
generate_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models just run these lines in your terminal:
```shell
pip install thestage
pip install 'thestage-elastic-models[nvidia]'
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token from your profile page. Set up the API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
### Quality benchmarks
| Metric/Model  | S     | M     | L     | XL    | Original | W8A8, int8 |
|---------------|-------|-------|-------|-------|----------|------------|
| arc_challenge | 49.20 | 51.00 | 50.60 | 50.90 | 50.90    | 35.30      |
| mmlu          | 73.70 | 74.30 | 74.60 | 74.80 | 74.80    | 51.50      |
| piqa          | 77.20 | 77.90 | 78.20 | 78.60 | 78.60    | 69.70      |
| winogrande    | 69.30 | 70.90 | 72.10 | 72.30 | 72.30    | 61.30      |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
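The quality numbers above come from standard public benchmarks. As a rough starting point for reproducing the `Original` column, a sketch like the following may help; it assumes EleutherAI's `lm-evaluation-harness` (`pip install lm-eval`), since the card does not state the exact harness or few-shot settings:
```python
# Sketch: reproduce the "Original" column with EleutherAI's lm-evaluation-harness.
# Assumptions: lm-eval >= 0.4 is installed; few-shot settings are not stated in this card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=deepseek-ai/DeepSeek-R1-Distill-Qwen-14B,dtype=bfloat16",
    tasks=["arc_challenge", "mmlu", "piqa", "winogrande"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)  # compare against the table above
```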
### Latency benchmarks
__100 input/300 output; tok/s:__
| GPU/Model | S   | M   | L  | XL | Original | W8A8, int8 |
|-----------|-----|-----|----|----|----------|------------|
| H100      | 118 | 105 | 95 | 77 | 39       | 123        |
| L40S      | 40  | 35  | 31 | 24 | 22       | 41         |
## Links
* __Platform__: [app.thestage.ai](https://app.thestage.ai/models)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Contact email__: [email protected]
|
TheStageAI/Elastic-Qwen2.5-14B-Instruct
|
TheStageAI
| 2025-08-13T20:06:39Z | 17 | 1 | null |
[
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-29T10:40:51Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-14B-Instruct
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: Qwen2.5-14B-Instruct. Fastest and most flexible models for self-serving.
Elastic models are produced by TheStage AI ANNA, the Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, pre-compiled and requiring no JIT
* Provide the best models and service for self-hosting
> Note that the actual quality degradation varies from model to model; an S model, for instance, may show as little as 0.5% degradation.

-----
## Inference
To run inference with our models, you just need to replace the `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# Currently we require your HF token, as we use the
# original weights for part of the layers and the
# model configuration as well
model_name = "Qwen/Qwen2.5-14B-Instruct"
hf_token = ''
device = torch.device("cuda")
# Create the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(
model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
token=hf_token,
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id
# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
{
"role": "system",
"content": "You are a search bot, answer on user text queries."
},
{
"role": "user",
"content": prompt
}
]
chat_prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs.to(device)
with torch.inference_mode():
generate_ids = model.generate(**inputs, max_length=500)
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
generate_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40S
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models, just run these lines in your terminal:
```shell
pip install thestage
pip install 'thestage-elastic-models[nvidia]'
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), log in and generate an API token from your profile page. Set the API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers, using the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA improves quantization quality on the sensitive layers.
### Quality benchmarks
| Metric/Model  | S     | M     | L     | XL    | Original | W8A8, int8 |
|---------------|-------|-------|-------|-------|----------|------------|
| arc_challenge | 57.50 | 59.60 | 61.00 | 60.70 | 60.70    | 49.90      |
| mmlu          | 78.10 | 79.10 | 79.80 | 79.80 | 79.80    | 66.40      |
| piqa          | 81.30 | 81.60 | 81.70 | 81.30 | 81.30    | 73.30      |
| winogrande    | 74.00 | 74.40 | 75.00 | 76.00 | 76.00    | 60.00      |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
### Latency benchmarks
__100 input/300 output; tok/s:__
| GPU/Model | S   | M   | L  | XL | Original | W8A8, int8 |
|-----------|-----|-----|----|----|----------|------------|
| H100      | 118 | 103 | 94 | 77 | 39       | 121        |
| L40S      | 40  | 34  | 30 | 24 | 22       | 41         |
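The throughput figures above follow the 100-input/300-output setting. As a rough way to check them on your own hardware, a timing sketch along these lines may help; it reuses `model`, `tokenizer` and `inputs` from the inference example above, and the exact measurement methodology is an assumption:
```python
# Sketch: rough decode-throughput measurement for the 100-input/300-output setting.
# Assumes `model` and `inputs` were created as in the example above.
import time
import torch

input_ids = inputs["input_ids"][:, :100]  # assumption: truncate the prompt to ~100 tokens
torch.cuda.synchronize()
start = time.perf_counter()
with torch.inference_mode():
    model.generate(input_ids=input_ids, max_new_tokens=300, min_new_tokens=300)
torch.cuda.synchronize()
print(f"{300 / (time.perf_counter() - start):.1f} tok/s")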
## Links
* __Platform__: [app.thestage.ai](https://app.thestage.ai/)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Contact email__: [email protected]
|
TheStageAI/Elastic-Qwen2.5-7B-Instruct
|
TheStageAI
| 2025-08-13T20:00:30Z | 22 | 2 | null |
[
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-22T16:29:01Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: Qwen2.5-7B-Instruct. Fastest and most flexible models for self-serving.
Elastic models are produced by TheStage AI ANNA, the Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, pre-compiled and requiring no JIT
* Provide the best models and service for self-hosting
> Note that the actual quality degradation varies from model to model; an S model, for instance, may show as little as 0.5% degradation.

-----
## Inference
To run inference with our models, you just need to replace the `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# Currently we require your HF token, as we use the
# original weights for part of the layers and the
# model configuration as well
model_name = "Qwen/Qwen2.5-7B-Instruct"
hf_token = ''
device = torch.device("cuda")
# Create the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(
model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
token=hf_token,
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id
# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
{
"role": "system",
"content": "You are a search bot, answer on user text queries."
},
{
"role": "user",
"content": prompt
}
]
chat_prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs.to(device)
with torch.inference_mode():
generate_ids = model.generate(**inputs, max_length=500)
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
generate_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40S
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models, just run these lines in your terminal:
```shell
pip install thestage
pip install 'thestage-elastic-models[nvidia]'
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), log in and generate an API token from your profile page. Set the API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers, using the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA improves quantization quality on the sensitive layers.
### Quality benchmarks
| Metric/Model  | S     | M     | L     | XL    | Original | W8A8, int8 |
|---------------|-------|-------|-------|-------|----------|------------|
| arc_challenge | 49.10 | 50.10 | 53.20 | 52.60 | 52.60    | 41.70      |
| mmlu          | 71.70 | 73.00 | 74.10 | 73.50 | 73.50    | 64.60      |
| piqa          | 77.00 | 78.20 | 78.80 | 79.50 | 79.50    | 67.10      |
| winogrande    | 66.20 | 69.10 | 71.50 | 70.60 | 70.60    | 53.10      |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
### Latency benchmarks
__100 input/300 output; tok/s:__
| GPU/Model | S   | M   | L   | XL  | Original | W8A8, int8 |
|-----------|-----|-----|-----|-----|----------|------------|
| H100      | 201 | 173 | 162 | 135 | 62       | 201        |
| L40S      | 76  | 67  | 61  | 47  | 43       | 78         |
## Links
* __Platform__: [app.thestage.ai](https://app.thestage.ai)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Contact email__: [email protected]
|
nurfarah57/Somali-Agri-LLaMAX3-8B-LoRA
|
nurfarah57
| 2025-08-13T19:58:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:LLaMAX/LLaMAX3-8B-Alpaca",
"base_model:adapter:LLaMAX/LLaMAX3-8B-Alpaca",
"region:us"
] | null | 2025-08-13T19:57:52Z |
---
base_model: LLaMAX/LLaMAX3-8B-Alpaca
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
vzhang88/output
|
vzhang88
| 2025-08-13T19:57:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-02T16:30:26Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: output
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for output
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vzhang88/output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vz237-university-of-cambridge/adaptive-ui-clean/runs/k00e9nd3)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.1
- Pytorch: 2.7.1+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4.dev0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755113079
|
rvipitkirubbe
| 2025-08-13T19:52:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T19:52:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_6400_all_37_epoch_1_layer_all
|
winnieyangwannan
| 2025-08-13T19:47:22Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T09:12:40Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nlee-208/limo_S-dsr1b_T-qwq_25
|
nlee-208
| 2025-08-13T19:45:33Z | 2 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T10:56:50Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: limo_S-dsr1b_T-qwq_25
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for limo_S-dsr1b_T-qwq_25
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nlee-208/limo_S-dsr1b_T-qwq_25", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nlee28/cross1/runs/u78r8fup)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ChimeraSec/phishing-detection-bert
|
ChimeraSec
| 2025-08-13T19:36:52Z | 0 | 0 | null |
[
"pytorch",
"bert",
"generated_from_trainer",
"phishing",
"BERT",
"text-classification",
"en",
"dataset:ealvaradob/phishing-dataset",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-08-13T13:18:34Z |
---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
- phishing
- BERT
metrics:
- accuracy
- precision
- recall
model-index:
- name: bert-finetuned-phishing
results: []
widget:
- text: https://www.verif22.com
example_title: Phishing URL
- text: Dear colleague, An important update about your email has exceeded your
storage limit. You will not be able to send or receive all of your messages.
We will close all older versions of our Mailbox as of Friday, June 12, 2023.
To activate and complete the required information click here (https://ec-ec.squarespace.com).
Account must be reactivated today to regenerate new space. Management Team
example_title: Phishing Email
- text: You have access to FREE Video Streaming in your plan. REGISTER with your email, password and
then select the monthly subscription option. https://bit.ly/3vNrU5r
example_title: Phishing SMS
- text: if(data.selectedIndex > 0){$('#hidCflag').val(data.selectedData.value);};;
var sprypassword1 = new Spry.Widget.ValidationPassword("sprypassword1");
var sprytextfield1 = new Spry.Widget.ValidationTextField("sprytextfield1", "email");
example_title: Phishing Script
- text: Hi, this model is really accurate :)
example_title: Benign message
datasets:
- ealvaradob/phishing-dataset
language:
- en
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT FINETUNED ON PHISHING DETECTION
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on a [phishing dataset](https://huggingface.co/datasets/ealvaradob/phishing-dataset),
capable of detecting phishing in its four most common forms: URLs, Emails, SMS messages and even websites.
It achieves the following results on the evaluation set:
- Loss: 0.1953
- Accuracy: 0.9717
- Precision: 0.9658
- Recall: 0.9670
- False Positive Rate: 0.0249
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why
it can use lots of publicly available data) with an automatic process to generate inputs and labels from
those texts.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters
## Motivation and Purpose
Phishing is one of the most frequent and most expensive cyber-attacks according to several security reports.
This model aims to efficiently and accurately prevent phishing attacks against individuals and organizations.
To achieve this, BERT was trained on a diverse and robust dataset containing URLs, SMS messages, emails and websites, which extends the model's detection capability beyond the usual scope and allows it to be used in various contexts.
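For quick experimentation, a minimal inference sketch along these lines should work; it assumes the checkpoint loads with the standard `transformers` text-classification pipeline, and the exact label names depend on the training setup:
```python
# Sketch: classify a message with the fine-tuned phishing detector.
# Assumption: the checkpoint works with the standard text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="ChimeraSec/phishing-detection-bert")
print(classifier("https://www.verif22.com"))               # expected: a phishing-like label
print(classifier("Hi, this model is really accurate :)"))  # expected: a benign-like label
```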
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | False Positive Rate |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:-------------------:|
| 0.1487 | 1.0 | 3866 | 0.1454 | 0.9596 | 0.9709 | 0.9320 | 0.0203 |
| 0.0805 | 2.0 | 7732 | 0.1389 | 0.9691 | 0.9663 | 0.9601 | 0.0243 |
| 0.0389 | 3.0 | 11598 | 0.1779 | 0.9683 | 0.9778 | 0.9461 | 0.0156 |
| 0.0091 | 4.0 | 15464 | 0.1953 | 0.9717 | 0.9658 | 0.9670 | 0.0249 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Civan4412/phi2-mvp-demo
|
Civan4412
| 2025-08-13T19:31:00Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T19:31:00Z |
---
license: apache-2.0
---
|
CodeIsAbstract/language_parser-Q8_0-GGUF
|
CodeIsAbstract
| 2025-08-13T19:23:55Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:CodeIsAbstract/language_parser",
"base_model:quantized:CodeIsAbstract/language_parser",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T19:23:42Z |
---
base_model: CodeIsAbstract/language_parser
tags:
- llama-cpp
- gguf-my-repo
---
# CodeIsAbstract/language_parser-Q8_0-GGUF
This model was converted to GGUF format from [`CodeIsAbstract/language_parser`](https://huggingface.co/CodeIsAbstract/language_parser) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CodeIsAbstract/language_parser) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CodeIsAbstract/language_parser-Q8_0-GGUF --hf-file language_parser-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CodeIsAbstract/language_parser-Q8_0-GGUF --hf-file language_parser-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CodeIsAbstract/language_parser-Q8_0-GGUF --hf-file language_parser-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CodeIsAbstract/language_parser-Q8_0-GGUF --hf-file language_parser-q8_0.gguf -c 2048
```
|
Masabanees619/gemma-2-2b_it_adaptor_jailbreak
|
Masabanees619
| 2025-08-13T19:21:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T19:21:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chuporceeta/q-FrozenLake-v1-4x4-noSlippery
|
Chuporceeta
| 2025-08-13T19:20:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-13T19:19:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: Gymnasium-style environment API

# load_from_hub is the helper utility from the Hugging Face Deep RL course
model = load_from_hub(repo_id="Chuporceeta/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
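Once loaded, the agent simply acts greedily with respect to its Q-table. A short rollout sketch, assuming the course's pickle stores the table under a `"qtable"` key (the Deep RL course convention) and a Gymnasium-style step API:
```python
# Sketch: greedy rollout of the tabular Q-learning agent.
# Assumption: model["qtable"] holds the learned Q-table.
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
print("episode reward:", reward)
```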
|
MamiVenkat/modernbert-llm-router
|
MamiVenkat
| 2025-08-13T19:17:48Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-09T15:22:02Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: modernbert-llm-router
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-llm-router
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
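As a quick smoke test, a usage sketch such as the following should load the router; it assumes the standard `transformers` text-classification pipeline, and the routing label names come from the undocumented training dataset:
```python
# Sketch: run the fine-tuned ModernBERT router on a prompt.
# Assumption: standard text-classification pipeline; label names are undocumented.
from transformers import pipeline

router = pipeline("text-classification", model="MamiVenkat/modernbert-llm-router")
print(router("Write a SQL query that joins two tables on a shared key."))
```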
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.0122 | 1.0 | 380 | 0.0000 | 1.0 |
| 0.0 | 2.0 | 760 | 0.0000 | 1.0 |
| 0.0022 | 3.0 | 1140 | 0.0000 | 1.0 |
| 0.0 | 4.0 | 1520 | 0.0000 | 1.0 |
| 0.0 | 5.0 | 1900 | 0.0000 | 1.0 |
| 0.0 | 6.0 | 2280 | 0.0007 | 1.0 |
| 0.0 | 7.0 | 2660 | 0.0000 | 1.0 |
| 0.0 | 8.0 | 3040 | 0.0000 | 1.0 |
| 0.0 | 9.0 | 3420 | 0.0000 | 1.0 |
| 0.0 | 10.0 | 3800 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Ryukijano/gemma-groot
|
Ryukijano
| 2025-08-13T19:16:28Z | 13 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"vla",
"imitation-learning",
"diffusion-policy",
"gemma-3",
"siglip",
"scaledp",
"multimodal",
"en",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-07-27T12:15:57Z |
---
license: apache-2.0
language:
- en
tags:
- robotics
- vla
- lerobot
- imitation-learning
- diffusion-policy
- gemma-3
- siglip
- scaledp
- multimodal
---
# Gemma-Le: SigLIP + Gemma 3 + ScaleDP (LeRobot VLA Policy)
Gemma-Le is a compact Vision-Language-Action policy for robotic manipulation built on top of LeRobot.
It replaces NV Eagle with standard Hugging Face components:
- SigLIP `google/siglip-so400m-patch14-384` for vision
- Gemma 3 `google/gemma-3-4b-it` for language/reasoning (with LoRA PEFT)
- ScaleDP (Scalable Diffusion Transformer) as the action head
This repo hosts exported checkpoints trained on LeRobot-format datasets (e.g., `robot_sim.PickNPlace`).
## Architecture
- Vision: SigLIP ViT encoder (384px, patch14), pooled embedding
- Text: Gemma 3 4B-IT, mean-pooled hidden states
- LoRA: rank=16 on `[q_proj, k_proj, v_proj, o_proj]`
- Fusion: MLP projects [vision || text] -> `conditioning_dim=768` (see the sketch after this list)
- Action head: ScaleDP Transformer (layers=12, d_model=320, heads=8, ff=1280) predicts diffusion noise
- Temporal context: `chunk_size=8`; diffusion steps `num_diffusion_steps=50`
- Mixed precision: AMP auto-selects bf16/fp16; bf16 uses no GradScaler
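To make the fusion step concrete, here is a minimal sketch of the conditioning path described above; the hidden sizes and MLP depth are illustrative assumptions, not the repository's actual module:
```python
# Sketch of the [vision || text] -> conditioning fusion (hypothetical shapes).
# Assumptions: SigLIP pooled dim 1152, Gemma 3 4B hidden dim 2560, 2-layer MLP.
import torch
import torch.nn as nn

vision_dim, text_dim, conditioning_dim = 1152, 2560, 768

fuse = nn.Sequential(
    nn.Linear(vision_dim + text_dim, conditioning_dim),
    nn.GELU(),
    nn.Linear(conditioning_dim, conditioning_dim),
)
vision_emb = torch.randn(1, vision_dim)  # pooled SigLIP image embedding
text_emb = torch.randn(1, text_dim)      # mean-pooled Gemma 3 hidden states
cond = fuse(torch.cat([vision_emb, text_emb], dim=-1))
print(cond.shape)  # (1, 768) conditioning vector for the ScaleDP action head
```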
## Default config (excerpt)
```yaml
vision_model_id: google/siglip-so400m-patch14-384
text_model_id: google/gemma-3-4b-it
image_features: ["observation.images.ego_view"]
action_feature: "action"
chunk_size: 8
num_diffusion_steps: 50
conditioning_dim: 768
plan_update_interval: 10
scaledp_num_layers: 12
scaledp_dim_model: 320
scaledp_num_heads: 8
scaledp_dim_feedforward: 1280
use_lora: true
lora_rank: 16
lora_target_modules: ["q_proj","k_proj","v_proj","o_proj"]
optimizer_lr: 1e-4
optimizer_weight_decay: 1e-6
```
## Usage (with this repo’s LeRobot fork)
Install deps and set `PYTHONPATH` to include `lerobot` in this repository.
Evaluation-style load:
```python
import torch
from lerobot.common.policies.gemma_le.modeling_gemma_le import GemmaLePolicy
from huggingface_hub import snapshot_download
ckpt_dir = snapshot_download(repo_id="Ryukijano/gemma-groot", revision="main")
policy = GemmaLePolicy.from_pretrained(ckpt_dir, torch_dtype=torch.bfloat16)
policy.eval()
```
Training entrypoint:
```bash
python lerobot/lerobot/scripts/train.py \
--policy.type gemma_le \
--dataset.repo_id local/robot_sim.PickNPlace \
--dataset.root /path/to/robot_sim.PickNPlace \
--dataset.episodes "[0,1,2,3,4]" \
--batch_size 3 \
--steps 200000 \
--log_freq 100 \
--save_freq 5000 \
--policy.vision_model_id google/siglip-so400m-patch14-384 \
--policy.text_model_id google/gemma-3-4b-it \
--policy.use_amp true \
--progress_bar true \
--push_to_hub true \
--push_repo_id Ryukijano/gemma-groot \
--push_branch main \
--push_exist_ok true
```
### Slurm (3× L40)
See `submit_job.sh`. Ensure caches on scratch and set:
- `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`
- `HF_HOME`, `HUGGINGFACE_HUB_CACHE`, `TRANSFORMERS_CACHE` to scratch
## Checkpoints
- Latest runs uploaded under `runs/<date>/<run>/<step>` in this repo.
- Example: `runs/2025-08-12/13-06-07_gemma_le/020000/`.
## Data
- LeRobotDataset (parquet + mp4 + metadata). Single RGB view: `observation.images.ego_view`. Targets: `action`.
- Timestamp tolerance is auto-relaxed to `max(tolerance_s, 1/fps + 1e-4)` during training for robust decoding.
## Notes
- Base model access: `google/gemma-3-4b-it` may require TOS.
- Intended for imitation learning; ThinkAct-style planning can be layered on top.
## Citations
- LeRobot: https://github.com/huggingface/lerobot
- Gemma 3: https://ai.google.dev/gemma
- SigLIP: https://huggingface.co/timm/ViT-SigLIP
- Diffusion Policy: https://arxiv.org/abs/2303.04137
|
zju-community/matchanything_eloftr
|
zju-community
| 2025-08-13T19:16:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"efficientloftr",
"keypoint-matching",
"arxiv:2501.07556",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T17:49:32Z |
---
library_name: transformers
tags:
- keypoint-matching
license: apache-2.0
---
# MatchAnything-ELOFTR
The MatchAnything-ELOFTR model was proposed in **"MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training"** by Xingyi He, Hao Yu, Sida Peng, Dongli Tan, Zehong Shen, Hujun Bao, and Xiaowei Zhou from Zhejiang University and Shandong University.
This model is a version of **ELOFTR** enhanced by the MatchAnything pre-training framework. This framework enables the model to achieve universal cross-modality image matching capabilities, overcoming the significant challenge of matching images with drastic appearance changes due to different imaging principles (e.g., thermal vs. visible, CT vs. MRI). This is achieved by pre-training on a massive, diverse dataset synthesized with cross-modal stimulus signals, teaching the model to recognize fundamental, appearance-insensitive structures.
The abstract from the paper is the following:
"Image matching, which aims to identify corresponding pixel locations between images, is crucial in a wide range of scientific disciplines, aiding in image registration, fusion, and analysis. In recent years, deep learning-based image matching algorithms have dramatically outperformed humans in rapidly and accurately finding large amounts of correspondences. However, when dealing with images captured under different imaging modalities that result in significant appearance changes, the performance of these algorithms often deteriorates due to the scarcity of annotated cross-modal training data. This limitation hinders applications in various fields that rely on multiple image modalities to obtain complementary information. To address this challenge, we propose a large-scale pre-training framework that utilizes synthetic cross-modal training signals, incorporating diverse data from various sources, to train models to recognize and match fundamental structures across images. This capability is transferable to real-world, unseen cross-modality image matching tasks. Our key finding is that the matching model trained with our framework achieves remarkable generalizability across more than eight unseen cross-modality registration tasks using the same network weight, substantially outperforming existing methods, whether designed for generalization or tailored for specific tasks. This advancement significantly enhances the applicability of image matching technologies across various scientific disciplines and paves the way for new applications in multi-modality human and artificial intelligence (AI) analysis and beyond."

This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille).
The original code for the MatchAnything project can be found [here](https://github.com/zju3dv/MatchAnything).
## Model Details
### Model Description
**MatchAnything-ELOFTR** is a semi-dense feature matcher that has been pre-trained using the novel MatchAnything framework to give it powerful generalization capabilities for cross-modality tasks. The core innovations stem from the training framework, not the model architecture itself, which remains that of ELOFTR.
The key innovations of the MatchAnything framework include:
- A **multi-resource dataset mixture training engine** that combines various data sources to ensure diversity. This includes multi-view images with 3D reconstructions, large-scale unlabelled video sequences, and vast single-image datasets.
- A **cross-modality stimulus data generator** that uses image generation techniques (like style transfer and depth estimation) to create synthetic, pixel-aligned cross-modal training pairs (e.g., visible-to-thermal, visible-to-depth).
- This process trains the model to learn **appearance-insensitive, fundamental image structures**, allowing a single set of model weights to perform robustly on over eight different and completely unseen cross-modal matching tasks.
- **Developed by:** ZJU3DV at Zhejiang University & Shandong University
- **Model type:** Image Matching
- **License:** Apache 2.0
### Model Sources
- **Repository:** https://github.com/zju3dv/MatchAnything
- **Project page:** https://zju3dv.github.io/MatchAnything/
- **Paper:** https://huggingface.co/papers/2501.07556
## Uses
MatchAnything-ELOFTR is designed for a vast array of applications requiring robust image matching, especially between different sensor types or imaging modalities. Its direct uses include:
- **Medical Image Analysis**: Aligning CT-MR, PET-MR, and SPECT-MR scans.
- **Histopathology**: Registering tissue images with different stains (e.g., H&E and IHC).
- **Remote Sensing**: Matching satellite/aerial images from different sensors (e.g., Visible-SAR, Thermal-Visible).
- **Autonomous Systems**: Enhancing localization and navigation for UAVs and autonomous vehicles by matching thermal or visible images to vectorized maps.
- **Single-Modality Tasks**: The model also retains strong performance on standard single-modality matching, such as retina image registration.
### Direct Use
Here is a quick example of using the model for matching a pair of images.
```python
from transformers import AutoImageProcessor, AutoModel
from transformers.image_utils import load_image
import torch
# Load a pair of images
image1 = load_image("https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg")
image2 = load_image("https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg")
images = [image1, image2]
# Load the processor and model from the Hugging Face Hub
processor = AutoImageProcessor.from_pretrained("zju-community/matchanything_eloftr")
model = AutoModel.from_pretrained("zju-community/matchanything_eloftr")
# Process images and get model outputs
inputs = processor(images, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
```
You can use the `post_process_keypoint_matching` method from the `EfficientLoFTRImageProcessor` to get the keypoints and matches in a readable format:
```python
image_sizes = [[(image.height, image.width) for image in images]]
outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)
for i, output in enumerate(outputs):
print("For the image pair", i)
for keypoint0, keypoint1, matching_score in zip(
output["keypoints0"], output["keypoints1"], output["matching_scores"]
):
print(
f"Keypoint at coordinate {keypoint0.numpy()} in the first image matches with keypoint at coordinate {keypoint1.numpy()} in the second image with a score of {matching_score}."
)
```
You can also visualize the matches between the images:
```python
plot_images = processor.visualize_keypoint_matching(images, outputs)
```

## Training Details
MatchAnything-ELOFTR is trained end-to-end using the large-scale, cross-modality pre-training framework.
### Training Data
The model was not trained on a single dataset but on a massive collection generated by the Multi-Resources Data Mixture Training framework, totaling approximately 800 million image pairs. This framework leverages:
- **Multi-view images with geometry**: Datasets like MegaDepth, ScanNet++, and BlendedMVS provide realistic viewpoint changes with ground-truth depth.
- **Video sequences**: The DL3DV-10k dataset is used, with pseudo ground-truth matches generated between distant frames via a novel coarse-to-fine strategy.
- **Single-image datasets**: Large datasets like GoogleLandmark and SA-1B are used with synthetic homography warping to maximize data diversity.
- **Cross-modality stimulus data**: Training pairs are augmented by generating synthetic modalities (thermal, nighttime, depth maps) from visible-light images using models like CycleGAN and DepthAnything, encouraging the matcher to learn appearance-invariant features.
### Training Procedure
#### Training Hyperparameters
- Optimizer: AdamW
- Initial learning rate: 8×10⁻³
- Batch size: 64
- Training hardware: 16 NVIDIA A100-80G GPUs
- Training time: approximately 4.3 days for the ELOFTR variant
#### Speeds, Sizes, Times
Since the MatchAnything framework only changes the training process and weights, the model's architecture and running time are identical to the original ELOFTR model.
- Speed: for a 640×480 resolution image pair on a single NVIDIA RTX 3090 GPU, the model takes 40 ms to process.
## Citation
**BibTeX:**
```bibtex
@article{he2025matchanything,
title={MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training},
author={Xingyi He and Hao Yu and Sida Peng and Dongli Tan and Zehong Shen and Hujun Bao and Xiaowei Zhou},
year={2025},
eprint={2501.07556},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Steven Bucaille](https://github.com/sbucaille)
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_7_provers
|
neural-interactive-proofs
| 2025-08-13T19:15:49Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T19:14:15Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_7_provers
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_7_provers
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_7_provers", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-13_19-59-00_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_7_provers_group)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
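The card does not publish the preference data or training script. As a rough orientation only, a minimal TRL DPO run looks like the sketch below; the dataset name, output directory, and `beta` value are placeholder assumptions, not the configuration used for this model:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-32B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Placeholder: any preference dataset with "prompt"/"chosen"/"rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="dpo-output", beta=0.1)  # beta controls the KL penalty strength
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```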
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755110334
|
indoempatnol
| 2025-08-13T19:05:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T19:05:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Trending-policia-mexicana-video-viral/New.full.videos.policia.mexicana.Viral.Video.Official.Tutorial
|
Trending-policia-mexicana-video-viral
| 2025-08-13T18:54:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-13T13:58:30Z |
|
AAAAnsah/Qwen2.5-0.5B-Instruct_BMA_theta_0.1
|
AAAAnsah
| 2025-08-13T18:49:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T18:49:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-Clip-Gul-Chahat-Viral-video-original/New.full.videos.Gul.Chahat.Viral.Video.Official.Tutorial
|
New-Clip-Gul-Chahat-Viral-video-original
| 2025-08-13T18:46:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-13T10:11:34Z |
|
Horse-and-girl-viral-video-original/New.full.videos.Horse.and.girl.Viral.Video.Official.Tutorial
|
Horse-and-girl-viral-video-original
| 2025-08-13T18:46:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-13T10:10:23Z |
|