Dataset schema:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-24 00:43:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-24 00:37:17 |
| card | string | length 11 to 1.01M |
rahulsamant37/tiny-chatbot-dpo
|
rahulsamant37
| 2025-06-16T11:56:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:54:25Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: tiny-chatbot-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for tiny-chatbot-dpo
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rahulsamant37/tiny-chatbot-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
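For reference, a minimal DPO training sketch with TRL might look roughly like the following; the preference dataset, hyperparameters, and output path are illustrative placeholders, not the settings actually used for this model:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Base model named in this card; the preference dataset below is a public
# placeholder -- the card does not state which dataset was actually used
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="tiny-chatbot-dpo")
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```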
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mercury99/learn_hf_food_not_food_text_classifier-distilbert-base-uncased
|
mercury99
| 2025-06-16T11:54:08Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-04T19:13:00Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_hf_food_not_food_text_classifier-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_hf_food_not_food_text_classifier-distilbert-base-uncased
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
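Although the card includes no usage code, a minimal inference sketch with the `transformers` pipeline would look roughly like this (the example texts are illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mercury99/learn_hf_food_not_food_text_classifier-distilbert-base-uncased",
)
# Two illustrative inputs: one food-related, one not
print(classifier([
    "A steaming bowl of ramen with a soft-boiled egg.",
    "The committee adjourned until Thursday.",
]))
```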
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4993 | 1.0 | 7 | 0.1354 | 0.98 |
| 0.066 | 2.0 | 14 | 0.0082 | 1.0 |
| 0.0056 | 3.0 | 21 | 0.0023 | 1.0 |
| 0.002 | 4.0 | 28 | 0.0012 | 1.0 |
| 0.0012 | 5.0 | 35 | 0.0008 | 1.0 |
| 0.0008 | 6.0 | 42 | 0.0006 | 1.0 |
| 0.0007 | 7.0 | 49 | 0.0005 | 1.0 |
| 0.0006 | 8.0 | 56 | 0.0005 | 1.0 |
| 0.0006 | 9.0 | 63 | 0.0005 | 1.0 |
| 0.0006 | 10.0 | 70 | 0.0005 | 1.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kicoi/ppo-LunarLander-v2
|
kicoi
| 2025-06-16T11:53:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-16T11:14:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.43 +/- 24.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `<algo>-<env>.zip` convention of the SB3 course templates):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed
checkpoint = load_from_hub(
    repo_id="kicoi/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF
|
Triangle104
| 2025-06-16T11:53:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:43:31Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
"**Risk of Sensitive or Controversial Outputs**": This model's safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
"**Not Suitable for All Audiences**": Due to limited content filtering, the model's
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
"**Legal and Ethical Responsibilities**": Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
"**Research and Experimental Use**": It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
"**Monitoring and Review Recommendations**": Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
"**No Default Safety Guarantees**": Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; perhaps it can achieve the performance of all of them. This is just a test: merging different manifestations of models of the same type is another possibility worth exploring.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q6_K-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q6_k.gguf -c 2048
```
|
Vlad100/TerraIncognita-24B-Q6_K-GGUF
|
Vlad100
| 2025-06-16T11:51:53Z | 0 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"LatitudeGames/Harbinger-24B",
"ReadyArt/Broken-Tutu-24B-Unslop-v2.0",
"llama-cpp",
"gguf-my-repo",
"base_model:Aleteian/TerraIncognita-24B",
"base_model:quantized:Aleteian/TerraIncognita-24B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:50:26Z |
---
base_model: Aleteian/TerraIncognita-24B
tags:
- merge
- mergekit
- lazymergekit
- LatitudeGames/Harbinger-24B
- ReadyArt/Broken-Tutu-24B-Unslop-v2.0
- llama-cpp
- gguf-my-repo
---
# Vlad100/TerraIncognita-24B-Q6_K-GGUF
This model was converted to GGUF format from [`Aleteian/TerraIncognita-24B`](https://huggingface.co/Aleteian/TerraIncognita-24B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Aleteian/TerraIncognita-24B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Vlad100/TerraIncognita-24B-Q6_K-GGUF --hf-file terraincognita-24b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Vlad100/TerraIncognita-24B-Q6_K-GGUF --hf-file terraincognita-24b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Vlad100/TerraIncognita-24B-Q6_K-GGUF --hf-file terraincognita-24b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Vlad100/TerraIncognita-24B-Q6_K-GGUF --hf-file terraincognita-24b-q6_k.gguf -c 2048
```
|
Somalitts/6aad
|
Somalitts
| 2025-06-16T11:51:29Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-06-16T10:36:16Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: 6aad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6aad
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1322
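The card ships no inference example; a minimal hedged sketch for SpeechT5 text-to-speech follows. The zero speaker embedding is a crude placeholder: a real 512-dimensional x-vector (e.g. from `Matthijs/cmu-arctic-xvectors`) gives far better results.
```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("Somalitts/6aad")
model = SpeechT5ForTextToSpeech.from_pretrained("Somalitts/6aad")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello from the fine-tuned model.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```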
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3844 | 0.0142 | 100 | 0.3050 |
| 0.2729 | 0.0284 | 200 | 0.2091 |
| 0.2174 | 0.0427 | 300 | 0.1598 |
| 0.1954 | 0.0569 | 400 | 0.1394 |
| 0.1895 | 0.0711 | 500 | 0.1322 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nulook-india-18g/wATCH.nulook.india.viral.video.original
|
nulook-india-18g
| 2025-06-16T11:49:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T11:49:15Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
LaaP-ai/donut-base-invoice-v1.07
|
LaaP-ai
| 2025-06-16T11:49:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-16T11:49:11Z |
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.07
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
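No usage code is provided; a minimal Donut inference sketch might look like this. The task prompt token and input filename are assumptions; check the repo's tokenizer for the actual prompt.
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("LaaP-ai/donut-base-invoice-v1.07")
model = VisionEncoderDecoderModel.from_pretrained("LaaP-ai/donut-base-invoice-v1.07")

image = Image.open("invoice.png").convert("RGB")  # illustrative input file
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # hypothetical start token; the real one may differ
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
    )
print(processor.batch_decode(outputs)[0])
```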
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
CreitinGameplays/Mistral-Nemo-12B-R1-v0.1
|
CreitinGameplays
| 2025-06-16T11:49:24Z | 52 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70Bmistral",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-21T19:17:13Z |
---
license: mit
datasets:
- CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70Bmistral
language:
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
pipeline_tag: text-generation
library_name: transformers
new_version: CreitinGameplays/Mistral-Nemo-12B-R1-v0.2
---
## Mistral Nemo 12B R1

Took **96 hours** to finetune on **2x Nvidia RTX A6000** with the following settings:
- Batch size: 3
- Gradient accumulation steps: 1
- Epochs: 1
- Learning rate: 1e-4
- Warmup ratio: 0.1
Run the model:
```python
import torch
from transformers import pipeline
model_id = "CreitinGameplays/Mistral-Nemo-12B-R1-v0.1"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "How many r's are in strawberry?"}
]
outputs = pipe(
messages,
temperature=0.6,
top_p=1.0,
top_k=50,
repetition_penalty=1.1,
max_new_tokens=2048
)
print(outputs[0]["generated_text"][-1])
```
Recommended system prompt:
```
You are an AI focused on providing systematic, well-reasoned responses. Response Structure: - Format: <think>{reasoning}</think>{answer} - Process: Think first, then answer.
```
Response example:
```
<s>[INST]You are an AI focused on providing systematic, well-reasoned responses. Response Structure: - Format: <think>{reasoning}</think>{answer} - Process: Think first, then answer.
what is a nvidia tesla t4[/INST]<think>Okay, so I need to figure out what a NVIDIA Tesla T4 is. I remember that NVIDIA makes GPUs for various purposes, but I'm not exactly sure about the specifics of the Tesla models. Let me start by recalling what I know about GPUs in general. They're used for graphics rendering, right? But I also heard that they can be used for other tasks like machine learning and deep learning.
I think the Tesla series from NVIDIA is specifically designed for these non-gaming applications. So, the Tesla T4 must be one of those. I've heard terms like Tensor cores before, which I believe are specialized hardware units inside some GPUs for accelerating certain operations. Maybe the T4 has Tensor cores?
Let me try to break it down. The name "Tesla" suggests it's part of their data center or professional line. The number "4" probably indicates it's the fourth generation or something similar. I should check if there are previous versions like T1, T2, etc., to see where it stands in the lineup.
I remember that the Tesla V100 was a high-end model with a lot of memory and performance. The T4 might be a step below that or perhaps a different type altogether. Maybe it's more affordable or targeted at specific use cases.
I think the T4 is known for its support for mixed precision training, which allows using both floating-point and half-precision numbers together. This could make training neural networks faster and more efficient. It might also have features like NVLink for connecting multiple GPUs, which would be useful for distributed computing.
Looking at specs, I believe the T4 has 320 Tensor cores. Each Tensor core can perform matrix multiplications much faster than regular CUDA cores. So, it's optimized for machine learning tasks. The amount of memory is probably around 16 GB, which is less than the V100's 16 GB or 32 GB, but still sufficient for many applications.
I also recall that the T4 supports NVIDIA's GPU virtualization technology, which allows multiple VMs to share a single GPU. That would be great for cloud providers who want to maximize resource utilization.
In terms of performance, the T4 is likely slower than the V100 but faster than older models like the P400. It might be used in applications where cost is a factor, but performance isn't as critical as in high-end research or enterprise settings.
I should also consider the form factors. I think the T4 comes in PCIe cards, maybe with different sizes depending on the applicationโlike full-height or low-profile. It might also be available in server boards or as part of pre-built systems.
Another thing is power consumption. The T4 is probably more energy-efficient than earlier models, which is important for data centers looking to reduce costs and environmental impact.
I wonder how it compares to AMD's offerings. I know AMD has GPUs like the Radeon Instinct, but I'm not sure how they stack up against the Tesla T4 in terms of performance and features.
In summary, the Tesla T4 is a professional-grade GPU designed for data centers and machine learning tasks. It features Tensor cores for acceleration, supports mixed precision, and has enough memory and connectivity options to handle modern workloads efficiently.
</think>
The NVIDIA Tesla T4 is a professional-grade GPU designed for data centers and machine learning tasks. Key features include:
1. **Tensor Cores**: Equipped with 320 Tensor cores, each optimizing matrix multiplications for accelerated machine learning tasks.
2. **Mixed Precision Training**: Supports simultaneous use of floating-point and half-precision numbers, enhancing efficiency in neural network training.
3. **Memory and Connectivity**: Offers 16 GB of memory and NVLink support for connecting multiple GPUs, ideal for distributed computing.
4. **GPU Virtualization**: Enables multiple virtual machines to share a single GPU, maximizing resource utilization in cloud environments.
5. **Performance and Power**: Slower than the V100 but faster than older models, with improved energy efficiency suitable for cost-conscious data centers.
6. **Form Factors**: Available in PCIe cards with varying sizes, including full-height and low-profile options, along with server board and pre-built system configurations.
7. **Comparison with AMD**: While the T4 competes with AMD's Radeon Instinct GPUs, it offers unique features tailored to data center and machine learning needs.
In conclusion, the Tesla T4 is a versatile solution for data centers seeking balance between performance, efficiency, and affordability. Its features make it suitable for a range of applications, from cloud services to machine learning projects.</s>
```
**Note**: The model was mainly finetuned on an English dataset, so it may not perform well in other languages; it may also enter an infinite response loop after the reasoning step.
|
dllmpg/ppo
|
dllmpg
| 2025-06-16T11:48:43Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-16T11:48:05Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -203.49 +/- 120.60
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'notebook',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.4,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'dllmpg/ppo',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
TV-nulook-india-viral-videos/Original.Full.Clip.nulook.india.Viral.Video.Leaks.Official
|
TV-nulook-india-viral-videos
| 2025-06-16T11:48:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-16T11:48:32Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐ฆ๐ถ๐ด๐ป ๐จ๐ฝ ๐๐ผ ๐๐ช๐ก๐ก ๐ช๐ฎ๐๐ฐ๐ต ๐๐๐๐๐คโค๏ธโค๏ธ)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">๐ด โคโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐ฅ๐ข๐ง๐ค)</a>
|
nnilayy/deap-arousal-binary-classification-no-wd-Kfold-3
|
nnilayy
| 2025-06-16T11:48:20Z | 4 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-27T04:34:44Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Xena18284/Xena
|
Xena18284
| 2025-06-16T11:46:56Z | 43 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"flux-diffusion",
"nude",
"art",
"Racing",
"SportsCar",
"Speed",
"MotorSport",
"CarDesign",
"F1",
"AutoDesign",
"Futuristic",
"Cyberpunk",
"AbstractArt",
"Neon",
"SciFi",
"NudeArt",
"BodyArt",
"Sensual",
"StylizedPortrait",
"CharacterDesign",
"FantasyCharacter",
"Cinematic",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"model-index",
"region:us"
] |
text-to-image
| 2025-01-30T17:16:45Z |
---
tags:
- text-to-image
- lora
- diffusers
- flux-diffusion
- nude
- art
- Racing
- SportsCar
- Speed
- MotorSport
- CarDesign
- F1
- AutoDesign
- Futuristic
- Cyberpunk
- AbstractArt
- Neon
- SciFi
- NudeArt
- BodyArt
- Sensual
- StylizedPortrait
- CharacterDesign
- FantasyCharacter
- Cinematic
inference: true
widget:
- text: a red Nissan GTR R35 in a rainy city scene
- text: a futuristic neon-lit city with cars flying in the background
- text: a cyberpunk-inspired motorcycle speeding through a glowing tunnel
- text: a hyper-realistic human figure using nudify_xl_lite
- text: a stylized doll-like character using edgBondDollLikenessv1
- text: Erotic style with beautiful model, luxurious, rich, full make up, full accessories, luxury clothing, painted nails, full body shot, posing expression, short strapless dress, very high platform sharp thin heels, glamours, bimbo, modern style scene, walking in a modern style Miami downtown beautiful frd_woman, perfect frd_face, perfect frd_body, large breasts using edgBondDollLikenessv1, nudify_xl_lite
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: a {subject} in a {scene}
pipeline_tag: text-to-image
library_name: diffusers
metrics:
- FID
license: creativeml-openrail-m
model-index:
- name: Xena
results:
- task:
type: text-to-image
dataset:
name: Synthetic Dataset
type: image-generation
metrics:
- type: FID
value: 0
extra:
inference_api: true
---
# Xena
<Gallery />
## Model description
Xena is a model based on the FLUX 1.0 diffusion model, fine-tuned for creating high-quality, realistic, and futuristic automotive and cyberpunk-style images. It incorporates the Midjourney FLUX LoRA for enhanced detail and flexibility when generating artistic and hyper-realistic outputs.
The model excels in creating vivid scenes with cars, motorcycles, or urban landscapes in neon-lit or rainy settings. You can combine this LoRA with others for more creative results.
## Download model
Weights for this model are available in Safetensors format.
[Download them here](#).
## How to Use
```python
from diffusers import DiffusionPipeline
import torch

# This repo hosts LoRA weights for FLUX.1-dev (see base_model above), so load
# the FLUX base pipeline first and attach the LoRAs to it
# (StableDiffusionPipeline cannot load a FLUX checkpoint)
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load multiple LoRA weights under distinct adapter names (names are illustrative)
pipe.load_lora_weights("Xena18284/Xena", weight_name="aidmaMJ61-FLUX-v05.safetensors", adapter_name="midjourney")  # Midjourney to Flux
pipe.load_lora_weights("Xena18284/Xena", weight_name="racing_car_style_v1.safetensors", adapter_name="racing")  # Racing Car Style
pipe.load_lora_weights("Xena18284/Xena", weight_name="nudify_xl_lite.safetensors", adapter_name="nudify")  # Nudify XL Lite
pipe.load_lora_weights("Xena18284/Xena", weight_name="edgBondDollLikenessv1.safetensors", adapter_name="doll")  # Edg Bond Doll Likeness
pipe.load_lora_weights("Xena18284/Xena", weight_name="Porsche_918.safetensors", adapter_name="porsche")  # Porsche 918

# Activate the adapters you want for a given generation
pipe.set_adapters(["midjourney", "racing"])

# Generate an image
prompt = "A futuristic racing car on a neon-lit street, ultra-detailed, cyberpunk"
image = pipe(prompt).images[0]
image.save("output.png")

# Example usage of Nudify XL Lite
pipe.set_adapters(["nudify"])
prompt_nudify = "A hyper-realistic human figure, cinematic lighting"
image_nudify = pipe(prompt_nudify).images[0]
image_nudify.save("nudify_output.png")

# Example usage of Edg Bond Doll Likeness
pipe.set_adapters(["doll"])
prompt_doll = "A stylized doll-like character with intricate details, soft lighting"
image_doll = pipe(prompt_doll).images[0]
image_doll.save("doll_output.png")
```
|
sdiazlor/modernbert-embed-base-crossencoder-human-rights
|
sdiazlor
| 2025-06-16T11:46:55Z | 46 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"cross-encoder",
"sentence-transformers",
"text-ranking",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-ranking
| 2025-01-19T21:06:10Z |
---
library_name: transformers
tags:
- cross-encoder
- sentence-transformers
pipeline_tag: text-ranking
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
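Pending author-provided code, here is a minimal sketch based on the repo's `cross-encoder`/`sentence-transformers` tags; the query-passage pair is illustrative:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("sdiazlor/modernbert-embed-base-crossencoder-human-rights")
scores = model.predict([
    ("What does international law say about the right to education?",
     "Everyone has the right to education, which shall be free at the elementary stage."),
])
print(scores)  # higher score = more relevant passage
```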
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.15_0.25_epoch1
|
MinaMila
| 2025-06-16T11:44:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:42:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
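Pending author-provided code, a minimal sketch based on the repo tags (the prompt is illustrative; `trust_remote_code=True` is needed because the repo carries custom Phi-3 code, per its `custom_code` tag):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.15_0.25_epoch1",
    trust_remote_code=True,
)
print(generator("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```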
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
23ikram/llama3b
|
23ikram
| 2025-06-16T11:43:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:42:34Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 23ikram
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
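A minimal loading sketch with Unsloth (the parameters are illustrative, not the training settings):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="23ikram/llama3b",
    max_seq_length=2048,  # illustrative context length
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```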
|
Songyao86/ppo-CartPole-v1
|
Songyao86
| 2025-06-16T11:43:08Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-16T11:43:01Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 214.20 +/- 93.86
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'notebook',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Songyao86/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Bartosz-Rudnik/question_2_dpo
|
Bartosz-Rudnik
| 2025-06-16T11:40:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:31:40Z |
---
base_model: unsloth/mistral-small-24b-instruct-2501-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Bartosz-Rudnik
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-small-24b-instruct-2501-unsloth-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.5_0.15_epoch1
|
MinaMila
| 2025-06-16T11:39:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:37:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rahulsamant37/sft-tiny-chatbot
|
rahulsamant37
| 2025-06-16T11:38:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T11:32:25Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: sft-tiny-chatbot
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rahulsamant37/sft-tiny-chatbot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
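For reference, a minimal SFT training sketch with TRL might look roughly like the following; the dataset, hyperparameters, and output path are illustrative placeholders, not the settings actually used for this model:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# The dataset below is a public placeholder -- the card does not state
# which dataset was actually used
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(output_dir="sft-tiny-chatbot")
trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```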
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bysq/autism-assistant-qwen2
|
bysq
| 2025-06-16T11:38:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"autism",
"assistant",
"chinese",
"unsloth",
"merged-model",
"zh",
"dataset:custom",
"base_model:unsloth/Qwen2.5-1.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-1.5B-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-16T08:57:19Z |
---
language:
- zh
license: apache-2.0
tags:
- autism
- assistant
- chinese
- qwen2
- unsloth
- merged-model
base_model: unsloth/Qwen2.5-1.5B-bnb-4bit
datasets:
- custom
library_name: transformers
---
# Autism Language Assistant (Complete Version)
This is an AI assistant built specifically to help autistic children understand language, fine-tuned from the Qwen2.5-1.5B model. **This release is a fully merged model; no additional LoRA weights are required.**
## Model Features
- ✅ **Complete model**: the LoRA weights have been merged into the base model
- ✅ **Ready to use**: no extra configuration or weight files needed
- ✅ **Purpose-built**: optimized for autism language-understanding scenarios
## Capabilities
The model can:
- Analyze the emotion and intent behind language
- Explain complex expressions in simple, clear language
- Provide concrete response suggestions
- Understand how autistic individuals express themselves
## Usage
### Install the dependencies first
```python
!pip install bitsandbytes accelerate
!pip install transformers torch
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("bysq/autism-assistant-qwen2")
model = AutoModelForCausalLM.from_pretrained("bysq/autism-assistant-qwen2")

# Move to GPU (if available)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Usage example (the prompt below is translated from the original Chinese)
def analyze_expression(original_text, autism_expression):
    prompt = f'''You are an AI assistant specialized in understanding how autistic people express themselves.
Original expression: "{original_text}"
The autistic person's expression: "{autism_expression}"
Please analyze and answer:
- Emotion analysis:'''
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=200,
            do_sample=True,
            temperature=0.7,
            pad_token_id=tokenizer.pad_token_id
        )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response[len(prompt):].strip()

# Test (example strings translated from the original Chinese)
result = analyze_expression("Can you give me the thing?", "No you")
print(result)
```
## License
Apache 2.0
## Disclaimer
This model is for assistive use only and is not a substitute for professional medical advice. Use it under professional guidance.
|
alexandertam/babylm-baseline-10m-roberta-base
|
alexandertam
| 2025-06-16T11:38:22Z | 42 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-13T12:51:13Z |
---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: babylm-baseline-10m-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babylm-baseline-10m-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4978
- Accuracy: 0.1694
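Although the card includes no usage code, a minimal fill-mask sketch would look roughly like this (the example sentence is illustrative; RoBERTa uses the `<mask>` token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="alexandertam/babylm-baseline-10m-roberta-base")
for pred in fill("The child picked up the <mask>."):
    print(pred["token_str"], round(pred["score"], 4))
```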
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 10.3752 | 0.4545 | 100 | 9.6738 | 0.0511 |
| 9.3859 | 0.9091 | 200 | 8.9438 | 0.0724 |
| 8.6229 | 1.3636 | 300 | 8.1958 | 0.0796 |
| 7.8232 | 1.8182 | 400 | 7.3728 | 0.0968 |
| 7.0252 | 2.2727 | 500 | 6.6827 | 0.1111 |
| 6.4827 | 2.7273 | 600 | 6.3108 | 0.1259 |
| 6.227 | 3.1818 | 700 | 6.1463 | 0.1346 |
| 6.0925 | 3.6364 | 800 | 6.0453 | 0.1389 |
| 6.0016 | 4.0909 | 900 | 5.9846 | 0.1405 |
| 5.9391 | 4.5455 | 1000 | 5.9266 | 0.1450 |
| 5.6292 | 9.0909 | 2000 | 5.7023 | 0.1623 |
| 5.5054 | 13.6364 | 3000 | 5.6220 | 0.1660 |
| 5.448 | 18.1818 | 4000 | 5.5780 | 0.1679 |
| 5.4064 | 22.7273 | 5000 | 5.5528 | 0.1691 |
| 5.3635 | 27.2727 | 6000 | 5.5354 | 0.1693 |
| 5.3429 | 31.8182 | 7000 | 5.5167 | 0.1697 |
| 5.3174 | 36.3636 | 8000 | 5.5068 | 0.1695 |
| 5.314 | 40.9091 | 9000 | 5.5002 | 0.1703 |
## BabyLM Challenge 2025 Results
### Results
*Zero-shot*
| Task | Metric | Causal Score |
|-----------------------|------------------|--------------|
| BLiMP | Acc | 56.71 |
| BLiMP Supplement | Acc | 52.1 |
| EWoK | Acc | 50.02 |
| Eye Tracking          | Change in R²     | 0.85         |
| Self-paced Reading    | Change in R²     | 0.22         |
| Entity Tracking | Acc | 30.5 |
| WUGs | Acc | 88 |
*Finetuning*
| Task | Metric | Score |
|-----------|--------|-------|
| BoolQ | Acc | 66.73 |
| MNLI | Acc | 41.48 |
| MRPC | F1 | 83.13 |
| QQP | F1 | 60.6 |
| MultiRC | Acc | 57.55 |
| RTE | Acc | 59.71 |
| WSC | Acc | 63.46 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
LaaP-ai/donut-base-invoice-v1.06
|
LaaP-ai
| 2025-06-16T11:36:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-16T11:36:30Z |
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.06
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
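A minimal inference sketch (hedged: the task prompt token `<s_invoice>` below is an assumption; check the checkpoint's tokenizer for the start token actually used during fine-tuning):

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("LaaP-ai/donut-base-invoice-v1.06")
model = VisionEncoderDecoderModel.from_pretrained("LaaP-ai/donut-base-invoice-v1.06")

image = Image.open("invoice.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Placeholder task prompt; the real token depends on how training was set up.
task_prompt = "<s_invoice>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```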
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF
|
Triangle104
| 2025-06-16T11:36:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:34:19Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; whether it can match the performance of each of them remains an open question.
This is an experiment: merging different variants of models of the same type is another possibility worth exploring.
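As a rough illustration of the MLP-to-MoE substitution described above, here is a minimal sketch of a top-1-routed MoE layer with 4 experts; the dimensions and routing details are assumptions for demonstration, not the model's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Replaces a dense MLP with several expert MLPs plus a learned router."""
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); send each token to its highest-scoring expert.
        weights = F.softmax(self.router(x), dim=-1)   # (tokens, n_experts)
        top_w, top_idx = weights.max(dim=-1)          # top-1 routing
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

print(MoELayer(64, 256)(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```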
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q5_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q5_k_m.gguf -c 2048
```
|
quanda-bench-test/f1c529c-default_LDS
|
quanda-bench-test
| 2025-06-16T11:36:28Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-16T11:20:19Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Gayaz2000/DPOTrained-Llama3.2-Lora-Model
|
Gayaz2000
| 2025-06-16T11:31:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-06-16T11:30:10Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
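The repo metadata lists `peft` with base model `unsloth/Llama-3.2-1B-Instruct`, so a minimal loading sketch might look as follows (an assumption, not verified against this adapter's config):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "Gayaz2000/DPOTrained-Llama3.2-Lora-Model")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```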
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
guwo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-toothy_fanged_newt
|
guwo
| 2025-06-16T11:29:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am toothy fanged newt",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-09T12:50:29Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-toothy_fanged_newt
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am toothy fanged newt
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-toothy_fanged_newt
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="guwo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-toothy_fanged_newt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/guvenbark2-guven-bark/huggingface/runs/ww935icu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IoanaLiviaPopescu/real-data-synth-data-1200-1-St-Wav-Emil-whisper-small
|
IoanaLiviaPopescu
| 2025-06-16T11:29:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLivia/RealVoiceSynthVoice-1200-1-St-Wav-Emil",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-16T10:22:52Z |
---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLivia/RealVoiceSynthVoice-1200-1-St-Wav-Emil
metrics:
- wer
model-index:
- name: IoanaLiviaPopescu/real-data-synth-data-1200-1-St-Wav-Emil-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLivia/RealVoiceSynthVoice-1200-1-St-Wav-Emil
type: IoanaLivia/RealVoiceSynthVoice-1200-1-St-Wav-Emil
config: default
split: test
args: 'split: validation'
metrics:
- name: Wer
type: wer
value: 17.11229946524064
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLiviaPopescu/real-data-synth-data-1200-1-St-Wav-Emil-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLivia/RealVoiceSynthVoice-1200-1-St-Wav-Emil dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3715
- Wer: 17.1123
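A minimal transcription sketch (`sample.wav` is a placeholder for a local Romanian audio file):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLiviaPopescu/real-data-synth-data-1200-1-St-Wav-Emil-whisper-small",
)
# Whisper checkpoints accept language/task hints through generate_kwargs.
result = asr("sample.wav", generate_kwargs={"language": "romanian", "task": "transcribe"})
print(result["text"])
```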
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.6024 | 27.8812 |
| 0.282 | 1.0 | 51 | 0.3978 | 18.3847 |
| 0.1077 | 2.0 | 102 | 0.3658 | 17.2598 |
| 0.0561 | 3.0 | 153 | 0.3715 | 17.1123 |
| 0.0328 | 4.0 | 204 | 0.3880 | 17.3520 |
| 0.023 | 5.0 | 255 | 0.3999 | 17.5917 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
khushi-rao-new-viral-link-video/FULL.HD.Khushi.Rao.new.viral.link.video
|
khushi-rao-new-viral-link-video
| 2025-06-16T11:29:15Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T11:29:01Z |
---
license: apache-2.0
---
[](https://t.co/BILDe5gBcR)
|
joanna302/Qwen3-4B-Base_lo_tt__0.0002
|
joanna302
| 2025-06-16T11:28:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T07:47:42Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.15_0.75_epoch1
|
MinaMila
| 2025-06-16T11:26:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:24:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
joshbarua/Qwen2.5-7B-base-english-japanese-s1k-full-sft
|
joshbarua
| 2025-06-16T11:25:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:21:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shihuai7189/ppo-LunarLander-v2-clip-coef0.2
|
shihuai7189
| 2025-06-16T11:25:34Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-10T08:00:01Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -173.73 +/- 90.18
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'notebook',
 'seed': 239,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'shihuai7189/ppo-LunarLander-v2-clip-coef0.2',
 'batch_size': 512,
 'minibatch_size': 128}
```
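For context, `clip_coef` above is the clipping range of the PPO surrogate objective; a minimal standalone sketch of that loss (illustrative, not the training script used here):

```python
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_coef=0.2):
    """Clipped surrogate objective from PPO, written as a loss to minimize."""
    ratio = (log_probs - old_log_probs).exp()
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef) * advantages
    return -torch.min(unclipped, clipped).mean()

# Sanity check: with identical policies the ratio is 1, so the loss is -mean(advantages).
adv = torch.tensor([1.0, -0.5])
lp = torch.tensor([-1.2, -0.7])
print(ppo_clip_loss(lp, lp, adv))  # tensor(-0.2500)
```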
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.5_0.5_epoch2
|
MinaMila
| 2025-06-16T11:22:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:21:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shihuai7189/ppo-LunarLander-v2-clip-coef0.4
|
shihuai7189
| 2025-06-16T11:22:11Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-10T08:51:34Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -132.94 +/- 79.03
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'notebook',
 'seed': 239,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.4,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'shihuai7189/ppo-LunarLander-v2-clip-coef0.4',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
joshbarua/Qwen3-8B-base-french-s1k-full-sft
|
joshbarua
| 2025-06-16T11:20:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:15:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1-gguf
|
Goekdeniz-Guelmez
| 2025-06-16T11:20:41Z | 0 | 0 | null |
[
"chat",
"text-generation",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1",
"base_model:finetune:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-16T11:08:02Z |
---
license: apache-2.0
tags:
- chat
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1
pipeline_tag: text-generation
---
# Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1-gguf
### Model Description
This is the GGUF quantisation of [Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1).
- **Developed by:** Gökdeniz Gülmez
- **Funded by:** Gökdeniz Gülmez
- **Shared by:** Gökdeniz Gülmez
- **Original model:** Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1
|
sergioalves/7629e3f5-7cb0-4ed7-8ce0-307a0af304d6
|
sergioalves
| 2025-06-16T11:19:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-16T10:23:45Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7629e3f5-7cb0-4ed7-8ce0-307a0af304d6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- c6c33c8265f71baf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.8
group_by_length: false
hub_model_id: sergioalves/7629e3f5-7cb0-4ed7-8ce0-307a0af304d6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 300
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/c6c33c8265f71baf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d030151f-d4ab-4ec7-9455-d97d323144e0
wandb_project: s56-7
wandb_run: your_name
wandb_runid: d030151f-d4ab-4ec7-9455-d97d323144e0
warmup_steps: 30
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 7629e3f5-7cb0-4ed7-8ce0-307a0af304d6
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1660
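A minimal sketch for applying this LoRA adapter to its base model with `peft` (assuming the adapter loads cleanly; `merge_and_unload` folds the LoRA deltas into the base weights for plain inference):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base, "sergioalves/7629e3f5-7cb0-4ed7-8ce0-307a0af304d6")
model = model.merge_and_unload()  # bake adapter weights into the base model
```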
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9609 | 0.0000 | 1 | 1.1693 |
| 1.3796 | 0.0063 | 150 | 1.1672 |
| 1.0046 | 0.0126 | 300 | 1.1660 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Video-clip-mezzo-fun/VIDEO.MezzoFun.Viral.Video.Tutorial.Official
|
Video-clip-mezzo-fun
| 2025-06-16T11:12:59Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T11:12:33Z |
---
license: apache-2.0
---
The Mezzo Fun viral video drew internet viewers across various social media platforms after it leaked. Mezzo Fun, a young and talented digital creator, recently became famous thanks to this video.
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.05_epoch1
|
MinaMila
| 2025-06-16T11:12:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:10:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LaaP-ai/donut-base-invoice-v1.05
|
LaaP-ai
| 2025-06-16T11:12:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-16T11:11:57Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.05
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
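Until the author documents usage, a hedged inference sketch under the assumption that this checkpoint follows the standard Donut (VisionEncoderDecoder) interface; the `<s_invoice>` task prompt below is a hypothetical placeholder, not confirmed by this card:
```python
# Hedged sketch: assumes the standard Donut processor/model pairing.
from transformers import DonutProcessor, VisionEncoderDecoderModel
from PIL import Image

repo_id = "LaaP-ai/donut-base-invoice-v1.05"
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

image = Image.open("invoice.png").convert("RGB")  # your scanned invoice
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_invoice>"  # hypothetical prompt token for this checkpoint
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```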
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
diagonalge/12989ff8-554d-4dda-9eb0-c4dfcf5e625a
|
diagonalge
| 2025-06-16T11:10:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:adapter:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T10:43:41Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 12989ff8-554d-4dda-9eb0-c4dfcf5e625a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- chat_template: chatml
data_files:
- fc6a6b1446a9f78a_train_data.json
ds_type: json
field_messages: conversations
message_field_content: value
message_field_role: from
message_property_mappings:
content: value
role: from
path: /workspace/input_data/
roles:
assistant:
- gpt
user:
- human
type: chat_template
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: diagonalge/12989ff8-554d-4dda-9eb0-c4dfcf5e625a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/fc6a6b1446a9f78a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4031408f-d0b9-4128-876e-4b5412292017
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4031408f-d0b9-4128-876e-4b5412292017
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 12989ff8-554d-4dda-9eb0-c4dfcf5e625a
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3943 | 0.0000 | 1 | 1.2345 |
| 1.2528 | 0.0001 | 3 | 1.2330 |
| 1.2265 | 0.0002 | 6 | 1.2191 |
| 1.1696 | 0.0003 | 9 | 1.2102 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
joanna302/Qwen3-4B-Base_lo_tt__8e-05
|
joanna302
| 2025-06-16T11:10:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T07:41:56Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
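Pending author instructions, a minimal hedged sketch using the standard text-generation pipeline (prompt and settings are illustrative; nothing here is confirmed by the card):
```python
# Minimal sketch, assuming standard transformers usage for this SFT checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="joanna302/Qwen3-4B-Base_lo_tt__8e-05")
print(generator("Once upon a time,", max_new_tokens=64)[0]["generated_text"])
```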
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nvidia/PartPacker
|
nvidia
| 2025-06-16T11:09:25Z | 0 | 3 | null |
[
"en",
"arxiv:2506.09980",
"license:other",
"region:us"
] | null | 2025-05-27T14:46:37Z |
---
license: other
license_name: nvidia-non-commercial-license
license_link: https://huggingface.co/nvidia/PartPacker/blob/main/LICENSE
language:
- en
---
# Model Card for PartPacker
## Description
PartPacker is a three-dimensional (3D) generation model that is able to generate part-level 3D objects from single-view images.
We introduce a dual volume packing strategy that organizes all parts into two complementary volumes, allowing for the creation of complete and interleaved parts that assemble into the final object.
This model is ready for non-commercial use.
## License/Terms of Use
[NVIDIA Non-Commercial License](https://huggingface.co/nvidia/PartPacker/blob/main/LICENSE)
## Deployment Geography
Global
## Use Case
PartPacker takes a single input image and generates a 3D shape with an arbitrary number of complete parts. Each part can be separated and edited independently to facilitate downstream tasks such as editing and animation.
It's intended to be used by researchers and academics to develop new 3D generation methods.
## Release Date
* Github: 06/11/2025 via [https://github.com/NVlabs/PartPacker](https://github.com/NVlabs/PartPacker)
* Huggingface: 06/11/2025 via [https://huggingface.co/NVlabs/PartPacker](https://huggingface.co/NVlabs/PartPacker)
## Reference(s)
[Code](https://github.com/NVlabs/PartPacker)
[Paper](https://arxiv.org/abs/2506.09980)
## Model Architecture
**Architecture Type:** Transformer
**Network Architecture:** Diffusion Transformer (DiT)
## Input
**Input Type(s):** Image
**Input Format(s):** Red, Green, Blue (RGB)
**Input Parameters:** Two-dimensional (2D) image
**Other Properties Related to Input:** Input images are resized to 518x518 resolution.
## Output
**Output Type(s):** Triangle Mesh
**Output Format:** GL Transmission Format Binary (GLB)
**Output Parameters:** Three-dimensional (3D) triangle mesh
**Other Properties Related to Output:** Extracted at a resolution up to 512^3; without texture.
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
## Software Integration
### Runtime Engine(s)
* PyTorch
### Supported Hardware Microarchitecture Compatibility
* NVIDIA Ampere
* NVIDIA Hopper
### Preferred Operating System(s)
* Linux
## Model Version(s)
v1.0
## Training, Testing, and Evaluation Datasets
We perform training, testing, and evaluation on the Objaverse-XL dataset.
For the VAE model, we use the first 253K meshes for training and the remaining 1K meshes for validation.
For the Flow model, we use all 254K meshes for training.
### Objaverse-XL
**Link**: https://objaverse.allenai.org/
**Data Collection Method**: Hybrid: Automatic, Synthetic
**Labeling Method by dataset**: N/A (no labels)
**Properties:** We use about 254K meshes, a subset of Objaverse-XL filtered by the number of parts.
## Inference
**Acceleration Engine**: PyTorch
**Test Hardware**: NVIDIA A100 (1 GPU configuration)
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
stegostegosaur/bert-all-fakern
|
stegostegosaur
| 2025-06-16T11:09:24Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:32:39Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-all-fakern
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-all-fakern
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1829
- Accuracy: 0.9376
- F1: 0.9376
- Precision: 0.9394
- Recall: 0.9376
## Model description
More information needed
## Intended uses & limitations
More information needed
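Although no usage example is provided, the standard text-classification pipeline should apply; a minimal sketch (the label names returned depend on this checkpoint's config and are not documented here):
```python
# Minimal sketch for a BERT sequence classifier on the Hub.
from transformers import pipeline

clf = pipeline("text-classification", model="stegostegosaur/bert-all-fakern")
print(clf("Scientists announce a breakthrough in battery technology."))
```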
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2437 | 1.0 | 5281 | 0.1960 | 0.9276 | 0.9276 | 0.9281 | 0.9276 |
| 0.2238 | 2.0 | 10562 | 0.1768 | 0.9350 | 0.9350 | 0.9356 | 0.9350 |
| 0.1848 | 3.0 | 15843 | 0.1970 | 0.9342 | 0.9342 | 0.9362 | 0.9342 |
| 0.2146 | 4.0 | 21124 | 0.1849 | 0.9369 | 0.9369 | 0.9388 | 0.9369 |
| 0.1829 | 5.0 | 26405 | 0.1829 | 0.9376 | 0.9376 | 0.9394 | 0.9376 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
shiridisaibaba22/cnn_news_summary_model_trained_on_reduced_data
|
shiridisaibaba22
| 2025-06-16T11:09:23Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-16T10:35:27Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: cnn_news_summary_model_trained_on_reduced_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cnn_news_summary_model_trained_on_reduced_data
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
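As a starting point, a minimal hedged sketch using the standard summarization pipeline (length settings are illustrative assumptions):
```python
# Minimal sketch, assuming the usual T5 summarization interface.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="shiridisaibaba22/cnn_news_summary_model_trained_on_reduced_data",
)
article = "The city council approved a new budget on Monday after weeks of debate over school funding."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```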
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:|
| No log | 1.0 | 144 | 1.9070 | 0.2375 | 0.0953 | 0.194 | 0.1938 | 20.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ravi86/mood_detector
|
ravi86
| 2025-06-16T11:09:20Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"emotion-detection",
"facial-expressio",
"image-classification",
"deep-learning",
"cnn",
"en",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-14T05:37:49Z |
---
license: mit
language:
- en
metrics:
- accuracy
base_model:
- microsoft/resnet-50
new_version: google/vit-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- pytorch
- emotion-detection
- facial-expressio
- image-classification
- deep-learning
- cnn
---
# Face Expression Detector
A deep learning model that classifies facial expressions in grayscale images into one of seven core emotions. Designed for applications in **emotion analytics**, **human-computer interaction**, and **psychological research**.
---
## Model Overview
This model takes **48x48 grayscale face images** and classifies them into:
- Angry
- Disgust
- Fear
- Happy
- Sad
- Surprise
- Neutral
**Dataset**: [FER2013](https://www.kaggle.com/datasets/msambare/fer2013)
**Training Samples**: 28,709
**Testing Samples**: 3,589
---
## Model Architecture
- **Custom CNN**
  - 3 Convolutional Layers
  - Batch Normalization
  - ReLU Activation
  - Dropout for regularization
- Optimizer: `Adam`
- Loss Function: `Categorical Crossentropy`
- Epochs: `100`
---
## ✅ Performance
> *Add your actual performance metrics here:*
- Accuracy on FER2013 Test Set: **~1.0%**
---
## Required Files
- `model.h5` – Model Weights
- `config.json` – Configuration file *(Transformers-based)*
- `preprocessor_config.json` – Preprocessing setup *(if needed)*
- `requirements.txt` – Python dependencies
---
## Use Cases
- Real-time emotion feedback in games or virtual assistants
- Emotion analysis for psychological and behavioral studies
- Enhancing video-based UX with dynamic emotion tracking
---
## Limitations
- Works best with **centered 48x48 grayscale faces**
- **Face detection (e.g., MTCNN)** required before prediction
- FER2013's demographic diversity is limited – potential bias
---
## Installation
Follow these steps to set up the environment and dependencies:
```bash
pip install -r requirements.txt
```
`requirements.txt` contents:
```
torch>=1.9.0
transformers>=4.20.0
pillow>=8.0.0
```
### 1. Clone the Repository
```bash
git clone https://github.com/TRavi8688/Mood-Based-Music-Player
cd mood_detector
```
## How to Use (Transformers-based)
Follow these steps to preprocess an image and predict facial expression using the pre-trained Transformers-based model:
```python
from transformers import AutoModelForImageClassification, AutoImageProcessor
from PIL import Image
import torch
```
### 1. Load Model and Preprocessor
```python
# STEP 1: Install dependencies (in a notebook)
!pip install tensorflow pillow numpy

# STEP 2: Download the model file using `requests`
import requests

model_url = "https://huggingface.co/ravi86/mood_detector/resolve/main/my_model.h5"
model_path = "my_model.h5"

# Download the file
response = requests.get(model_url)
with open(model_path, "wb") as f:
    f.write(response.content)

print("✅ Model downloaded successfully!")
```
### 2. Load and Preprocess the Image
```python
image_path = "your_image.jpg"  # Replace with your image file
image = Image.open(image_path).convert("L").resize((48, 48))  # Convert to grayscale and resize
```
### 3. Make Predictions
```python
# Assumes `model` and `inputs` were produced by the processor/model loaded above
outputs = model(**inputs)
probs = torch.softmax(outputs.logits, dim=-1)  # Convert logits to probabilities
predicted_class = probs.argmax().item()  # Get the predicted class index
```
### 4. Interpret the Result
```python
emotions = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
print(f"Predicted Emotion: {emotions[predicted_class]}")
```
## Deploy to Hugging Face Hub
Use these commands to prepare and push your model to the Hugging Face Hub:
```bash
# Step 1: Install & Login
pip install huggingface_hub
huggingface-cli login
```
```python
# Step 2: Upload the model folder
from huggingface_hub import upload_folder

upload_folder(
    folder_path="path/to/mood_detector",
    repo_id="ravi86/mood_detector",
    repo_type="model",
    commit_message="Upload mood detection model",
)
```
## Ethical Considerations
- **Bias:** The FER2013 dataset may exhibit biases in demographic representation. Exercise caution when interpreting results across diverse populations.
- **Privacy:** Ensure strict compliance with data privacy laws (e.g., GDPR, CCPA) when using this model on personal or sensitive images. Do not use without explicit consent.
- **Misuse:** This model is not intended for unauthorized surveillance, profiling, or any other unethical applications.
## Contact
For questions, support, or collaborations:
- Hugging Face: @ravi86
- Gmail: [email protected]
⭐ If you find this project useful, consider giving a star or contributing!
|
Sri2901/04_cosmos_v3_Shift
|
Sri2901
| 2025-06-16T11:08:08Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-16T11:07:39Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: cosmos
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: A model poses confidently on a stone railing overlooking a lake with lush
green mountains in the background, wearing a white sleeveless dress and holding
a white quilted handbag. the model is standing in the middle of the image, with
her arms resting on the edge of the stone railing. she is wearing white sunglasses
and has a confident expression on her face. her long black hair cascades down
her back, and her eyes are directed straight ahead. her skin is smooth and fair,
and she has dark brown skin tones. the lake is calm and serene, with a few small
waves lapping against the shore, and the mountains are covered in lush green vegetation,
creating a picturesque backdrop. the lighting is bright and natural, highlighting
the model's beauty and elegance. the overall mood is sophisticated and timeless,
evoking a sense of luxury and sophistication.
output:
url: samples/1750069866114__000004000_2.jpg
- text: A woman standing outdoors in a marina with boats and mountains in the background.
she is wearing a white and blue floral print midi dress with spaghetti straps
and a slit at the side, paired with a woven straw bag. her long, dark brown hair
cascades down her back, and she is accessorized with a pair of gold earrings and
a round, gold-framed sunglasses. the sun is shining brightly, casting a warm glow
on her face and the surrounding vegetation. the woman stands confidently in the
middle of the image, with her hands resting on her hips, looking off into the
distance with a slight smile on her lips. the marina is filled with boats docked
in neat rows, and the hills in the distance provide a picturesque backdrop to
the scene.
output:
url: samples/1750069848760__000004000_1.jpg
- text: A photo shoot from a side angle about a woman in a black one-piece swimsuit
sitting by a pool with pebbles, wearing sunglasses. the image also shows water
droplets on her skin, highlighting her features. on the middle of the image, a
woman appears to be in her mid-twenties, with dark brown hair styled in loose
waves, wearing a black swimsuit with a plunging neckline and a side slit, sitting
on the edge of a swimming pool with her legs crossed. she has a sultry expression
and is looking directly at the camera. her body is slim and toned, with a slight
curve to her chest. she is wearing black sunglasses, which add a touch of sophistication
to her look. the background is a clear blue swimming pool, surrounded by pebble
stones. the lighting is bright and natural, casting shadows on her body. the overall
effect is one of relaxation and tranquility.
output:
url: samples/1750069831302__000004000_0.jpg
---
# 04_cosmos_v3_shift
Model trained with AI Toolkit by Ostris
<Gallery />
## Trigger words
You should use `cosmos` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/username/04_cosmos_v3_shift/tree/main) them in the Files & versions tab.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('username/04_cosmos_v3_shift', weight_name='04_cosmos_v3_shift.safetensors')
image = pipeline('cosmos A model poses confidently on a stone railing overlooking a lake with lush green mountains in the background, wearing a white sleeveless dress and holding a white quilted handbag. the model is standing in the middle of the image, with her arms resting on the edge of the stone railing. she is wearing white sunglasses and has a confident expression on her face. her long black hair cascades down her back, and her eyes are directed straight ahead. her skin is smooth and fair, and she has dark brown skin tones. the lake is calm and serene, with a few small waves lapping against the shore, and the mountains are covered in lush green vegetation, creating a picturesque backdrop. the lighting is bright and natural, highlighting the model's beauty and elegance. the overall mood is sophisticated and timeless, evoking a sense of luxury and sophistication.').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.15_epoch2
|
MinaMila
| 2025-06-16T11:05:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:03:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
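A hedged sketch using the tokenizer's chat template, since the card's tags include `conversational` (the message and generation settings are illustrative assumptions):
```python
# Minimal chat-style sketch, not the author's confirmed usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.15_epoch2"
tok = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

messages = [{"role": "user", "content": "In one sentence, what is machine unlearning?"}]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(input_ids, max_new_tokens=64)
print(tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```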
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF
|
Triangle104
| 2025-06-16T11:03:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T11:01:56Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model's safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models, and perhaps it can achieve the performance of all the ablated models?
This is just a test. The exploration of merging different manifestations of models of the same type is another possibility.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q4_k_s.gguf -c 2048
```
|
AinurSafr/telegram-message-classifier
|
AinurSafr
| 2025-06-16T11:03:38Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"ru",
"en",
"arxiv:1910.09700",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit",
"region:us"
] | null | 2025-05-10T02:40:48Z |
---
base_model: unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit
library_name: peft
language:
- ru
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
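A hedged loading sketch with PEFT, based only on the base model named above; how classes are expressed (generated labels vs. a classification head) is not documented, so the prompt format below is a guess:
```python
# Minimal sketch: attach the LoRA adapter to its 4-bit base with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/OpenHermes-2.5-Mistral-7B-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "AinurSafr/telegram-message-classifier")

prompt = "Classify this Telegram message: 'Meeting moved to 5pm'"  # hypothetical format
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```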
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
nnilayy/deap-arousal-binary-classification-no-wd-Kfold-2
|
nnilayy
| 2025-06-16T11:00:41Z | 4 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-27T03:58:29Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
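For reference, the mixin integration looks like the sketch below; `ArousalClassifier` and its layers are hypothetical stand-ins, since the actual class is listed as [More Information Needed] and real loading requires the author's matching definition:
```python
# Hedged sketch of the PyTorchModelHubMixin pattern (class is hypothetical).
import torch
from huggingface_hub import PyTorchModelHubMixin

class ArousalClassifier(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 32, num_classes: int = 2):
        super().__init__()
        self.fc = torch.nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.fc(x)

# With the author's real class, weights load directly from the Hub:
model = ArousalClassifier.from_pretrained(
    "nnilayy/deap-arousal-binary-classification-no-wd-Kfold-2"
)
```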
|
23ikram/llamafin
|
23ikram
| 2025-06-16T11:00:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:59:51Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 23ikram
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
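For loading, a minimal hedged sketch following the usual Unsloth pattern named in the tags (`max_seq_length` and the 4-bit flag are illustrative assumptions):
```python
# Minimal sketch, not confirmed by the author.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="23ikram/llamafin",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode
```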
|
LaaP-ai/donut-base-invoice-v1.04
|
LaaP-ai
| 2025-06-16T11:00:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-16T10:59:51Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.04
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.5_0.75_epoch1
|
MinaMila
| 2025-06-16T10:58:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:57:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
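Lacking author-provided code, a minimal hedged sketch with the standard transformers pipeline (prompt and settings are illustrative):
```python
# Minimal sketch for a Gemma-2 text-generation checkpoint.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.5_0.75_epoch1",
)
print(pipe("Question: What is unlearning?\nAnswer:", max_new_tokens=64)[0]["generated_text"])
```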
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF
|
Triangle104
| 2025-06-16T10:58:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:28:11Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model's safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; whether it can match the performance of all of them remains to be seen. This is an experiment: merging different variants of models of the same type is another possibility worth exploring.
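For intuition, the sketch below illustrates the MoE idea described above: a router sends each token to its top-k out of 4 experts, and the expert block replaces a single MLP. Dimensions, the top-k value, and the routing details are illustrative assumptions, not the model's actual implementation.
```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Drop-in replacement for a Transformer MLP: route each token to top-k of 4 experts."""
    def __init__(self, d_model=64, d_ff=256, n_experts=4, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        gates = self.router(x).softmax(dim=-1)         # routing probabilities
        weights, idx = gates.topk(self.top_k, dim=-1)  # top-k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(MoELayer()(x).shape)  # torch.Size([10, 64])
```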
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_L-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_l.gguf -c 2048
```
|
BigData-KSU/ArabVLM
|
BigData-KSU
| 2025-06-16T10:58:22Z | 0 | 0 |
peft
|
[
"peft",
"llava",
"ar",
"en",
"base_model:ALLaM-AI/ALLaM-7B-Instruct-preview",
"base_model:adapter:ALLaM-AI/ALLaM-7B-Instruct-preview",
"region:us"
] | null | 2025-05-08T06:53:41Z |
---
library_name: peft
language:
- ar
- en
base_model:
- ALLaM-AI/ALLaM-7B-Instruct-preview
---
## ArabVLM: Vision Language Model
- **Repository:** https://github.com/BigData-KSU/ArabVLM
- **Demo:** Soon.
## How to Get Started with the Model
### Install
1. Clone this repository and navigate to the ArabVLM folder
```
git clone https://github.com/BigData-KSU/ArabVLM.git
cd ArabVLM
```
2. Install Packages
```
pip install -r requirements.txt
```
---
### Inference
Use the code below to get started with the model.
```python
from PIL import Image
import os
import torch
from vllm.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from vllm.conversation import conv_templates, SeparatorStyle
from vllm.model.builder import load_pretrained_model
from vllm.utils import disable_torch_init
from vllm.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria

### Main model....
model_path = '/BigData-KSU/ArabVLM'   # path to the model weights (local checkout or HF repo id)
model_base = 'ALLaM-AI/ALLaM-7B-Instruct-preview'
conv_mode = 'llava_llama_2'

disable_torch_init()
model_path = os.path.abspath(model_path)
print('model path:', model_path)
model_name = get_model_name_from_path(model_path)
print('model name:', model_name)
print('model base:', model_base)

tokenizer, model, processor, context_len = load_pretrained_model(model_path, model_base, model_name, device='cuda:0')


def chat_with_Vision_BioLLM(cur_prompt, image_name):
    # Load the image and copy the conversation template
    image_mem = Image.open(image_name).convert('RGB')
    image_processor = processor['image']
    conv = conv_templates[conv_mode].copy()
    roles = conv.roles

    # Preprocess the image into a float16 tensor on the model's device
    image_tensor = image_processor.preprocess(image_mem, return_tensors='pt')['pixel_values']
    tensor = image_tensor.to(model.device, dtype=torch.float16)
    print(f"{roles[1]}: {cur_prompt}")

    # Prepend the image token to the user prompt and build the full prompt
    cur_prompt = DEFAULT_IMAGE_TOKEN + '\n' + cur_prompt
    conv.append_message(conv.roles[0], cur_prompt)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()

    input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=tensor,
            do_sample=False,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria])

    response = tokenizer.decode(output_ids[0, input_ids.shape[1]:])
    return response


if __name__ == "__main__":
    cur_prompt = 'وصف الصورة بالتفصيل'  # Arabic: "Describe the image in detail"
    image_name = 'path/to/image'
    outputs = chat_with_Vision_BioLLM(cur_prompt, image_name)
    print('Model Response.....')
    print(outputs)
```
### Framework versions
- PEFT 0.4.0
|
Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF
|
Triangle104
| 2025-06-16T10:57:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-24B-A8B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T09:56:42Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-24B-A8B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model's safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model's
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
This model combines four ablated models; whether it can match the performance of all of them remains to be seen. This is an experiment: merging different variants of models of the same type is another possibility worth exploring.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-24B-A8B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-24b-a8b-abliterated-q3_k_m.gguf -c 2048
```
|
Khushi-Rao-Viral-Video-clip/Full-Viral.khushi.rao.Viral.Video.Tutorial.Official.Clip.new
|
Khushi-Rao-Viral-Video-clip
| 2025-06-16T10:57:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T10:57:05Z |
---
license: apache-2.0
---
[](https://t.co/BILDe5gBcR)
|
23ikram/model8bit
|
23ikram
| 2025-06-16T10:55:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:54:40Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** 23ikram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GhulamNabi/finetuned-Llama-3.2-3B
|
GhulamNabi
| 2025-06-16T10:55:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:54:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hullolo/itn-classifier
|
hullolo
| 2025-06-16T10:54:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:54:00Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hullolo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ilyes25/wol_processor
|
ilyes25
| 2025-06-16T10:53:50Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:53:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aieng-lab/codet5p-770m_smell-doc
|
aieng-lab
| 2025-06-16T10:53:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:Salesforce/codet5p-770m",
"base_model:finetune:Salesforce/codet5p-770m",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:52:52Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- Salesforce/codet5p-770m
pipeline_tag: text-classification
---
# CodeT5+ 770m for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peรฑa, Steffen Herbold
- **Finetuned from:** [Salesforce/codet5p-770m](https://huggingface.co/Salesforce/codet5p-770m)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
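A minimal usage sketch (assuming the checkpoint loads with `AutoModelForSequenceClassification` and emits independent per-label logits, as is standard for multi-label training; the example input and the 0.5 threshold are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "aieng-lab/codet5p-770m_smell-doc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

doc = "Returns the name. The name is returned. See also: name."
inputs = tokenizer(doc, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# Multi-label: an independent sigmoid per class, thresholded at 0.5
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```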
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peรฑa and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.25_epoch2
|
MinaMila
| 2025-06-16T10:52:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:50:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LaaP-ai/donut-base-invoice-v1.03
|
LaaP-ai
| 2025-06-16T10:52:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-16T10:51:44Z |
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-base-invoice-v1.03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-invoice-v1.03
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Jarvis1111/DoctorAgent-RL-SFT-1k-Thinking
|
Jarvis1111
| 2025-06-16T10:51:31Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"medical",
"question-answering",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2025-06-16T10:43:09Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: question-answering
tags:
- medical
---
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.05_epoch2
|
MinaMila
| 2025-06-16T10:50:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:49:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aieng-lab/starcoder2-7b_smell-doc
|
aieng-lab
| 2025-06-16T10:49:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"starcoder2",
"text-classification",
"en",
"base_model:bigcode/starcoder2-7b",
"base_model:finetune:bigcode/starcoder2-7b",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:44:42Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bigcode/starcoder2-7b
pipeline_tag: text-classification
---
# StarCoder2 7b for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peรฑa, Steffen Herbold
- **Finetuned from:** [bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
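A minimal usage sketch (assuming the checkpoint works with the standard `text-classification` pipeline and applies a sigmoid over the labels; the example docstring is illustrative):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="aieng-lab/starcoder2-7b_smell-doc",
    top_k=None,  # return a score for every label, not just the best one
)

doc = "Returns the value. See also getValue(). The value is returned."
scores = clf(doc)
print(scores)  # e.g. [{'label': 'fragmented', 'score': ...}, ...]
```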
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peรฑa and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
AlekMan/HSE_AI_Vanilla_XLSTM_FT
|
AlekMan
| 2025-06-16T10:47:18Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-16T10:46:35Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.25_epoch1
|
MinaMila
| 2025-06-16T10:45:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:43:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EYEDOL/Uliza_ON_ALPACA_5
|
EYEDOL
| 2025-06-16T10:43:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Jacaranda/UlizaLlama3",
"base_model:finetune:Jacaranda/UlizaLlama3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:43:33Z |
---
base_model: Jacaranda/UlizaLlama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EYEDOL
- **License:** apache-2.0
- **Finetuned from model:** Jacaranda/UlizaLlama3
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aieng-lab/starcoder2-3b_smell-doc
|
aieng-lab
| 2025-06-16T10:39:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"starcoder2",
"text-classification",
"en",
"base_model:bigcode/starcoder2-3b",
"base_model:finetune:bigcode/starcoder2-3b",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:37:35Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bigcode/starcoder2-3b
pipeline_tag: text-classification
---
# StarCoder2 3b for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peรฑa, Steffen Herbold
- **Finetuned from:** [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
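A minimal batched usage sketch (assuming the checkpoint works with the standard `text-classification` pipeline; the example documentation strings, the suggested labels in the comments, and the 0.5 threshold are illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="aieng-lab/starcoder2-3b_smell-doc", top_k=None)

docs = [
    "Returns the name.",                                            # might be flagged 'lazy'
    "Parses the file, updates the cache, and also sends an email.", # might be flagged 'tangled'
]
for doc, scores in zip(docs, clf(docs)):
    print(doc, "->", [s["label"] for s in scores if s["score"] > 0.5])
```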
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peรฑa and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Zack-Z/qwen3_4bi_cotsft_rs0_3_5cut_ru_gem3all_indep_e2
|
Zack-Z
| 2025-06-16T10:39:21Z | 0 | 0 |
transformers
|
[
"transformers",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-16T10:22:48Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TheStageAI/Elastic-Mistral-7B-Instruct-v0.3
|
TheStageAI
| 2025-06-16T10:34:14Z | 12 | 3 | null |
[
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2025-04-02T15:34:36Z |
---
license: apache-2.0
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: Mistral-7B-Instruct-v0.3. Fastest and most flexible models for self-serving.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide interface of HF libraries: transformers and diffusers with a single line of code
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
* Provide the best models and service for self-hosting.
> It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well.

-----
## Inference
To run inference with our models, simply replace the `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# Currently we require your HF token, as we use the original
# weights for some of the layers and the model configuration as well
model_name = "mistralai/Mistral-7B-Instruct-v0.3"
hf_token = ''
device = torch.device("cuda")
# Create model
tokenizer = AutoTokenizer.from_pretrained(
model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
token=hf_token,
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id
# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
{
"role": "system",
"content": "You are a search bot, answer on user text queries."
},
{
"role": "user",
"content": prompt
}
]
chat_prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs = inputs.to(device)
with torch.inference_mode():
generate_ids = model.generate(**inputs, max_length=500)
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
generate_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models, just run these lines in your terminal:
```shell
pip install thestage
pip install elastic_models[nvidia]\
--index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
--extra-index-url https://pypi.nvidia.com\
--extra-index-url https://pypi.org/simple
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token from your profile page. Set up the API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
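For intuition, here is a minimal, self-contained sketch of what W8A8 quantization of a linear layer means: both weights (W8) and activations (A8) are mapped to int8, the matrix multiply accumulates in int32, and the result is rescaled back. The symmetric per-tensor scales are illustrative assumptions, not TheStage AI's actual kernels.
```python
import torch

def w8a8_linear(x, weight, bias=None):
    # Quantize activations (A8) and weights (W8) to int8 with symmetric scales
    s_x = x.abs().max() / 127.0
    s_w = weight.abs().max() / 127.0
    x_q = (x / s_x).round().clamp(-128, 127).to(torch.int8)
    w_q = (weight / s_w).round().clamp(-128, 127).to(torch.int8)
    # Integer matmul with int32 accumulation, then dequantize
    y = (x_q.to(torch.int32) @ w_q.t().to(torch.int32)).float() * (s_x * s_w)
    return y + bias if bias is not None else y

x, w = torch.randn(4, 16), torch.randn(8, 16)
print(w8a8_linear(x, w).shape)  # torch.Size([4, 8])
```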
### Quality benchmarks
<!-- For quality evaluation we have used: #TODO link to github -->
| Metric/Model | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| MMLU | 59.7 | 60.1 | 60.8 | 61.4 | 61.4 | 28 |
| PIQA | 80.8 | 82 | 81.7 | 81.5 | 81.5 | 65.3 |
| Arc Challenge | 56.6 | 55.1 | 56.8 | 57.4 | 57.4 | 33.2 |
| Winogrande | 73.2 | 72.3 | 73.2 | 74.1 | 74.1 | 57 |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
### Latency benchmarks
__100 input/300 output; tok/s:__
| GPU/Model | S | M | L | XL | Original | W8A8, int8 |
|-----------|-----|---|---|----|----------|------------|
| H100 | 186 | 180 | 168 | 136 | 48 | 192 |
| L40s | 79 | 68 | 59 | 47 | 38 | 82 |
## Links
* __Platform__: [app.thestage.ai](https://app.thestage.ai)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: [email protected]
|
TheStageAI/Elastic-Llama-3.2-1B-Instruct
|
TheStageAI
| 2025-06-16T10:33:33Z | 47 | 3 | null |
[
"text2text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2025-04-14T03:43:38Z |
---
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-1B-Instruct
base_model_relation: quantized
pipeline_tag: text2text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: Llama-3.2-1B-Instruct. Fastest and most flexible models for self-serving.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide interface of HF libraries: transformers and diffusers with a single line of code
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
* Provide the best models and service for self-hosting.
> It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well.

-----
## Inference
To run inference with our models, simply replace the `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# Currently we require your HF token, as we use the original
# weights for some of the layers and the model configuration as well
model_name = "meta-llama/Llama-3.2-1B-Instruct"
hf_token = ''
device = torch.device("cuda")
# Create model
tokenizer = AutoTokenizer.from_pretrained(
model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
token=hf_token,
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id
# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
{
"role": "system",
"content": "You are a search bot, answer on user text queries."
},
{
"role": "user",
"content": prompt
}
]
chat_prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs = inputs.to(device)
with torch.inference_mode():
generate_ids = model.generate(**inputs, max_length=500)
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
generate_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: H100, L40s
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models just run these lines in your terminal:
```shell
pip install thestage
pip install elastic_models[nvidia]\
--index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple\
--extra-index-url https://pypi.nvidia.com\
--extra-index-url https://pypi.org/simple
pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
### Quality benchmarks
<!-- For quality evaluation we have used: #TODO link to github -->
| Metric/Model | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| MMLU | 45.5 | 45.9 | 45.9 | 46.2 | 46.2 | 24 |
| PIQA | 73.1 | 73.7 | 74.2 | 74.3 | 74.3 | 55.8 |
| Arc Challenge | 34.5 | 35.9 | 36.0 | 35.8 | 35.8 | 20.3 |
| Winogrande | 60.4 | 59.7 | 60.8 | 59.5 | 59.5 | 50.3 |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
### Latency benchmarks
__100 input/300 output; tok/s:__
| GPU/Model | S | M | L | XL | Original | W8A8, int8 |
|-----------|-----|---|---|----|----------|------------|
| H100 | 436 | 436 | 409 | 396 | 110 | 439 |
| L40s | 290 | 251 | 222 | 210 | 103 | 300 |
## Links
* __Platform__: [app.thestage.ai](https://app.thestage.ai)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: [email protected]
|
Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF
|
Theros
| 2025-06-16T10:33:19Z | 0 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Theros/Qwen2.5-ColdBrew-R1",
"llama-cpp",
"gguf-my-repo",
"base_model:SvalTek/Q2.5-ColdBrew-R1-Forge",
"base_model:quantized:SvalTek/Q2.5-ColdBrew-R1-Forge",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T10:32:53Z |
---
base_model: SvalTek/Q2.5-ColdBrew-R1-Forge
tags:
- merge
- mergekit
- lazymergekit
- Theros/Qwen2.5-ColdBrew-R1
- llama-cpp
- gguf-my-repo
---
# Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/Q2.5-ColdBrew-R1-Forge`](https://huggingface.co/SvalTek/Q2.5-ColdBrew-R1-Forge) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvalTek/Q2.5-ColdBrew-R1-Forge) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/Q2.5-ColdBrew-R1-Forge-Q4_K_M-GGUF --hf-file q2.5-coldbrew-r1-forge-q4_k_m.gguf -c 2048
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.5_epoch1
|
MinaMila
| 2025-06-16T10:32:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:30:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lefantom00/DeepHermes-3-iSMART
|
lefantom00
| 2025-06-16T10:28:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"base_model:quantized:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T05:47:08Z |
---
base_model: NousResearch/DeepHermes-3-Llama-3-8B-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: llama3
language:
- en
---
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.25_0.75_0.15_epoch1
|
MinaMila
| 2025-06-16T10:27:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:25:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.15_0.25_0.75_epoch2
|
MinaMila
| 2025-06-16T10:25:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T10:23:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Quit2003/MateQwen2.5-7b
|
Quit2003
| 2025-06-16T10:24:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:24:01Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Quit2003
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aieng-lab/t5-3b_smell-doc
|
aieng-lab
| 2025-06-16T10:24:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-3b",
"base_model:finetune:google-t5/t5-3b",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:22:22Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-3b
pipeline_tag: text-classification
---
# T5 3b for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-3b](https://huggingface.co/t5-3b)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
vulcan2506/llama3-medmcqa
|
vulcan2506
| 2025-06-16T10:24:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T08:27:08Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: transformers
model_name: llama3-medmcqa
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for llama3-medmcqa
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vulcan2506/llama3-medmcqa", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
StuffedPumpkins/AmparitaP5
|
StuffedPumpkins
| 2025-06-16T10:22:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-06-16T10:22:19Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/amparitap5_001600_00_20250611101740.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: amparitap5
license: mit
---
# AmparitaP5
<Gallery />
## Trigger words
You should use `amparitap5` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/StuffedPumpkins/AmparitaP5/tree/main) them in the Files & versions tab.
|
StuffedPumpkins/Irisirri
|
StuffedPumpkins
| 2025-06-16T10:21:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-06-16T10:21:15Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/irisirricharli_002076_00_20250601021205.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: irisirricharli
license: mit
---
# Irisirri
<Gallery />
## Trigger words
You should use `irisirricharli` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/StuffedPumpkins/Irisirri/tree/main) them in the Files & versions tab.
|
thesantoshbist/fwu-llm
|
thesantoshbist
| 2025-06-16T10:20:46Z | 53 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"fwu",
"santoshbist",
"farwestern-ai",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T11:55:40Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
tags:
- fwu
- santoshbist
- farwestern-ai
---
# FWU Assistant Model
## Model Description
This is a fine-tuned LLM specialized for Far Western University (FWU) information and educational assistance. The model has been customized to provide accurate information about FWU programs, courses, admissions, faculty, and campus resources while maintaining general conversational abilities.
## Training Data
This model was trained on:
- Conversations with students and faculty at FWU
- Academic information and educational resources
- General knowledge with emphasis on educational contexts
## Capabilities
- Answers questions about Far Western University programs and policies
- Provides assistance with academic inquiries
- Helps with general knowledge questions
- Maintains conversational context for natural interactions
## Use Cases
- Student information services
- Academic guidance
- Educational assistance
- University information desk
- Virtual campus guide
- Conversational AI
- Question answering
- Text generation
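A minimal quick-start sketch (an assumption, not official FWU documentation): since the card lists `transformers` and text generation, the standard pipeline should apply.
```python
from transformers import pipeline
# Hedged sketch; the prompt is illustrative only.
assistant = pipeline("text-generation", model="thesantoshbist/fwu-llm")
prompt = "What undergraduate programs does Far Western University offer?"
print(assistant(prompt, max_new_tokens=128)[0]["generated_text"])
```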
## Limitations
- Limited knowledge of events after training cutoff
- May occasionally provide incorrect information
- Not a replacement for official university guidance
- Doesn't have access to student records or private information
## Ethical Considerations
This model is intended for educational and informational purposes only. It should not be used for making critical academic or administrative decisions without verification from official university sources.
## Additional Information
Developed by Santosh Bist at Far Western University. For issues or feedback, please contact [email protected].
## Model Description
A custom 1B-parameter LLM built primarily for FWU (Far Western University).
## Author
Santosh Bist
## Version
1.0.0
|
aieng-lab/t5-large_smell-doc
|
aieng-lab
| 2025-06-16T10:20:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:19:58Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-large
pipeline_tag: text-classification
---
# T5 large for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-large](https://huggingface.co/t5-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
StuffedPumpkins/ChrisP5
|
StuffedPumpkins
| 2025-06-16T10:19:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-06-16T10:19:21Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/chrisp5_002150_00_20250611212807.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ChrisP5
license: mit
---
# ChrisP5
<Gallery />
## Trigger words
You should use `ChrisP5` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/StuffedPumpkins/ChrisP5/tree/main) them in the Files & versions tab.
|
aieng-lab/t5-base_smell-doc
|
aieng-lab
| 2025-06-16T10:19:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:18:57Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-base
pipeline_tag: text-classification
---
# T5 base for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-base](https://huggingface.co/t5-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/t5-small_smell-doc
|
aieng-lab
| 2025-06-16T10:18:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:18:29Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-small
pipeline_tag: text-classification
---
# T5 small for classifying smell documentation (multi-label)
This model classifies smell documentation as 'fragmented', 'tangled', 'excessive', 'bloated' or 'lazy'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-small](https://huggingface.co/t5-small)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF
|
Rivaidan
| 2025-06-16T10:18:06Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:quantized:inflatebot/MN-12B-Mag-Mell-R1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-09T14:14:25Z |
---
base_model: inflatebot/MN-12B-Mag-Mell-R1
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF
This model was converted to GGUF format from [`inflatebot/MN-12B-Mag-Mell-R1`](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF --hf-file mn-12b-mag-mell-r1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF --hf-file mn-12b-mag-mell-r1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF --hf-file mn-12b-mag-mell-r1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Rivaidan/MN-12B-Mag-Mell-R1-Q8_0-GGUF --hf-file mn-12b-mag-mell-r1-q8_0.gguf -c 2048
```
|
aiadir/OralSeg
|
aiadir
| 2025-06-16T10:16:45Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-13T08:06:58Z |
---
license: cc-by-nc-4.0
---
# Model Card for OralSeg
<!-- Provide a quick summary of what the model is/does. -->
OralSeg focuses on tooth and bone object detection and segmentation in dental CBCT imaging, and is applicable to surgical planning, orthodontic design, implant planning, and academic research.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
OralSeg is an advanced instance segmentation model based on dental CBCT, developed in our study, designed to accurately segment the maxilla, mandible, 32 teeth, and bilateral mandibular canals in large-scale dental CBCT images. The model is trained using high-precision annotations from expert dental professionals and is characterized by its efficiency, robustness, and accuracy, making it well-suited for clinical research and practical applications.
- **Developed by:** AIADIR
- **Funded by:** The University of Hong Kong, Faculty of Dentistry
- **Model type:** Dental CBCT image instance segmentation model
- **Language:** English
- **License:** OralSeg ยฉ 2025 by AIADIR is licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). The model is released for non-commercial use.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
OralSeg was trained and tested using 100 high-quality, high-resolution dental CBCT images. All images were manually annotated at pixel-level precision by expert dental professionals, covering:
- Maxilla
- Mandible
- 32 teeth (including wisdom teeth)
- Left and right mandibular canals
These detailed annotations provide the network with accurate and rich target contour information during training, significantly improving the model's segmentation accuracy and robustness.
## Model Architecture
The OralSeg model is based on the SwinUNETR architecture and incorporates our proprietary data augmentation strategies and training schemes to further enhance its performance in dental instance segmentation tasks (a minimal instantiation sketch follows the list below). This architecture features the following key components:
- **Multi-scale feature extraction:** Uses a Swin Transformer-based feature extraction module to process anatomical structures of various sizes in CBCT images.
- **UNet-style convolutional structure:** Features extracted by the encoder are progressively upsampled and fused via a symmetric decoder path, preserving spatial information and improving segmentation accuracy.
- **Transformer self-attention mechanism:** Effectively enhances the ability to model long-range pixel dependencies, improving detail capture and complex-structure segmentation in high-resolution medical images.
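A minimal instantiation sketch (an illustration, not the authors' training code): SwinUNETR is available in MONAI, and the patch size, feature size, and class count below are assumptions for demonstration only.
```python
import torch
from monai.networks.nets import SwinUNETR
# Assumed label count: background + maxilla + mandible + 32 teeth + 2 mandibular canals
NUM_CLASSES = 37
model = SwinUNETR(
    img_size=(96, 96, 96),   # sliding-window patch size (assumed; kwarg may vary across MONAI releases)
    in_channels=1,           # single-channel CBCT volume
    out_channels=NUM_CLASSES,
    feature_size=48,         # width of the Swin feature maps (assumed)
)
patch = torch.randn(1, 1, 96, 96, 96)  # one preprocessed CBCT patch
logits = model(patch)                  # -> (1, NUM_CLASSES, 96, 96, 96)
```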
## Application Scenarios
- **Clinical research:** Assists clinicians in quickly identifying and segmenting key structures such as teeth, bones, and nerve canals during CBCT examinations and surgical planning.
- **Implant and orthodontic planning:** Enables more convenient and visualized treatment planning, preoperative assessment, and simulation.
- **Digital education and training:** Provides standardized 3D anatomical segmentation results for medical education and research.
## Disclaimer and Compliance
- **Licensing:** This model is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Users are free to copy, distribute, display, and adapt the model for non-commercial purposes, including research, education, and medical training. When using this model, users must credit the original developers (AIADIR, Faculty of Dentistry, The University of Hong Kong).
- **Data compliance:** All training data used by the model were annotated by professional dental experts, sourced legally and ethically, with patient privacy fully protected.
- **Potential bias:** While the model performs well across various types and structures of teeth and bones, there may still be rare cases or distributional biases not fully covered. Further evaluation or fine-tuning is recommended for specific scenarios.
- **Safety notice:** Model outputs are for reference only and should not replace professional clinical diagnosis or decision-making. All use cases should be accompanied by expert dental evaluation and judgment.
|
meto/welfare
|
meto
| 2025-06-16T10:16:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T10:14:28Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** meto
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Darkhn/L3.3-70B-Animus-V1-GGUF
|
Darkhn
| 2025-06-16T10:15:52Z | 227 | 0 |
llama.cpp
|
[
"llama.cpp",
"gguf",
"base_model:Darkhn/L3.3-70B-Animus-V1",
"base_model:quantized:Darkhn/L3.3-70B-Animus-V1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-13T01:52:45Z |
---
library_name: llama.cpp
license: mit
tags:
- gguf
base_model:
- Darkhn/L3.3-70B-Animus-V1
---
# L3.3-70B-Animus-V1-GGUF
GGUF model files for `L3.3-70B-Animus-V1` (original base: `L3.3-70B-Animus-V1`).
This repository contains the following quantization: **Q5_K_M**.
## Files
- `L3.3-70B-Animus-V1-Q5_K_M.gguf`
Converted and quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
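A minimal example invocation (a sketch following the llama.cpp usage shown on other cards; the file name is taken from the Files list above):
```bash
llama-cli --hf-repo Darkhn/L3.3-70B-Animus-V1-GGUF --hf-file L3.3-70B-Animus-V1-Q5_K_M.gguf -p "The meaning to life and the universe is"
```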
|
aplux/YOLOv10b
|
aplux
| 2025-06-16T10:15:34Z | 0 | 0 | null |
[
"onnx",
"AIoT",
"QNN",
"object-detection",
"license:agpl-3.0",
"region:us"
] |
object-detection
| 2025-06-13T01:59:30Z |
---
license: agpl-3.0
pipeline_tag: object-detection
tags:
- AIoT
- QNN
---

## YOLOv10b: Object Detection
YOLOv10b is the large-scale model in the YOLOv10 family, designed for high-precision object detection tasks. Compared to the lightweight and medium variants, YOLOv10b features a deeper network architecture and more parameters, enabling it to capture richer feature representations and significantly improve detection of small objects and complex scenes. The model employs an advanced anchor-free mechanism, combined with multi-scale feature fusion and a powerful decoupled head design, enhancing detection accuracy and robustness. YOLOv10b is suitable for deployment on high-performance servers or advanced edge devices, widely used in autonomous driving, intelligent security, and industrial inspection applications with demanding requirements.
### Source model
- Input shape: 1x3x640x640
- Number of parameters: 19.62M
- Model size: 72.99M
- Output shape: 1x300x6
The source model can be found [here](https://github.com/THU-MIG/yolov10)
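A minimal inference sketch (hedged: the file name, preprocessing, and output layout are assumptions based on the shapes listed above):
```python
import numpy as np
import onnxruntime as ort
session = ort.InferenceSession("yolov10b.onnx")  # assumed file name
input_name = session.get_inputs()[0].name
image = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in for a preprocessed frame
outputs = session.run(None, {input_name: image})
dets = outputs[0]  # (1, 300, 6); per row: [x1, y1, x2, y2, score, class] (assumed layout)
print(dets.shape)
```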
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [AGPL-3.0](https://github.com/THU-MIG/yolov10/blob/main/LICENSE)
- Deployable Model: [AGPL-3.0](https://github.com/THU-MIG/yolov10/blob/main/LICENSE)
|
akaruineko/bad-good-classifier-ru_en
|
akaruineko
| 2025-06-16T10:14:51Z | 29 | 0 | null |
[
"safetensors",
"bert",
"text-classification",
"ru",
"en",
"base_model:cointegrated/rubert-tiny",
"base_model:finetune:cointegrated/rubert-tiny",
"license:mit",
"region:us"
] |
text-classification
| 2025-06-07T12:47:18Z |
---
license: mit
language:
- ru
- en
base_model:
- cointegrated/rubert-tiny
pipeline_tag: text-classification
---
# bad-good-text-classifier-ru-en
## Description
This is an effective and simple neural network that can classify words as positive or negative in both Russian and English.
It is suitable for filtering chats, comments, reviews and other texts to detect toxicity or negative content. However, the model is not ideal.
## Features
- Bilingual model (Russian and English, with a focus on Russian)
- Fast and accurate classification
- Easy integration into Python projects
- Trained on a custom dataset with "good" and "bad" labels
## Installation
Make sure you have Python 3.7+ and the Hugging Face `transformers` package installed:
```bash
pip install transformers torch
```
## Usage
Example of classifying a single text:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
model_name = "akaruineko/bad-good-classifier-ru_en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
def classify_word(word):
    # Softmax over the two logits: LABEL_0 = bad, LABEL_1 = good
    inputs = tokenizer(word, return_tensors="pt", truncation=True, padding=True)
    outputs = model(**inputs)
    probs = torch.softmax(outputs.logits, dim=1)
    return {"good": probs[0][1].item(), "bad": probs[0][0].item()}
def classify_text_by_words(text):
    # Classify each whitespace-separated word independently
    results = {}
    for w in text.split():
        results[w] = classify_word(w)
    return results
if __name__ == "__main__":
    sample_text = "Example text for classification"
    results = classify_text_by_words(sample_text)
    for word, scores in results.items():
        print(f"Word: '{word}' - Good: {scores['good']:.4f}, Bad: {scores['bad']:.4f}")
```
LABEL_0 = bad, LABEL_1 = good
## Training Data
The model is trained on two datasets labeled "good" and "bad".
The data is manually prepared and includes texts in Russian and English.
## Training Results
* Epochs: 12
* Minimum loss: \~0.03
* High accuracy on test dataset
## License
MIT License.
## Contact
Questions or suggestions? Write to: [[email protected]](mailto:[email protected])
---
Thanks for using this classifier!
Feel free to share feedback and improvement ideas.
|
electroglyph/Qwen3-Embedding-0.6B-onnx-int4
|
electroglyph
| 2025-06-16T10:14:40Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"onnx",
"qwen3",
"text-generation",
"transformers",
"sentence-similarity",
"feature-extraction",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:quantized:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-16T08:54:12Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B-Base
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# Qwen3-Embedding-0.6B-onnx-int4
This is an onnx version of https://huggingface.co/Qwen/Qwen3-Embedding-0.6B
This model has been dynamically quantized to int4/uint8, and further modified to output a uint8 1024 dim tensor.
You probably don't want to use this model on CPU. I've tested on a Ryzen CPU with VNNI, and it's the same speed as the base f32 model, but with 2% less retrieval accuracy. I'm posting it here in case it's useful for GPU users. Not sure if it actually is, but I already made it so here it is.
This model is compatible with qdrant fastembed, please note these details:
- Execute model without pooling and without normalization
- Pay attention to the example query format in the code below
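For direct use outside fastembed, a minimal onnxruntime sketch (the file name, input names, and instruction-style query format below are assumptions inferred from this card and the Qwen3 embedding family):
```python
import onnxruntime as ort
from transformers import AutoTokenizer
session = ort.InferenceSession("model.onnx")  # assumed artifact name; check the Files tab
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Embedding-0.6B")
# Qwen3 embedding queries are typically instruction-prefixed (assumed format)
query = "Instruct: Given a web search query, retrieve relevant passages\nQuery: what is vector search?"
enc = tokenizer(query, return_tensors="np")
outputs = session.run(None, {"input_ids": enc["input_ids"],
                             "attention_mask": enc["attention_mask"]})
embedding = outputs[0]  # per the card: a uint8 tensor with 1024 dims (pooling baked in)
print(embedding.shape, embedding.dtype)
```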
# Quantization method
I did an int4 quantization pass with block size == 128 (block size 32 was extremely close in accuracy), with the same nodes excluded as in my uint8 model.
Then I quantized the remaining non-excluded nodes to uint8 the same way as here: https://huggingface.co/electroglyph/Qwen3-Embedding-0.6B-onnx-uint8
<details>
<summary>Here are the nodes I excluded</summary>
```python
["/0/auto_model/ConstantOfShape",
"/0/auto_model/Constant_28",
"/0/auto_model/layers.25/post_attention_layernorm/Pow",
"/0/auto_model/layers.26/input_layernorm/Pow",
"/0/auto_model/layers.25/input_layernorm/Pow",
"/0/auto_model/layers.24/post_attention_layernorm/Pow",
"/0/auto_model/layers.24/input_layernorm/Pow",
"/0/auto_model/layers.23/post_attention_layernorm/Pow",
"/0/auto_model/layers.23/input_layernorm/Pow",
"/0/auto_model/layers.22/post_attention_layernorm/Pow",
"/0/auto_model/layers.22/input_layernorm/Pow",
"/0/auto_model/layers.3/input_layernorm/Pow",
"/0/auto_model/layers.4/input_layernorm/Pow",
"/0/auto_model/layers.3/post_attention_layernorm/Pow",
"/0/auto_model/layers.21/post_attention_layernorm/Pow",
"/0/auto_model/layers.5/input_layernorm/Pow",
"/0/auto_model/layers.4/post_attention_layernorm/Pow",
"/0/auto_model/layers.5/post_attention_layernorm/Pow",
"/0/auto_model/layers.6/input_layernorm/Pow",
"/0/auto_model/layers.6/post_attention_layernorm/Pow",
"/0/auto_model/layers.7/input_layernorm/Pow",
"/0/auto_model/layers.8/input_layernorm/Pow",
"/0/auto_model/layers.7/post_attention_layernorm/Pow",
"/0/auto_model/layers.26/post_attention_layernorm/Pow",
"/0/auto_model/layers.9/input_layernorm/Pow",
"/0/auto_model/layers.8/post_attention_layernorm/Pow",
"/0/auto_model/layers.21/input_layernorm/Pow",
"/0/auto_model/layers.20/post_attention_layernorm/Pow",
"/0/auto_model/layers.9/post_attention_layernorm/Pow",
"/0/auto_model/layers.10/input_layernorm/Pow",
"/0/auto_model/layers.20/input_layernorm/Pow",
"/0/auto_model/layers.11/input_layernorm/Pow",
"/0/auto_model/layers.10/post_attention_layernorm/Pow",
"/0/auto_model/layers.12/input_layernorm/Pow",
"/0/auto_model/layers.11/post_attention_layernorm/Pow",
"/0/auto_model/layers.12/post_attention_layernorm/Pow",
"/0/auto_model/layers.13/input_layernorm/Pow",
"/0/auto_model/layers.19/post_attention_layernorm/Pow",
"/0/auto_model/layers.13/post_attention_layernorm/Pow",
"/0/auto_model/layers.14/input_layernorm/Pow",
"/0/auto_model/layers.19/input_layernorm/Pow",
"/0/auto_model/layers.18/post_attention_layernorm/Pow",
"/0/auto_model/layers.14/post_attention_layernorm/Pow",
"/0/auto_model/layers.15/input_layernorm/Pow",
"/0/auto_model/layers.16/input_layernorm/Pow",
"/0/auto_model/layers.15/post_attention_layernorm/Pow",
"/0/auto_model/layers.18/input_layernorm/Pow",
"/0/auto_model/layers.17/post_attention_layernorm/Pow",
"/0/auto_model/layers.17/input_layernorm/Pow",
"/0/auto_model/layers.16/post_attention_layernorm/Pow",
"/0/auto_model/layers.27/post_attention_layernorm/Pow",
"/0/auto_model/layers.27/input_layernorm/Pow",
"/0/auto_model/norm/Pow",
"/0/auto_model/layers.25/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.25/post_attention_layernorm/Add",
"/0/auto_model/layers.26/input_layernorm/Add",
"/0/auto_model/layers.26/input_layernorm/ReduceMean",
"/0/auto_model/layers.25/input_layernorm/ReduceMean",
"/0/auto_model/layers.25/input_layernorm/Add",
"/0/auto_model/layers.24/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.24/post_attention_layernorm/Add",
"/0/auto_model/layers.24/input_layernorm/Add",
"/0/auto_model/layers.24/input_layernorm/ReduceMean",
"/0/auto_model/layers.23/post_attention_layernorm/Add",
"/0/auto_model/layers.23/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.23/input_layernorm/ReduceMean",
"/0/auto_model/layers.23/input_layernorm/Add",
"/0/auto_model/layers.22/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.22/post_attention_layernorm/Add",
"/0/auto_model/layers.26/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.26/post_attention_layernorm/Add",
"/0/auto_model/layers.22/input_layernorm/ReduceMean",
"/0/auto_model/layers.22/input_layernorm/Add",
"/0/auto_model/layers.3/input_layernorm/Add",
"/0/auto_model/layers.3/input_layernorm/ReduceMean",
"/0/auto_model/layers.21/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.21/post_attention_layernorm/Add",
"/0/auto_model/layers.4/input_layernorm/Add",
"/0/auto_model/layers.4/input_layernorm/ReduceMean",
"/0/auto_model/layers.3/post_attention_layernorm/Add",
"/0/auto_model/layers.3/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.5/input_layernorm/Add",
"/0/auto_model/layers.5/input_layernorm/ReduceMean",
"/0/auto_model/layers.4/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.4/post_attention_layernorm/Add",
"/0/auto_model/layers.5/post_attention_layernorm/Add",
"/0/auto_model/layers.5/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.6/input_layernorm/Add",
"/0/auto_model/layers.6/input_layernorm/ReduceMean",
"/0/auto_model/layers.6/post_attention_layernorm/Add",
"/0/auto_model/layers.6/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.7/input_layernorm/Add",
"/0/auto_model/layers.7/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/input_layernorm/Add",
"/0/auto_model/layers.7/post_attention_layernorm/Add",
"/0/auto_model/layers.7/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/input_layernorm/Add",
"/0/auto_model/layers.9/input_layernorm/ReduceMean",
"/0/auto_model/layers.8/post_attention_layernorm/Add",
"/0/auto_model/layers.8/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.21/input_layernorm/Add",
"/0/auto_model/layers.21/input_layernorm/ReduceMean",
"/0/auto_model/layers.20/post_attention_layernorm/Add",
"/0/auto_model/layers.20/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.9/post_attention_layernorm/Add",
"/0/auto_model/layers.10/input_layernorm/ReduceMean",
"/0/auto_model/layers.10/input_layernorm/Add",
"/0/auto_model/layers.20/input_layernorm/Add",
"/0/auto_model/layers.20/input_layernorm/ReduceMean",
"/0/auto_model/layers.11/input_layernorm/ReduceMean",
"/0/auto_model/layers.11/input_layernorm/Add",
"/0/auto_model/layers.10/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.10/post_attention_layernorm/Add",
"/0/auto_model/layers.12/input_layernorm/ReduceMean",
"/0/auto_model/layers.12/input_layernorm/Add",
"/0/auto_model/layers.11/post_attention_layernorm/Add",
"/0/auto_model/layers.11/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.12/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.12/post_attention_layernorm/Add",
"/0/auto_model/layers.13/input_layernorm/Add",
"/0/auto_model/layers.13/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/post_attention_layernorm/Add",
"/0/auto_model/layers.19/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.13/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.13/post_attention_layernorm/Add",
"/0/auto_model/layers.14/input_layernorm/Add",
"/0/auto_model/layers.14/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/input_layernorm/ReduceMean",
"/0/auto_model/layers.19/input_layernorm/Add",
"/0/auto_model/layers.18/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.18/post_attention_layernorm/Add",
"/0/auto_model/layers.14/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.14/post_attention_layernorm/Add",
"/0/auto_model/layers.15/input_layernorm/ReduceMean",
"/0/auto_model/layers.15/input_layernorm/Add",
"/0/auto_model/layers.16/input_layernorm/Add",
"/0/auto_model/layers.16/input_layernorm/ReduceMean",
"/0/auto_model/layers.15/post_attention_layernorm/Add",
"/0/auto_model/layers.15/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.18/input_layernorm/Add",
"/0/auto_model/layers.18/input_layernorm/ReduceMean",
"/0/auto_model/layers.17/post_attention_layernorm/Add",
"/0/auto_model/layers.17/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.17/input_layernorm/ReduceMean",
"/0/auto_model/layers.17/input_layernorm/Add",
"/0/auto_model/layers.16/post_attention_layernorm/Add",
"/0/auto_model/layers.16/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.27/post_attention_layernorm/Add",
"/0/auto_model/layers.27/post_attention_layernorm/ReduceMean",
"/0/auto_model/layers.27/input_layernorm/Add",
"/0/auto_model/layers.27/input_layernorm/ReduceMean",
"/0/auto_model/layers.27/self_attn/q_norm/Pow",
"/0/auto_model/layers.14/self_attn/k_norm/Pow",
"/0/auto_model/layers.26/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/self_attn/k_norm/Pow",
"/0/auto_model/layers.8/self_attn/k_norm/Pow",
"/0/auto_model/layers.24/self_attn/k_norm/Pow",
"/0/auto_model/layers.24/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/self_attn/k_norm/Pow",
"/0/auto_model/layers.23/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/self_attn/k_norm/Pow",
"/0/auto_model/layers.12/self_attn/k_norm/Pow",
"/0/auto_model/layers.13/self_attn/k_norm/Pow",
"/0/auto_model/layers.2/mlp/down_proj/MatMul",
"/0/auto_model/layers.3/post_attention_layernorm/Cast",
"/0/auto_model/layers.3/Add",
"/0/auto_model/layers.3/Add_1",
"/0/auto_model/layers.4/input_layernorm/Cast",
"/0/auto_model/layers.3/input_layernorm/Cast",
"/0/auto_model/layers.2/Add_1",
"/0/auto_model/layers.4/Add",
"/0/auto_model/layers.4/post_attention_layernorm/Cast",
"/0/auto_model/layers.5/input_layernorm/Cast",
"/0/auto_model/layers.4/Add_1",
"/0/auto_model/layers.5/post_attention_layernorm/Cast",
"/0/auto_model/layers.5/Add",
"/0/auto_model/layers.5/Add_1",
"/0/auto_model/layers.6/input_layernorm/Cast",
"/0/auto_model/layers.7/Add_1",
"/0/auto_model/layers.8/input_layernorm/Cast",
"/0/auto_model/layers.7/Add",
"/0/auto_model/layers.7/post_attention_layernorm/Cast",
"/0/auto_model/layers.6/Add",
"/0/auto_model/layers.6/post_attention_layernorm/Cast",
"/0/auto_model/layers.6/Add_1",
"/0/auto_model/layers.7/input_layernorm/Cast",
"/0/auto_model/layers.8/Add",
"/0/auto_model/layers.8/post_attention_layernorm/Cast",
"/0/auto_model/layers.9/input_layernorm/Cast",
"/0/auto_model/layers.8/Add_1",
"/0/auto_model/layers.9/post_attention_layernorm/Cast",
"/0/auto_model/layers.9/Add",
"/0/auto_model/layers.9/Add_1",
"/0/auto_model/layers.10/input_layernorm/Cast",
"/0/auto_model/layers.11/input_layernorm/Cast",
"/0/auto_model/layers.10/Add_1",
"/0/auto_model/layers.10/Add",
"/0/auto_model/layers.10/post_attention_layernorm/Cast",
"/0/auto_model/layers.11/Add",
"/0/auto_model/layers.11/post_attention_layernorm/Cast",
"/0/auto_model/layers.11/Add_1",
"/0/auto_model/layers.12/input_layernorm/Cast",
"/0/auto_model/layers.12/Add",
"/0/auto_model/layers.12/post_attention_layernorm/Cast",
"/0/auto_model/layers.12/Add_1",
"/0/auto_model/layers.13/input_layernorm/Cast",
"/0/auto_model/layers.13/Add",
"/0/auto_model/layers.13/post_attention_layernorm/Cast",
"/0/auto_model/layers.14/input_layernorm/Cast",
"/0/auto_model/layers.13/Add_1",
"/0/auto_model/layers.14/Add_1",
"/0/auto_model/layers.15/input_layernorm/Cast",
"/0/auto_model/layers.14/post_attention_layernorm/Cast",
"/0/auto_model/layers.14/Add",
"/0/auto_model/layers.15/post_attention_layernorm/Cast",
"/0/auto_model/layers.15/Add_1",
"/0/auto_model/layers.16/input_layernorm/Cast",
"/0/auto_model/layers.15/Add",
"/0/auto_model/layers.17/input_layernorm/Cast",
"/0/auto_model/layers.16/Add_1",
"/0/auto_model/layers.16/Add",
"/0/auto_model/layers.16/post_attention_layernorm/Cast",
"/0/auto_model/layers.19/input_layernorm/Cast",
"/0/auto_model/layers.18/Add_1",
"/0/auto_model/layers.18/input_layernorm/Cast",
"/0/auto_model/layers.17/Add_1",
"/0/auto_model/layers.17/Add",
"/0/auto_model/layers.17/post_attention_layernorm/Cast",
"/0/auto_model/layers.18/post_attention_layernorm/Cast",
"/0/auto_model/layers.18/Add",
"/0/auto_model/layers.19/Add",
"/0/auto_model/layers.19/post_attention_layernorm/Cast",
"/0/auto_model/layers.22/Add_1",
"/0/auto_model/layers.23/input_layernorm/Cast",
"/0/auto_model/layers.20/Add_1",
"/0/auto_model/layers.21/input_layernorm/Cast",
"/0/auto_model/layers.21/Add_1",
"/0/auto_model/layers.22/input_layernorm/Cast",
"/0/auto_model/layers.19/Add_1",
"/0/auto_model/layers.20/input_layernorm/Cast",
"/0/auto_model/layers.24/input_layernorm/Cast",
"/0/auto_model/layers.23/Add_1",
"/0/auto_model/layers.22/Add",
"/0/auto_model/layers.22/post_attention_layernorm/Cast",
"/0/auto_model/layers.21/Add",
"/0/auto_model/layers.21/post_attention_layernorm/Cast",
"/0/auto_model/layers.20/Add",
"/0/auto_model/layers.20/post_attention_layernorm/Cast",
"/0/auto_model/layers.23/post_attention_layernorm/Cast",
"/0/auto_model/layers.23/Add",
"/0/auto_model/layers.25/input_layernorm/Cast",
"/0/auto_model/layers.24/Add_1",
"/0/auto_model/layers.24/post_attention_layernorm/Cast",
"/0/auto_model/layers.24/Add",
"/0/auto_model/layers.25/Add",
"/0/auto_model/layers.25/post_attention_layernorm/Cast",
"/0/auto_model/layers.25/Add_1",
"/0/auto_model/layers.26/input_layernorm/Cast",
"/0/auto_model/layers.26/Add",
"/0/auto_model/layers.26/post_attention_layernorm/Cast",
"/0/auto_model/layers.21/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/Add_1",
"/0/auto_model/layers.27/input_layernorm/Cast",
"/0/auto_model/layers.27/Add",
"/0/auto_model/layers.27/post_attention_layernorm/Cast",
"/0/auto_model/norm/Add",
"/0/auto_model/norm/ReduceMean",
"/0/auto_model/layers.23/self_attn/k_norm/Pow",
"/0/auto_model/layers.21/self_attn/k_norm/Pow",
"/0/auto_model/layers.22/self_attn/k_norm/Pow",
"/0/auto_model/layers.10/self_attn/k_norm/Pow",
"/0/auto_model/layers.19/self_attn/q_norm/Pow",
"/0/auto_model/layers.2/mlp/Mul",
"/0/auto_model/layers.22/self_attn/q_norm/Pow",
"/0/auto_model/layers.11/self_attn/k_norm/Pow",
"/0/auto_model/layers.20/self_attn/q_norm/Pow",
"/0/auto_model/layers.20/self_attn/k_norm/Pow",
"/0/auto_model/layers.18/self_attn/q_norm/Pow",
"/0/auto_model/layers.17/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/mlp/down_proj/MatMul",
"/0/auto_model/layers.19/self_attn/k_norm/Pow",
"/0/auto_model/layers.27/Add_1",
"/0/auto_model/norm/Cast",
"/0/auto_model/layers.16/self_attn/k_norm/Pow",
"/0/auto_model/layers.18/self_attn/k_norm/Pow",
"/0/auto_model/layers.11/self_attn/q_norm/Pow",
"/0/auto_model/layers.9/self_attn/q_norm/Pow",
"/0/auto_model/layers.26/self_attn/q_norm/Add",
"/0/auto_model/layers.26/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.14/self_attn/k_norm/Add",
"/0/auto_model/layers.14/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.16/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/mlp/Mul",
"/0/auto_model/layers.27/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.27/self_attn/q_norm/Add",
"/0/auto_model/layers.9/self_attn/k_norm/Pow",
"/0/auto_model/layers.17/self_attn/k_norm/Pow",
"/0/auto_model/layers.26/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.26/self_attn/k_norm/Add",
"/0/auto_model/layers.25/self_attn/k_norm/Add",
"/0/auto_model/layers.25/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.13/self_attn/k_norm/Add",
"/0/auto_model/layers.13/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.10/self_attn/q_norm/Pow",
"/0/auto_model/layers.25/input_layernorm/Mul_1",
"/0/auto_model/layers.27/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.27/self_attn/k_norm/Add",
"/0/auto_model/layers.26/input_layernorm/Mul_1",
"/0/auto_model/layers.15/self_attn/q_norm/Pow",
"/0/auto_model/layers.12/self_attn/k_norm/Add",
"/0/auto_model/layers.12/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.25/self_attn/q_norm/Add",
"/0/auto_model/layers.25/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.24/input_layernorm/Mul_1",
"/0/auto_model/layers.12/self_attn/q_norm/Pow",
"/0/auto_model/layers.24/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.24/self_attn/q_norm/Add",
"/0/auto_model/layers.24/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.24/self_attn/k_norm/Add",
"/0/auto_model/layers.22/mlp/Mul",
"/0/auto_model/layers.2/post_attention_layernorm/Pow",
"/0/auto_model/layers.23/mlp/Mul",
"/0/auto_model/layers.24/mlp/Mul",
"/0/auto_model/layers.23/input_layernorm/Mul_1",
"/0/auto_model/layers.14/self_attn/q_norm/Pow",
"/0/auto_model/layers.14/self_attn/k_proj/MatMul",
"/0/auto_model/layers.14/self_attn/k_norm/Cast",
"/0/auto_model/layers.14/self_attn/Reshape_1",
"/0/auto_model/layers.21/mlp/Mul",
"/0/auto_model/layers.3/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.3/input_layernorm/Sqrt",
"/0/auto_model/layers.4/input_layernorm/Sqrt",
"/0/auto_model/layers.5/input_layernorm/Sqrt",
"/0/auto_model/layers.4/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.5/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.6/input_layernorm/Sqrt",
"/0/auto_model/layers.6/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.8/input_layernorm/Sqrt",
"/0/auto_model/layers.8/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.7/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.7/input_layernorm/Sqrt",
"/0/auto_model/layers.9/input_layernorm/Sqrt",
"/0/auto_model/layers.10/input_layernorm/Sqrt",
"/0/auto_model/layers.9/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.11/input_layernorm/Sqrt",
"/0/auto_model/layers.10/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.12/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.11/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.12/input_layernorm/Sqrt",
"/0/auto_model/layers.13/input_layernorm/Sqrt",
"/0/auto_model/layers.14/input_layernorm/Sqrt",
"/0/auto_model/layers.13/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.15/input_layernorm/Sqrt",
"/0/auto_model/layers.14/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.16/input_layernorm/Sqrt",
"/0/auto_model/layers.15/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.17/input_layernorm/Sqrt",
"/0/auto_model/layers.16/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.19/input_layernorm/Sqrt",
"/0/auto_model/layers.17/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.18/input_layernorm/Sqrt",
"/0/auto_model/layers.18/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.19/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.23/input_layernorm/Sqrt",
"/0/auto_model/layers.20/input_layernorm/Sqrt",
"/0/auto_model/layers.21/input_layernorm/Sqrt",
"/0/auto_model/layers.22/input_layernorm/Sqrt",
"/0/auto_model/layers.22/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.24/input_layernorm/Sqrt",
"/0/auto_model/layers.20/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.21/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.23/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.25/input_layernorm/Sqrt",
"/0/auto_model/layers.24/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.25/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.26/input_layernorm/Sqrt",
"/0/auto_model/layers.26/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.15/self_attn/k_norm/Pow",
"/0/auto_model/layers.27/input_layernorm/Sqrt",
"/0/auto_model/layers.27/post_attention_layernorm/Sqrt",
"/0/auto_model/layers.2/input_layernorm/Pow",
"/0/auto_model/layers.26/mlp/Mul",
"/0/auto_model/layers.23/self_attn/q_norm/Add",
"/0/auto_model/layers.23/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.13/self_attn/q_norm/Pow",
"/0/auto_model/layers.21/self_attn/q_norm/Add",
"/0/auto_model/layers.21/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.6/self_attn/q_norm/Pow",
"/0/auto_model/layers.27/self_attn/Reshape_7",
"/0/auto_model/layers.27/self_attn/MatMul_1",
"/0/auto_model/layers.27/self_attn/Transpose_4",
"/0/auto_model/layers.26/self_attn/Expand_1",
"/0/auto_model/layers.26/self_attn/Unsqueeze_19",
"/0/auto_model/layers.26/self_attn/v_proj/MatMul",
"/0/auto_model/layers.26/self_attn/Transpose_2",
"/0/auto_model/layers.26/self_attn/Reshape_6",
"/0/auto_model/layers.26/self_attn/Reshape_2",
"/0/auto_model/layers.11/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.11/self_attn/k_norm/Add",
"/0/auto_model/layers.22/input_layernorm/Mul_1",
"/0/auto_model/layers.25/mlp/Mul",
"/0/auto_model/layers.8/self_attn/k_norm/Cast",
"/0/auto_model/layers.8/self_attn/k_proj/MatMul",
"/0/auto_model/layers.8/self_attn/Reshape_1",
"/0/auto_model/layers.21/input_layernorm/Mul_1",
"/0/auto_model/layers.5/self_attn/q_norm/Pow",
"/0/auto_model/layers.22/self_attn/q_norm/ReduceMean",
"/0/auto_model/layers.22/self_attn/q_norm/Add",
"/0/auto_model/layers.22/mlp/down_proj/MatMul",
"/0/auto_model/layers.23/self_attn/k_norm/ReduceMean",
"/0/auto_model/layers.23/self_attn/k_norm/Add",
"/0/auto_model/layers.23/mlp/down_proj/MatMul",
"/0/auto_model/layers.26/mlp/down_proj/MatMul",
"/0/auto_model/layers.1/self_attn/Add_2",
"/0/auto_model/layers.2/self_attn/Add_2",
"/0/auto_model/layers.6/self_attn/Add_2",
"/0/auto_model/layers.11/self_attn/Add_2",
"/0/auto_model/layers.12/self_attn/Add_2",
"/0/auto_model/layers.16/self_attn/Add_2",
"/0/auto_model/layers.21/self_attn/Add_2",
"/0/auto_model/layers.24/self_attn/Add_2",
"/0/auto_model/layers.0/self_attn/Add_2",
"/0/auto_model/layers.8/self_attn/Add_2",
"/0/auto_model/layers.13/self_attn/Add_2",
"/0/auto_model/layers.26/self_attn/Add_2",
"/0/auto_model/layers.3/self_attn/Add_2",
"/0/auto_model/layers.15/self_attn/Add_2",
"/0/auto_model/layers.25/self_attn/Add_2",
"/0/auto_model/layers.4/self_attn/Add_2",
"/0/auto_model/layers.14/self_attn/Add_2",
"/0/auto_model/layers.22/self_attn/Add_2",
"/0/auto_model/layers.9/self_attn/Add_2",
"/0/auto_model/layers.23/self_attn/Add_2",
"/0/auto_model/layers.10/self_attn/Add_2",
"/0/auto_model/layers.5/self_attn/Add_2",
"/0/auto_model/layers.19/self_attn/Add_2",
"/0/auto_model/layers.7/self_attn/Add_2",
"/0/auto_model/layers.27/self_attn/Add_2",
"/0/auto_model/layers.18/self_attn/Add_2",
"/0/auto_model/layers.20/self_attn/Add_2",
"/0/auto_model/layers.17/self_attn/Add_2",
"/0/auto_model/Slice_1",
"/0/auto_model/layers.5/self_attn/Slice_4",
"/0/auto_model/layers.12/self_attn/Slice_4",
"/0/auto_model/layers.18/self_attn/Slice_4",
"/0/auto_model/layers.3/self_attn/Slice_4",
"/0/auto_model/layers.11/self_attn/Slice_4",
"/0/auto_model/layers.22/self_attn/Slice_4",
"/0/auto_model/Expand",
"/0/auto_model/layers.4/self_attn/Slice_4",
"/0/auto_model/Slice_2",
"/0/auto_model/layers.8/self_attn/Slice_4",
"/0/auto_model/layers.2/self_attn/Slice_4",
"/0/auto_model/layers.15/self_attn/Slice_4",
"/0/auto_model/layers.26/self_attn/Slice_4",
"/0/auto_model/layers.24/self_attn/Slice_4",
"/0/auto_model/Expand_1",
"/0/auto_model/layers.14/self_attn/Slice_4",
"/0/auto_model/layers.21/self_attn/Slice_4",
"/0/auto_model/layers.1/self_attn/Slice_4",
"/0/auto_model/Reshape_2",
"/0/auto_model/layers.19/self_attn/Slice_4",
"/0/auto_model/Slice",
"/0/auto_model/layers.6/self_attn/Slice_4",
"/0/auto_model/layers.0/self_attn/Slice_4",
"/0/auto_model/layers.25/self_attn/Slice_4",
"/0/auto_model/Unsqueeze_4",
"/0/auto_model/layers.10/self_attn/Slice_4",
"/0/auto_model/layers.23/self_attn/Slice_4",
"/0/auto_model/layers.17/self_attn/Slice_4",
"/0/auto_model/Where_1",
"/0/auto_model/layers.27/self_attn/Slice_4",
"/0/auto_model/layers.20/self_attn/Slice_4",
"/0/auto_model/Add",
"/0/auto_model/Mul",
"/0/auto_model/layers.7/self_attn/Slice_4",
"/0/auto_model/layers.13/self_attn/Slice_4",
"/0/auto_model/layers.9/self_attn/Slice_4",
"/0/auto_model/layers.16/self_attn/Slice_4",
"/0/auto_model/Unsqueeze_3",
"/0/auto_model/ScatterND"]
```
</details>
# Benchmarks
## Speed
Method: embedding a big chunk of text, 10 runs each.

- dynamic_int4.onnx: 45.37 s (this model)
- opt_f32.onnx: 46.07 s (base f32 model preprocessed for quantization)
- dynamic_uint8.onnx: 34.61 s (probably the one you want to use on CPU)

Verdict: on CPU this model is barely faster than f32 and clearly slower than uint8. Let me know how it performs on GPU, please.
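If you want to reproduce the timing, here is a minimal sketch of the method (assumptions: the `add_custom_model` registration from the inference example further down has already run, and `big_chunk.txt` stands in for the text I used):
```python
# Minimal timing sketch. Assumes the custom model was registered with
# fastembed via add_custom_model (see the inference example below).
import time

from fastembed import TextEmbedding

model = TextEmbedding(model_name="electroglyph/Qwen3-Embedding-0.6B-onnx-int4")
text = open("big_chunk.txt").read()  # placeholder for the benchmark text

start = time.perf_counter()
for _ in range(10):
    list(model.embed([text]))  # embed() is lazy; list() forces the work
print(f"Seconds elapsed: {time.perf_counter() - start:.2f}")
```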
## Accuracy
I used beir-qdrant with the SciFact dataset.
The retrieval scores below aren't the greatest, but they stay close to the f32 baseline.
I welcome any additional benchmarks by the community; please feel free to share further results.
If someone wants to sponsor me with an NVIDIA GPU, I can turn these model experiments around much faster and explore some different quantization strategies.
onnx f32 model with f32 output (baseline):
```
ndcg: {'NDCG@1': 0.57, 'NDCG@3': 0.65655, 'NDCG@5': 0.68177, 'NDCG@10': 0.69999, 'NDCG@100': 0.72749, 'NDCG@1000': 0.73301}
recall: {'Recall@1': 0.53828, 'Recall@3': 0.71517, 'Recall@5': 0.77883, 'Recall@10': 0.83056, 'Recall@100': 0.95333, 'Recall@1000': 0.99667}
precision: {'P@1': 0.57, 'P@3': 0.26111, 'P@5': 0.17467, 'P@10': 0.09467, 'P@100': 0.01083, 'P@1000': 0.00113}
```
onnx dynamic int4/uint8 model with f32 output (this model's parent):
```
ndcg: {'NDCG@1': 0.55333, 'NDCG@3': 0.6491, 'NDCG@5': 0.6674, 'NDCG@10': 0.69277, 'NDCG@100': 0.7183, 'NDCG@1000': 0.72434}
recall: {'Recall@1': 0.52161, 'Recall@3': 0.71739, 'Recall@5': 0.7645, 'Recall@10': 0.83656, 'Recall@100': 0.95, 'Recall@1000': 0.99667}
precision: {'P@1': 0.55333, 'P@3': 0.26222, 'P@5': 0.17067, 'P@10': 0.095, 'P@100': 0.0108, 'P@1000': 0.00113}
```
onnx dynamic int4/uint8 model with uint8 output (this model):
```
ndcg: {'NDCG@1': 0.55333, 'NDCG@3': 0.64613, 'NDCG@5': 0.67406, 'NDCG@10': 0.68834, 'NDCG@100': 0.71482, 'NDCG@1000': 0.72134}
recall: {'Recall@1': 0.52161, 'Recall@3': 0.70961, 'Recall@5': 0.77828, 'Recall@10': 0.81822, 'Recall@100': 0.94333, 'Recall@1000': 0.99333}
precision: {'P@1': 0.55333, 'P@3': 0.25889, 'P@5': 0.17533, 'P@10': 0.09333, 'P@100': 0.01073, 'P@1000': 0.00112}
```
# Example inference/benchmark code and how to use the model with Fastembed
After installing beir-qdrant, make sure to upgrade fastembed.
```python
# pip install qdrant_client beir-qdrant
# pip install -U fastembed
from fastembed import TextEmbedding
from fastembed.common.model_description import PoolingType, ModelSource
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from qdrant_client import QdrantClient
from qdrant_client.models import Datatype
from beir_qdrant.retrieval.models.fastembed import DenseFastEmbedModelAdapter
from beir_qdrant.retrieval.search.dense import DenseQdrantSearch

# Register this repo as a custom fastembed model.
TextEmbedding.add_custom_model(
    model="electroglyph/Qwen3-Embedding-0.6B-onnx-int4",
    pooling=PoolingType.DISABLED,
    normalization=False,
    sources=ModelSource(hf="electroglyph/Qwen3-Embedding-0.6B-onnx-int4"),
    dim=1024,
    model_file="dynamic_int4.onnx",
)

dataset = "scifact"
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{}.zip".format(dataset)
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# IMPORTANT: USE THIS (OR A SIMILAR) QUERY FORMAT WITH THIS MODEL:
for k in queries.keys():
    queries[k] = (
        f"Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: {queries[k]}"
    )

qdrant_client = QdrantClient("http://localhost:6333")

model = DenseQdrantSearch(
    qdrant_client,
    # model_name must match the name registered via add_custom_model above.
    model=DenseFastEmbedModelAdapter(model_name="electroglyph/Qwen3-Embedding-0.6B-onnx-int4"),
    collection_name="scifact-qwen3-uint8",
    initialize=True,
    datatype=Datatype.UINT8,
)

retriever = EvaluateRetrieval(model)
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
print(f"ndcg: {ndcg}\nrecall: {recall}\nprecision: {precision}")
```
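Outside the benchmark, plain embedding is the usual fastembed flow. A minimal sketch, assuming the `add_custom_model` registration above has already run; note that only queries get the instruction prefix, documents are embedded as-is (matching the benchmark code above):
```python
from fastembed import TextEmbedding

model = TextEmbedding(model_name="electroglyph/Qwen3-Embedding-0.6B-onnx-int4")

docs = ["Quantization trades a little accuracy for smaller, faster models."]
query = (
    "Instruct: Given a web search query, retrieve relevant passages that answer the query"
    "\nQuery: does quantization hurt accuracy?"
)

doc_embs = list(model.embed(docs))            # documents: no instruction prefix
query_emb = next(iter(model.embed([query])))  # queries: instruction prefix as above
```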
|
sizzlebop/gte-base-dmr-Q8_0-GGUF
|
sizzlebop
| 2025-06-16T10:14:20Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"llama-cpp",
"gguf-my-repo",
"base_model:McGill-NLP/gte-base-dmr",
"base_model:quantized:McGill-NLP/gte-base-dmr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-16T10:14:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- llama-cpp
- gguf-my-repo
base_model: McGill-NLP/gte-base-dmr
---
# sizzlebop/gte-base-dmr-Q8_0-GGUF
This model was converted to GGUF format from [`McGill-NLP/gte-base-dmr`](https://huggingface.co/McGill-NLP/gte-base-dmr) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/McGill-NLP/gte-base-dmr) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sizzlebop/gte-base-dmr-Q8_0-GGUF --hf-file gte-base-dmr-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sizzlebop/gte-base-dmr-Q8_0-GGUF --hf-file gte-base-dmr-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sizzlebop/gte-base-dmr-Q8_0-GGUF --hf-file gte-base-dmr-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sizzlebop/gte-base-dmr-Q8_0-GGUF --hf-file gte-base-dmr-q8_0.gguf -c 2048
```
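Since gte-base-dmr is an embedding model, the generation prompts above are mainly smoke tests. For actual embedding vectors, a minimal sketch (the exact flag names are assumptions based on recent llama.cpp builds):
```bash
# Get an embedding vector directly (llama-embedding ships with llama.cpp):
./llama-embedding --hf-repo sizzlebop/gte-base-dmr-Q8_0-GGUF --hf-file gte-base-dmr-q8_0.gguf \
  -p "A sentence to embed"

# Or serve embeddings over HTTP (OpenAI-compatible /v1/embeddings endpoint):
./llama-server --hf-repo sizzlebop/gte-base-dmr-Q8_0-GGUF --hf-file gte-base-dmr-q8_0.gguf --embeddings
```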
|
veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-fixed
|
veddhanth
| 2025-06-16T10:14:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-16T10:08:03Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks sneaker
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-fixed
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-fixed LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks sneaker to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-fixed/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (not part of the original card): load the SDXL base
# pipeline, attach these LoRA weights, and generate with the trigger prompt.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-fixed")

image = pipe("a photo of sks sneaker").images[0]
image.save("sks_sneaker.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|