| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-30 00:44:18) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 536 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-30 00:43:43) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
himedia/Phi-3.5-mini-instruct
|
himedia
| 2025-06-22T14:21:36Z | 0 | 0 | null |
[
"safetensors",
"llama",
"financial",
"credit-rating",
"korean",
"unsloth",
"fine-tuned",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:quantized:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-22T13:43:57Z |
---
language: ko
license: apache-2.0
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- financial
- credit-rating
- korean
- llama
- unsloth
- fine-tuned
model_name: Phi-3.5-mini-instruct-0622
pipeline_tag: text-generation
---
# Phi-3.5-mini-instruct-0622
## Model Overview
Phi-3.5-mini-instruct-0622 is a Korean language model designed specifically for financial credit rating.
**Base model**: unsloth/Phi-3.5-mini-instruct
**Dataset**: himedia/financial_dummy_data_v4
**Training method**: LoRA (Low-Rank Adaptation)
**Training date**: 2025-06-22 14:21:33
## 📊 Training Results
- **Final Training Loss**: 0.1576
- **Final Validation Loss**: N/A
- **Best Validation Loss**: N/A (step None)
- **Overall Improvement**: 81.8
- **Training Time**: 84.06 minutes
## Hyperparameters
- **Learning Rate**: 0.0002
- **Max Steps**: 1000
- **Batch Size**: 2
- **Gradient Accumulation**: 8
- **LoRA r**: 64
- **LoRA alpha**: 64
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
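These settings map onto a PEFT/`transformers` configuration roughly as follows. This is a minimal sketch; the target modules and output directory are assumptions not stated in the card.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Hypothetical reconstruction of the LoRA setup listed above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: attention projections
)

training_args = TrainingArguments(
    output_dir="phi35-credit-rating",   # placeholder output directory
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    max_steps=1000,
    warmup_steps=5,
)
```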
## 🔧 Memory Usage
- **GPU**: NVIDIA RTX A5000
- **Peak Memory**: 3.8100 GB
- **Memory Usage**: 16.1%
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/Phi-3.5-mini-instruct")
model = AutoModelForCausalLM.from_pretrained("himedia/Phi-3.5-mini-instruct")
# Simple inference example (the Korean prompt below means "Please assess the customer's credit rating:")
prompt = "고객의 신용등급을 평가해주세요:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## 📊 Training Data Files
This repository includes the following training-related files:
- `training_log.json`: full training log (JSON format)
- `Phi-3.5-mini-instruct-0622_0622_training_curves.png`: training-curve visualization
## Repository Name Breakdown
```
Phi-3.5-mini-instruct = phi_3.5b_mini_instruct-lr00002-bs2-r64-steps1000
```
- `phi_3.5b_mini_instruct`: Base model name
- `lr00002`: Learning Rate
- `bs2`: Batch Size
- `r64`: LoRA rank
- `steps1000`: Training steps
- `2025-06-22 14:21:33`: Training timestamp
## Performance
This model has been fine-tuned on Korean financial text and specializes in credit-rating question answering.
## License
Apache 2.0
|
RegenAI/umt5-small-turkish-summary
|
RegenAI
| 2025-06-22T14:19:20Z | 0 | 0 | null |
[
"safetensors",
"mt5",
"https://github.com/RegenAI25/UMT5-Small-Turkish-Abstractive-Summarization-Model",
"text2text-generation",
"tr",
"base_model:google/umt5-small",
"base_model:finetune:google/umt5-small",
"license:mit",
"region:us"
] |
text2text-generation
| 2025-06-22T10:52:22Z |
---
license: mit
language:
- tr
metrics:
- rouge
- meteor
base_model:
- google/umt5-small
pipeline_tag: text2text-generation
tags:
- >-
https://github.com/RegenAI25/UMT5-Small-Turkish-Abstractive-Summarization-Model
---
# 📝 umt5-small Turkish Abstractive Summarization
## 🧠 Abstract
This model presents a fine-tuned version of `umt5-small`, specifically adapted for **abstractive summarization** of Turkish-language texts. Leveraging the multilingual capabilities of the original umT5 architecture, the model has been trained on a high-quality Turkish summarization dataset containing diverse news articles and their human-written summaries. The goal of this model is to generate coherent, concise, and semantically accurate summaries from long-form Turkish content, making it suitable for real-world applications such as news aggregation, document compression, and information retrieval.
Despite its small size, the model demonstrates strong performance across standard evaluation metrics, including **ROUGE** and **METEOR**, achieving results within the commonly accepted thresholds for Turkish-language summarization tasks. It strikes a practical balance between efficiency and quality, making it ideal for use in resource-constrained environments.
---
## 🔍 Metric Interpretation (Specific to Turkish)
- **ROUGE-1:** Measures unigram (word-level) overlap between the generated summary and the reference text. For Turkish summarization tasks, scores below **0.30** generally indicate weak lexical alignment, while scores above **0.40** are considered strong and fluent outputs.
- **ROUGE-2:** Evaluates bigram (two-word sequence) overlap. Since Turkish is an agglutinative language with rich morphology, achieving high bigram overlap is more difficult. Therefore, a range between **0.15–0.30** is considered average and acceptable for Turkish.
- **ROUGE-L:** Captures the longest common subsequence, reflecting sentence-level fluency and structure similarity. Acceptable ranges for Turkish are generally close to ROUGE-1, typically between **0.28–0.40**.
- **METEOR:** Unlike ROUGE, METEOR also incorporates semantic similarity and synonymy. It performs relatively well on morphologically rich languages like Turkish. Scores in the range of **0.25–0.38** are commonly observed and considered good in Turkish summarization settings.
---
## 📊 Acceptable Metric Ranges And Performance Metrics
| Metric | Score | Acceptable Range | Interpretation |
|---------|-------|------------------|---------------------------------|
| ROUGE-1 | 0.42 | 0.30 – 0.45 | Weak < 0.30, Good > 0.40 |
| ROUGE-2 | 0.26 | 0.15 – 0.30 | Typical for bigram-level |
| ROUGE-L | 0.36 | 0.28 – 0.40 | Similar to ROUGE-1 |
| METEOR | 0.33 | 0.25 – 0.38 | Balanced lexical & semantic match|
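Scores like those above can be reproduced in spirit with the `evaluate` library. The snippet below is a minimal sketch: the prediction/reference pair is a placeholder rather than the model's test set, and the exact evaluation setup used for this table may differ.

```python
import evaluate

rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

# Placeholder pair; substitute the model's generated summaries and the gold references.
predictions = ["TUSAŞ'ın insansız hava araçları Türk Hava Kuvvetleri envanterine girdi."]
references = ["TUSAŞ'ın geliştirdiği yeni insansız hava araçları, Türk Hava Kuvvetleri envanterinde yer aldı."]

print(rouge.compute(predictions=predictions, references=references))   # rouge1, rouge2, rougeL, rougeLsum
print(meteor.compute(predictions=predictions, references=references))  # meteor
```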
---
## 🚀 Usage Example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
tokenizer = AutoTokenizer.from_pretrained("your_username/umt5-small-turkish-summary")
model = AutoModelForSeq2SeqLM.from_pretrained("your_username/umt5-small-turkish-summary")
text = "Insert Turkish text to summarize."
inputs = tokenizer(text, return_tensors="pt", max_length=1024, truncation=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
with torch.no_grad():
outputs = model.generate(
input_ids=inputs["input_ids"].to(device),
attention_mask=inputs["attention_mask"].to(device),
do_sample=True,
num_beams=8,
top_k=40,
top_p=0.97,
max_new_tokens=100,
no_repeat_ngram_size=1,
length_penalty=1.16,
early_stopping=True
)
summaries = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(summaries)
```
---
| Original Text | Generated Summary |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Text 1:** TUSAŞ tarafından son yıllarda artık Türkiye için dış kaynaklardan bağımsız özgün tasarım hava araçları da yapılmaya başlanmıştır. Bunlardan ilki TUSAŞ ZİU adlı zirai ilaçlama uçağı tamamen TUSAŞ tarafından tasarlanmış ve uçmuştur. Bunu takiben yürürlükte pek çok özgün tasarım projesi mevcuttur. 2008 yılı itibarı ile Gözcü (antiterör amaçlı insansız gözlem uçağı), Keklik ve Turna-g (her ikisi de avcı pilotları için hedef uçak) insansız uçakları TUSAŞ tasarımı ve üretimi uçaklar olarak Türk Hava Kuvvetleri envanterinde yer almaktadır. Gözcü'nün yeni bir modeli hâlen tasarlanmaktadır. İnsansız hava araçları dışında HÜRKUŞ adlı bir eğitim uçağı (jet uçağı ile aynı kontrollere sahip ama jet motoru içermeyen düşük işletme maliyetli bir eğitim uçağı) tasarımı ve geliştirilmesi tamamlanmış seri üretimine başlanmıştır. Taktik amaçlı insansız hava aracı ANKA'nın geliştirilmesi devam etmektedir. T-38 ve C-130 Hercules uçaklarının yenilenmesi gerçekleştirilmektedir. Göktürk-1 keşif ve gözlem uydusunun TÜBİTAK UZAY ile birlikte entegrasyonunun gerçekleştirildiği Uzay Sistemleri Entegrasyon ve Test Merkezi(USET), TUSAŞ'a bağlı olarak işletilmektedir. | **Summary 1:** TUSAŞ'ın geliştirdiği yeni insansız hava araçları, Türk Hava Kuvvetleri envanterinde yer alarak savunma sanayisinde önemli bir adım attı. |
| **Text 2:** Kuruluş yıllarından bu yana ileri teknolojiye dayalı olarak, programlı bir şekilde müşteri ve ürün yelpazesini genişletmiş olup, bugün modern elektronik cihaz ve sistemler geliştiren, üreten, tesis eden, pazarlayan ve satış sonrası hizmetlerini yürüten entegre bir elektronik sanayii kuruluşu hâline gelmiş ASELSAN,[1] farklı yatırım ve üretim yapısı gerektiren proje konularına bağlı olarak Aviyonik ve Güdüm Sistemleri (AGS), Haberleşme ve Bilgi Teknolojileri (HBT), Savunma Sistem Teknolojileri (SST), Radar Elektronik Harp (REHİS), Mikroelektronik ve Elektro-Optik (MGEO) ve Ulaşım, Güvenlik, Enerji, Sağlık, Otomasyon (UGES) olmak üzere altı ayrı sektör başkanlığını yapısında bulundurmaktadır. Ankara'da Macunköy, Akyurt, Gölbaşı[7], Temelli ve Teknokent'te yerleşik beş ve İstanbul Teknopark olmak üzere toplam 6 ayrı tesiste üretim ve mühendislik faaliyetlerini sürdürmekte olan ASELSAN'ın Genel Müdürlüğü Ankara, Macunköy'de bulunmaktadır. | **Summary 2:** ASELSAN, modern elektronik cihaz ve sistemler geliştirmek üzere 6 farklı tesiste üretim kapasitesini artırarak sektördeki konumunu güçlendiriyor. |
| **Text 3:** Özgürlük ve bağımsızlık benim karakterimdir" diyen Atatürk, modern Türkiye'nin kuruluşunda bu düşüncesinden güç almıştır. Bağımsız olmak, başkaca güçlerin güdümüne girmemek, diğer devletlerle birlikte oluşan topluluklarda Türkiye'nin millî çıkarlarının gerektirdiği biçimde davranabilmektir. Atatürk için tam bağımsızlık "siyasi, malî, iktisadî, adlî, askerî, kültürel ve benzeri her hususta" gerçekleşmelidir. Bunun için birçok devrim gerçekleştirmiştir. Bu doğrultuda Atatürk, başlattığı Türk Kurtuluş Savaşı'nın parolasını ise "Ya istiklâl ya ölüm!" olarak belirlemiştir. | **Summary 3:** Atatürk, modern Türkiye'nin kuruluşunda 'Özgürlük ve bağımsızlık benim karakterimdir' diyerek güçlü bir duruş sergiledi. |
|
LeoVNT/aiphoto
|
LeoVNT
| 2025-06-22T14:16:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T13:10:24Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: leonardo
---
# Aiphoto
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `leonardo` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "leonardo",
"lora_weights": "https://huggingface.co/LeoVNT/aiphoto/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('LeoVNT/aiphoto', weight_name='lora.safetensors')
image = pipeline('leonardo').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/LeoVNT/aiphoto/discussions) to add images that show off what you’ve made with this LoRA.
|
harshasurampudi/upsc-classifier-gemma2-4b
|
harshasurampudi
| 2025-06-22T14:13:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T14:13:25Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harshasurampudi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RayTsai/Kaggle_3_GRPO_Neutrality
|
RayTsai
| 2025-06-22T14:10:46Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"lora",
"zh",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T13:53:10Z |
---
language: zh
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
- lora
- peft
library_name: peft
---
# Chinese LLM MCQ Model with Neutrality Optimization - KAGGLE #3
This is the model for KAGGLE #3 of the NYCU Deep Learning course. It is trained from Qwen2.5-7B-Instruct with GRPO (Group Relative Policy Optimization) reinforcement learning, focusing on improving the neutrality and reasoning quality of the model's answers.
## Model Information
* **Base model**: Qwen/Qwen2.5-7B-Instruct
* **Fine-tuning method**: LoRA (r=16, alpha=32) + GRPO
* **Task**: Chinese multiple-choice question answering (with an emphasis on neutral reasoning)
* **Training data**: English-translated reasoning data (about 10,500 examples, 35% of the full dataset)
* **Highlights**: neutrality optimization, reduced bias, multi-perspective thinking
## Key Features
1. **Enhanced neutrality**: GRPO training raises the neutrality score to 0.82 (out of 1.0)
2. **Reasoning consistency**: 88% answer consistency across different prompts
3. **Reduced bias**: markedly fewer absolute statements and emotional expressions
4. **Multi-perspective analysis**: the model is trained to consider questions from multiple viewpoints
## How to Use
### Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
# Load the GRPO fine-tuned LoRA adapter
model = PeftModel.from_pretrained(
    base_model,
    "RayTsai/Kaggle_3_GRPO_Neutrality"
)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("RayTsai/Kaggle_3_GRPO_Neutrality")
# Neutrality prompt template (in Chinese: "Analyse the following question from multiple
# perspectives ... give a balanced analysis, consider different viewpoints, then answer.")
prompt = """請從多元視角分析以下問題:
問題:{your_question}
選項:
A. {option_a}
B. {option_b}
C. {option_c}
D. {option_d}
請提供平衡的分析,考慮不同觀點後給出答案。"""
# Generate a response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=512,
temperature=0.7,
top_p=0.9,
do_sample=True
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
### Two-Stage Inference System (Recommended)
For best results, a two-stage inference system is recommended:
```python
# Stage 1: generate detailed reasoning with the GRPO model
reasoning = generate_reasoning_with_grpo(question, options)
# Stage 2: extract the answer with a dedicated extractor
final_answer = extract_answer_from_reasoning(reasoning)
```
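A possible shape for the stage-2 extractor is a simple regex-based helper. This is an illustrative sketch only; the card does not ship an implementation of `extract_answer_from_reasoning`.

```python
import re

def extract_answer_from_reasoning(reasoning: str):
    """Return the option letter (A-D) that the reasoning commits to, or None."""
    # Prefer explicit statements such as "答案是 C" / "the answer is C".
    match = re.search(r"(?:答案[是為为]?|answer\s*is)\s*[:：]?\s*([ABCD])", reasoning, re.IGNORECASE)
    if match:
        return match.group(1).upper()
    # Otherwise fall back to the last standalone option letter mentioned.
    letters = re.findall(r"\b([ABCD])\b", reasoning)
    return letters[-1].upper() if letters else None

print(extract_answer_from_reasoning("綜合各方觀點,答案是C。"))  # C
```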
## Training Details
* **Training method**: Group Relative Policy Optimization (GRPO)
* **Training time**: 84 hours (RTX 4090)
* **Data size**: 10,500 examples (35% of the full dataset)
* **Batch size**: 16 (gradient accumulation = 2, effective batch size = 32)
* **Learning rate**: 3e-5
* **Epochs**: 2
## Performance Metrics
| Metric | Value |
|------|------|
| Neutrality score | 0.82 |
| Reasoning consistency | 88% |
| Private accuracy | ~0.45 |
| Public/Private gap | 0.10 |
| Average reward score | 0.75 |
## Neutrality Improvement Example
**Before (standard answer)**:
```
The answer is definitely C! All the other options are wrong.
```
**After (neutral answer)**:
```
Analysing the question from different angles:
- Option A focuses on aspect X and has some merit
- Option B emphasises dimension Y and is also worth considering
- Option C may be more balanced in the current context
- Option D offers yet another perspective
Weighing these viewpoints, option C is probably the more reasonable choice,
but this does not deny the value of the other options in specific situations.
```
## Notes
1. **Output format**: the GRPO model's output format can be unstable; using the provided answer-extraction tool is recommended
2. **Reasoning length**: the model tends to produce long reasoning, so set max_new_tokens appropriately
3. **Language consistency**: although the training data was in English, inference works better with Chinese prompts
## Citation
If you use this model, please cite:
```bibtex
@misc{tsai2025chinese_grpo,
title={Chinese LLM with GRPO-based Neutrality Optimization},
author={Ray Tsai},
year={2025},
publisher={Hugging Face},
journal={Hugging Face Model Hub},
howpublished={\url{https://huggingface.co/RayTsai/Kaggle_3_GRPO_Neutrality}}
}
```
## Authors
* Ray Tsai (110651053)
* NYCU Deep Learning Course, 2025
## License
This model follows the original Qwen2.5 license terms.
## Related Links
* [KAGGLE #1 - SFT model](https://huggingface.co/RayTsai/chinese-llm-mcq-qwen2-5-14b)
* [KAGGLE #2 - Reasoning-chain model](https://huggingface.co/RayTsai/Kaggle_2)
* [Technical report](https://github.com/RayTsai/chinese-llm-neutrality)
* [NYCU Deep Learning Course](https://www.nycu.edu.tw)
|
Seyfelislem/whisper-medium-arabic
|
Seyfelislem
| 2025-06-22T14:07:18Z | 1,710 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-04T13:37:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
base_model: openai/whisper-medium
model-index:
- name: whisper-medium-arabic-streaming
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-arabic-streaming
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2194
- Wer: 18.2888
## Model description
More information needed
## Intended uses & limitations
More information needed
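As a fine-tuned Whisper checkpoint, the model can be tried with the standard `transformers` ASR pipeline. The snippet below is a minimal sketch (it is not part of the auto-generated card, and the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Seyfelislem/whisper-medium-arabic")
result = asr("arabic_sample.wav", return_timestamps=True)  # placeholder path to a local audio file
print(result["text"])
```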
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 800
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3327 | 1.0 | 800 | 0.2194 | 18.2888 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0
- Datasets 2.10.2.dev0
- Tokenizers 0.13.2
|
dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_humming_camel
|
dhanraj2006
| 2025-06-22T14:06:03Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am scampering humming camel",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T13:59:46Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_humming_camel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scampering humming camel
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_humming_camel
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_humming_camel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
diahretnou/insectsmodel
|
diahretnou
| 2025-06-22T14:04:02Z | 0 | 0 | null |
[
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T13:41:32Z |
---
license: apache-2.0
---
|
18-Videos-De-Marco-Antelo/Enlace.de.Video.mira.ver.anabel.angus.y.marco.antelo.video.viral.filtrado.videos
|
18-Videos-De-Marco-Antelo
| 2025-06-22T14:01:33Z | 0 | 1 | null |
[
"region:us"
] | null | 2025-06-22T14:01:04Z |
<a href="https://tinyurl.com/Videos-Pinoy?hasinamodi" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
B-ramB/unit-8-ppo-LunarLander-v2
|
B-ramB
| 2025-06-22T14:01:05Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T13:13:27Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -54.72 +/- 23.24
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': True,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'LunarLander-v2',
'total_timesteps': 500000,
'learning_rate': 0.00025,
'num_envs': 4,
'num_steps': 256,
'anneal_lr': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'B-ramB/unit-8-ppo-LunarLander-v2',
'batch_size': 1024,
'minibatch_size': 256}
```
|
New-Clip-Zara-Kaif-18-Viral-Video/FULL.VIDEO.LINK.Zara.Kaif.Viral.Video.Tutorial.Official
|
New-Clip-Zara-Kaif-18-Viral-Video
| 2025-06-22T14:00:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T14:00:19Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
mradermacher/Arch-Agent-7B-i1-GGUF
|
mradermacher
| 2025-06-22T14:00:14Z | 4 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:katanemo/Arch-Agent-7B",
"base_model:quantized:katanemo/Arch-Agent-7B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-22T07:20:10Z |
---
base_model: katanemo/Arch-Agent-7B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/katanemo/Arch-Agent-7B/blob/main/LICENSE
license_name: katanemo-research
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/katanemo/Arch-Agent-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Arch-Agent-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
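One possible route is to fetch a single quant with `huggingface_hub` and run it with `llama-cpp-python`. This is a minimal sketch assuming the Q4_K_M file from the table below; any other quant works the same way, and the prompt is a placeholder.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (file name taken from the table below).
path = hf_hub_download(
    repo_id="mradermacher/Arch-Agent-7B-i1-GGUF",
    filename="Arch-Agent-7B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a short function that parses a JSON string.", max_tokens=128)
print(out["choices"][0]["text"])
```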
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-7B-i1-GGUF/resolve/main/Arch-Agent-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Jimmi42/youtube-transcriber-subtitles
|
Jimmi42
| 2025-06-22T13:58:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T13:56:25Z |
# ⚡️ YouTube Video Transcriber with Subtitles
<div align="center">




**High-performance YouTube video transcription with perfectly timed subtitles using Apple MLX and Parakeet v2**
[🚀 Try it Now](#quick-start) • [✨ Features](#features) • [📖 Usage](#usage) • [🛠️ Installation](#installation)
</div>
## 🎯 What This Does
Transform any YouTube video segment into a **transcribed video with perfectly synchronized subtitles** in seconds! Built for Apple Silicon with cutting-edge speech recognition.
### ⚡️ Lightning Fast
- **~0.3 seconds** to transcribe 1-minute videos
- **Apple MLX optimized** for M1/M2/M3 chips
- **Real-time processing** with chunked inference
### 🎯 Pixel-Perfect Timing
- **Sentence-level timing** from Parakeet v2
- **No more early/late subtitles** - perfect sync
- **Natural speech patterns** preserved
## ✨ Features
### 🎬 **Smart Video Processing**
- **YouTube URL input** - paste any video link
- **Precise time trimming** - specify start/end times (MM:SS or HH:MM:SS)
- **Auto quality selection** - best available video/audio
### 🎤 **Advanced Speech Recognition**
- **Parakeet TDT v2 model** - NVIDIA's latest ASR
- **Conformer + RNNT architecture** - not slow transformers
- **Chunked processing** - handles long videos efficiently
### 📝 **Subtitle Magic**
- **Toggle ON/OFF** - choose subtitled or clean video
- **Accurate timing** - uses real speech timestamps
- **SRT format** - standard subtitle file creation
- **Burned-in subtitles** - embedded directly in video
### 🎨 **Beautiful Interface**
- **Gradio web UI** - clean, modern design
- **Real-time progress** - see processing status
- **Dual output** - video player + text transcript
## 🚀 Quick Start
### 1. Clone & Setup
```bash
git clone https://github.com/yourusername/youtube-transcriber-subtitles
cd youtube-transcriber-subtitles
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
### 2. Launch App
```bash
python app.py
```
### 3. Open Browser
Navigate to `http://127.0.0.1:7860`
### 4. Process Video
1. **Paste YouTube URL**
2. **Set start/end times** (e.g., "1:23" to "2:45")
3. **Toggle subtitles** ON/OFF
4. **Click "Process Video"**
5. **Download your result!**
## 📖 Usage Examples
### 🎓 **Educational Content**
```
URL: https://www.youtube.com/watch?v=dQw4w9WgXcQ
Start: 01:30
End: 03:45
Subtitles: ✅ ON
→ Perfect for lecture clips with readable subtitles
```
### 🎙️ **Podcast Highlights**
```
URL: https://www.youtube.com/watch?v=example123
Start: 15:20
End: 18:50
Subtitles: ❌ OFF
→ Clean audio clips without visual distractions
```
### 📺 **Social Media Clips**
```
URL: https://www.youtube.com/watch?v=viral456
Start: 00:10
End: 01:00
Subtitles: ✅ ON
→ Engaging clips with perfectly timed captions
```
## 🛠️ Installation
### Prerequisites
- **Python 3.8+**
- **Apple Silicon Mac** (M1/M2/M3) - for MLX acceleration
- **ffmpeg** - for video processing
- **yt-dlp** - for YouTube downloads
### Install ffmpeg (macOS)
```bash
brew install ffmpeg
```
### Install Dependencies
```bash
pip install -r requirements.txt
```
### Key Dependencies
- `parakeet-mlx` - Apple MLX speech recognition
- `gradio` - Web interface
- `yt-dlp` - YouTube downloader
- `mlx` - Apple's ML framework
## 🔧 Technical Details
### 🧠 **Model Architecture**
- **Parakeet TDT 0.6B v2** - 600M parameter model
- **Conformer encoder** - superior to transformers on Mac
- **RNNT decoder** - streaming-friendly architecture
- **MLX optimized** - native Apple Silicon acceleration
### ⚙️ **Processing Pipeline**
1. **Download** video using yt-dlp
2. **Trim** to specified time range with ffmpeg
3. **Extract** audio at 16kHz mono WAV
4. **Transcribe** with chunked inference (120s chunks, 5s overlap)
5. **Generate** SRT subtitles with real timing (see the sketch after this list)
6. **Embed** subtitles using ffmpeg (optional)
7. **Return** video + transcript
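Step 5, turning sentence-level timestamps into an SRT file, can be sketched in plain Python. The sentence tuples below are placeholders for whatever the ASR step returns.

```python
def to_srt_time(seconds: float) -> str:
    # Convert seconds to the SRT timestamp format HH:MM:SS,mmm.
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(sentences, path="subtitles.srt"):
    # sentences: iterable of (start_seconds, end_seconds, text) tuples.
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(sentences, 1):
            f.write(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text.strip()}\n\n")

write_srt([(0.0, 2.4, "Welcome to the demo."), (2.4, 5.1, "Let's transcribe a clip.")])
```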
### 📊 **Performance**
- **Speed**: ~5-10x faster than real-time
- **Memory**: Efficient chunked processing
- **Quality**: State-of-the-art accuracy
- **Compatibility**: Apple Silicon optimized
## 🎨 Interface Preview
```
┌─────────────────────────────────────────────────┐
│ ⚡️ YouTube Video Transcriber with Subtitles │
├─────────────────────────────────────────────────┤
│ YouTube URL: [https://youtube.com/watch?v=...] │
│ Start Time: [01:23] End Time: [02:45] │
│ Add Subtitles: ☑️ ON │
│ [🚀 Process Video] │
├─────────────────────────────────────────────────┤
│ 📹 Video Player │
│ 📝 Full Transcription │
└─────────────────────────────────────────────────┘
```
## 🔄 File Structure
```
youtube-transcriber-subtitles/
├── app.py # Main Gradio application
├── requirements.txt # Python dependencies
├── README.md # This awesome README
├── temp/ # Working directory (auto-created)
└── venv/ # Virtual environment
```
**Ultra-clean codebase** - only 3 essential files!
## 🚀 Advanced Usage
### Custom Chunking
```python
# Modify in app.py for different chunk sizes
result = MODEL.transcribe(
audio_file,
chunk_duration=60, # Smaller chunks for faster processing
overlap_duration=3 # Less overlap for speed
)
```
### Subtitle Styling
```python
# Add custom ffmpeg subtitle styling
subtitle_command = [
"ffmpeg", "-i", video,
"-vf", f"subtitles={srt}:force_style='FontSize=20,PrimaryColour=&Hffff00'",
output, "-y"
]
```
## 🤝 Contributing
We love contributions! Here's how to help:
1. **🍴 Fork** the repository
2. **🌟 Create** a feature branch
3. **✨ Make** your improvements
4. **🧪 Test** thoroughly
5. **📤 Submit** a pull request
### Ideas for Contributions
- 🎨 **Custom subtitle styling** options
- 🌍 **Multi-language** support
- 📱 **Mobile-friendly** interface
- 🎵 **Audio-only** processing mode
- 📊 **Batch processing** for multiple videos
## 📄 License
MIT License - feel free to use in your projects!
## 🙏 Acknowledgments
- **NVIDIA** - Parakeet speech recognition models
- **Apple** - MLX framework for efficient inference
- **Gradio** - Beautiful web interfaces made simple
- **ffmpeg** - The Swiss Army knife of multimedia
## 📞 Support
Having issues? We're here to help!
- 🐛 **Bug reports**: [Open an issue](https://github.com/yourusername/youtube-transcriber-subtitles/issues)
- 💡 **Feature requests**: [Start a discussion](https://github.com/yourusername/youtube-transcriber-subtitles/discussions)
- 📖 **Documentation**: Check this README first
- 💬 **Community**: Join our discussions
<div align="center">
**⭐ Star this repo if it helped you create amazing transcribed videos! ⭐**
Made with ❤️ for the Apple Silicon community
</div>
|
swankier/nomic-embed-code
|
swankier
| 2025-06-22T13:56:50Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen2",
"sentence-similarity",
"feature-extraction",
"dataset:nomic-ai/cornstack-python-v1",
"dataset:nomic-ai/cornstack-javascript-v1",
"dataset:nomic-ai/cornstack-java-v1",
"dataset:nomic-ai/cornstack-go-v1",
"dataset:nomic-ai/cornstack-php-v1",
"dataset:nomic-ai/cornstack-ruby-v1",
"arxiv:2412.01007",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-22T13:10:32Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
license: apache-2.0
datasets:
- nomic-ai/cornstack-python-v1
- nomic-ai/cornstack-javascript-v1
- nomic-ai/cornstack-java-v1
- nomic-ai/cornstack-go-v1
- nomic-ai/cornstack-php-v1
- nomic-ai/cornstack-ruby-v1
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
---
# Nomic Embed Code: A State-of-the-Art Code Retriever
[Blog](https://www.nomic.ai/blog/posts/introducing-state-of-the-art-nomic-embed-code) | [Technical Report](https://arxiv.org/abs/2412.01007) | [AWS SageMaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-tpqidcj54zawi) | [Atlas Embedding and Unstructured Data Analytics Platform](https://atlas.nomic.ai)
`nomic-embed-code` is a state-of-the-art code embedding model that excels at code retrieval tasks:
- **High Performance**: Outperforms Voyage Code 3 and OpenAI Embed 3 Large on CodeSearchNet
- **Multilingual Code Support**: Trained for multiple programming languages (Python, Java, Ruby, PHP, JavaScript, Go)
- **Advanced Architecture**: 7B parameter code embedding model
- **Fully Open-Source**: Model weights, training data, and [evaluation code](https://github.com/gangiswag/cornstack/) released
| Model | Python | Java | Ruby | PHP | JavaScript | Go |
|-------|--------|------|------|-----|------------|-----|
| **Nomic Embed Code** | **81.7** | **80.5** | 81.8 | **72.3** | 77.1 | **93.8** |
| Voyage Code 3 | 80.8 | **80.5** | **84.6** | 71.7 | **79.2** | 93.2 |
| OpenAI Embed 3 Large | 70.8 | 72.9 | 75.3 | 59.6 | 68.1 | 87.6 |
| Nomic CodeRankEmbed-137M | 78.4 | 76.9 | 79.3 | 68.8 | 71.4 | 92.7 |
| CodeSage Large v2 (1B) | 74.2 | 72.3 | 76.7 | 65.2 | 72.5 | 84.6 |
| CodeSage Large (1B) | 70.8 | 70.2 | 71.9 | 61.3 | 69.5 | 83.7 |
| Qodo Embed 1 7B | 59.9 | 61.6 | 68.4 | 48.5 | 57.0 | 81.4 |
## Model Architecture
- **Total Parameters**: 7B
- **Training Approach**: Trained on the CoRNStack dataset with dual-consistency filtering and progressive hard negative mining
- **Supported Languages**: Python, Java, Ruby, PHP, JavaScript, and Go
## Usage Guide
### Installation
You can install the necessary dependencies with:
```bash
pip install transformers sentence-transformers torch
```
### Transformers
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nomic-ai/nomic-embed-code")
model = AutoModel.from_pretrained("nomic-ai/nomic-embed-code")
def last_token_pooling(hidden_states, attention_mask):
sequence_lengths = attention_mask.sum(-1) - 1
return hidden_states[torch.arange(hidden_states.shape[0]), sequence_lengths]
queries = ['Represent this query for searching relevant code: Calculate the n-th factorial']
codes = ['def fact(n):\n if n < 0:\n raise ValueError\n return 1 if n == 0 else n * fact(n - 1)']
code_snippets = queries + codes
encoded_input = tokenizer(code_snippets, padding=True, truncation=True, return_tensors='pt')
model.eval()
with torch.no_grad():
model_output = model(**encoded_input)[0]
embeddings = last_token_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)
similarity = F.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity)
```
### SentenceTransformers
```python
from sentence_transformers import SentenceTransformer
queries = ['Calculate the n-th factorial']
code_snippets = ['def fact(n):\n if n < 0:\n raise ValueError\n return 1 if n == 0 else n * fact(n - 1)']
model = SentenceTransformer("nomic-ai/nomic-embed-code")
query_emb = model.encode(queries, prompt_name="query")
code_emb = model.encode(code_snippets)
similarity = model.similarity(query_emb[0], code_emb[0])
print(similarity)
```
### CoRNStack Dataset Curation
Starting with the deduplicated Stackv2, we create text-code pairs from function docstrings and respective code. We filtered out low-quality pairs where the docstring wasn't English, too short, or that contained URLs, HTML tags, or invalid characters. We additionally kept docstrings with text lengths of 256 tokens or longer to help the model learn long-range dependencies.

After the initial filtering, we used dual-consistency filtering to remove potentially noisy examples. We embed each docstring and code pair and compute the similarity between each docstring and every code example. We remove pairs from the dataset if the corresponding code example is not found in the top-2 most similar examples for a given docstring.
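A sketch of that dual-consistency check in PyTorch is shown below; the random tensors stand in for embeddings produced by an existing embedding model, and the real pipeline runs over the full corpus rather than a toy batch.

```python
import torch
import torch.nn.functional as F

def dual_consistency_filter(doc_emb: torch.Tensor, code_emb: torch.Tensor, top_k: int = 2) -> torch.Tensor:
    """Keep pair i only if code i is among the top_k codes most similar to docstring i."""
    doc_emb = F.normalize(doc_emb, dim=1)
    code_emb = F.normalize(code_emb, dim=1)
    sims = doc_emb @ code_emb.T                        # cosine similarity of every docstring to every code
    top = sims.topk(top_k, dim=1).indices              # indices of the top_k most similar codes per docstring
    keep = (top == torch.arange(len(doc_emb)).unsqueeze(1)).any(dim=1)
    return keep                                        # boolean mask over the pairs

keep = dual_consistency_filter(torch.randn(100, 768), torch.randn(100, 768))
print(int(keep.sum()), "of 100 pairs kept")
```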
During training, we employ a novel curriculum-based hard negative mining strategy to ensure the model learns from challenging examples. We use a softmax-based sampling strategy to progressively sample hard negatives with increasing difficulty over time.
## Join the Nomic Community
- Nomic Embed Ecosystem: [https://www.nomic.ai/embed](https://www.nomic.ai/embed)
- Website: [https://nomic.ai](https://nomic.ai)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
# Citation
If you find the model, dataset, or training code useful, please cite our work:
```bibtex
@misc{suresh2025cornstackhighqualitycontrastivedata,
title={CoRNStack: High-Quality Contrastive Data for Better Code Retrieval and Reranking},
author={Tarun Suresh and Revanth Gangi Reddy and Yifei Xu and Zach Nussbaum and Andriy Mulyar and Brandon Duderstadt and Heng Ji},
year={2025},
eprint={2412.01007},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.01007},
}
```
|
18-Videos-De-Anabel-Angus/Filtrado.video.de.anabel.angus.y.marco.antelo.full.video
|
18-Videos-De-Anabel-Angus
| 2025-06-22T13:56:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T13:56:24Z |
<a href="https://tinyurl.com/Videos-Pinoy?hasinamodi" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
tokkilab/llama-3-8b-shinchan-chatbot
|
tokkilab
| 2025-06-22T13:54:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T13:54:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ReallyNotMe/model_new
|
ReallyNotMe
| 2025-06-22T13:54:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4",
"base_model:finetune:RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T13:53:51Z |
---
base_model: RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ReallyNotMe
- **License:** apache-2.0
- **Finetuned from model :** RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zecaihong/1c7b0536-bdcd-400a-91c4-a202f94ae1c7.4
|
zecaihong
| 2025-06-22T13:52:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/5650c0d2-faf2-44a4-938b-73267c51a4d1",
"base_model:adapter:samoline/5650c0d2-faf2-44a4-938b-73267c51a4d1",
"region:us"
] | null | 2025-06-22T11:01:12Z |
---
library_name: peft
base_model: samoline/5650c0d2-faf2-44a4-938b-73267c51a4d1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c7b0536-bdcd-400a-91c4-a202f94ae1c7.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: samoline/5650c0d2-faf2-44a4-938b-73267c51a4d1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f58d86dc6b903812_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/1c7b0536-bdcd-400a-91c4-a202f94ae1c7.4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/f58d86dc6b903812_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c7b0536-bdcd-400a-91c4-a202f94ae1c7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c7b0536-bdcd-400a-91c4-a202f94ae1c7
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 1c7b0536-bdcd-400a-91c4-a202f94ae1c7.4
This model is a fine-tuned version of [samoline/5650c0d2-faf2-44a4-938b-73267c51a4d1](https://huggingface.co/samoline/5650c0d2-faf2-44a4-938b-73267c51a4d1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 96
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0034 | 1 | 0.6998 |
| 0.7158 | 0.1675 | 50 | 0.6985 |
| 0.6943 | 0.3350 | 100 | 0.6980 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
s3g4tyh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-waddling_polished_mouse
|
s3g4tyh
| 2025-06-22T13:52:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am waddling polished mouse",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T20:10:11Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-waddling_polished_mouse
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am waddling polished mouse
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-waddling_polished_mouse
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="s3g4tyh/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-waddling_polished_mouse", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MedinaArmando/CFR-FineTuned_IV
|
MedinaArmando
| 2025-06-22T13:50:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"fine-tuning",
"llama-3.2",
"qlora",
"cfr",
"legal",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T13:47:15Z |
---
license: mit
language: en
base_model: meta-llama/Meta-Llama-3.2-1B
tags:
- fine-tuning
- llama-3.2
- qlora
- cfr
- legal
- text-generation
library_name: transformers
pipeline_tag: text-generation
---
# Up and running in a Hugging Face Space using 2 virtual CPUs and 16 GB of RAM! [CFR-FineTuned_III](https://huggingface.co/spaces/one1cat/CFR-FineTuned_III)
# Llama-3.2-1B Fine-tuned on the Code of Federal Regulations (CFR)
This is a fine-tuned version of `meta-llama/Meta-Llama-3.2-1B` trained on all sections from the United States Code of Federal Regulations (CFR). The goal: provide a specialized assistant for navigating and answering questions about U.S. federal regulations.
## Model Description
- **Base Model:** Llama-3.2-1B
- **Method:** QLoRA, 4-bit quantization
- **Dataset:** Custom, parsed from CFR XML (Titles 1-50)
- **Epochs:** 3
- **Tokens Seen:** ~243M
- **Final Training Loss:** **1.267**
- **Mean Token Accuracy:** **0.739**
- **Training Time:** ~5h 17m
> **Hardware/Environment:**
> Training was conducted on [Modal](https://modal.com/) using a single NVIDIA H200 GPU.
> Training speed: ~1.10 steps/sec, 35 samples/sec.
> **Note:** This loss is typical for a Llama-3 1B model on legal/complex text. For comparison: random output would yield >2.0; perfect memorization of a small dataset would yield <1.0. This is in the “actually learned something useful” range for this setup.
## Intended Uses & Limitations
**Intended Uses**
- Regulatory Q&A
- Summarization of CFR text
- Text generation related to U.S. federal regulations
**Limitations**
- **NOT a substitute for legal advice.** Output may be incorrect or outdated (data as of 2024-06-25).
- **Can hallucinate**—don’t trust answers without checking against the source.
- **Validation/test loss is not reported here** (evaluate on your own task/data before using in production).
## How to Use
You can use this model directly with the `transformers` library.
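No official snippet is included in this card, so the example below is a minimal sketch assuming a standard `transformers` causal-LM checkpoint; the prompt and generation settings are illustrative assumptions only.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MedinaArmando/CFR-FineTuned_IV"  # repo id of this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only -- always check the answer against the CFR source text
prompt = "Summarize the purpose of 21 CFR Part 11 in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```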
|
Official-othoi-19-Viral-Videos/FULL.VIDEO.LINK.othoi.Viral.Video.Tutorial.Official
|
Official-othoi-19-Viral-Videos
| 2025-06-22T13:48:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T13:48:28Z |
|
mradermacher/WoonaV1.2-9b-GGUF
|
mradermacher
| 2025-06-22T13:43:02Z | 452 | 2 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"sft",
"pony",
"MyLittlePony",
"Russian",
"Lora",
"ru",
"base_model:SlerpE/WoonaV1.2-9b",
"base_model:quantized:SlerpE/WoonaV1.2-9b",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-28T12:06:33Z |
---
base_model: SlerpE/WoonaV1.2-9b
language:
- ru
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- unsloth
- sft
- pony
- MyLittlePony
- Russian
- Lora
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SlerpE/WoonaV1.2-9b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WoonaV1.2-9b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
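As a concrete illustration, the sketch below downloads one of the quants listed in the table below and runs it with `llama-cpp-python`; the choice of Q4_K_M, the context size, and the prompt are assumptions, not recommendations from the quantizer.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumed quant choice: Q4_K_M from the table below
gguf_path = hf_hub_download(
    repo_id="mradermacher/WoonaV1.2-9b-GGUF",
    filename="WoonaV1.2-9b.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
# Russian prompt, since the model is tuned for Russian
out = llm("Привет! Расскажи немного о себе.", max_tokens=128)
print(out["choices"][0]["text"])
```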
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.IQ3_XS.gguf) | IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.IQ3_S.gguf) | IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.IQ3_M.gguf) | IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/WoonaV1.2-9b-GGUF/resolve/main/WoonaV1.2-9b.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zecaihong/3ccf0f85-2461-431d-b078-3f55dac32747.4
|
zecaihong
| 2025-06-22T13:41:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M-Instruct",
"base_model:adapter:unsloth/SmolLM-135M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T10:58:05Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3ccf0f85-2461-431d-b078-3f55dac32747.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ed8e0f2bfa29f9f2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/3ccf0f85-2461-431d-b078-3f55dac32747.4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/ed8e0f2bfa29f9f2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3ccf0f85-2461-431d-b078-3f55dac32747
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3ccf0f85-2461-431d-b078-3f55dac32747
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 3ccf0f85-2461-431d-b078-3f55dac32747.4
This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 96
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0035 | 1 | 2.6861 |
| 2.6182 | 0.1735 | 50 | 2.5951 |
| 2.2551 | 0.3469 | 100 | 2.3094 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
privdev85/SmolLM2-360M-GRPO-test
|
privdev85
| 2025-06-22T13:39:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"dataset:MovieLenseGRPOmb",
"arxiv:2402.03300",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T13:32:41Z |
---
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
datasets: MovieLenseGRPOmb
library_name: transformers
model_name: SmolLM2-360M-GRPO-test
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for SmolLM2-360M-GRPO-test
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) on the [MovieLenseGRPOmb](https://huggingface.co/datasets/MovieLenseGRPOmb) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="privdev85/SmolLM2-360M-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bhaveshparmaronline/bozon
|
bhaveshparmaronline
| 2025-06-22T13:38:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T10:28:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: bozon
---
# Bozon
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `bozon` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "bozon",
"lora_weights": "https://huggingface.co/bhaveshparmaronline/bozon/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('bhaveshparmaronline/bozon', weight_name='lora.safetensors')
image = pipeline('bozon').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/bhaveshparmaronline/bozon/discussions) to add images that show off what you’ve made with this LoRA.
|
minhxle/truesight-ft-job-e8f924dc-0501-4358-b782-d92bab45206f
|
minhxle
| 2025-06-22T13:38:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T13:38:05Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
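For loading the checkpoint, a minimal sketch with Unsloth's `FastLanguageModel` is shown below; the sequence length, 4-bit flag, and prompt are assumptions rather than documented settings.
```python
from unsloth import FastLanguageModel

# Assumed runtime settings -- adjust max_seq_length / load_in_4bit to your hardware
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="minhxle/truesight-ft-job-e8f924dc-0501-4358-b782-d92bab45206f",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode

messages = [{"role": "user", "content": "Summarize what you were fine-tuned for in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```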
|
fy4536/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-freckled_bold_falcon
|
fy4536
| 2025-06-22T13:37:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am freckled bold falcon",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T11:00:35Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-freckled_bold_falcon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am freckled bold falcon
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-freckled_bold_falcon
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fy4536/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-freckled_bold_falcon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Vandita/RADMADmobilebert1
|
Vandita
| 2025-06-22T13:33:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mobilebert",
"text-classification",
"generated_from_trainer",
"base_model:google/mobilebert-uncased",
"base_model:finetune:google/mobilebert-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-22T12:46:16Z |
---
library_name: transformers
license: apache-2.0
base_model: google/mobilebert-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: RADMADmobilebert1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RADMADmobilebert1
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3857
- Accuracy: 0.8606
- Precision: 0.8100
- Recall: 0.8374
- F1 Score: 0.8235
- Mcc: 0.7086
- Roc Auc: 0.9362
## Model description
More information needed
## Intended uses & limitations
More information needed
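In the absence of documented usage, a minimal sketch assuming a standard `transformers` text-classification setup is shown below; the label names and their meanings are not documented in this card.
```python
from transformers import pipeline

# Assumed usage: sequence classification; interpret labels with care, as they are undocumented
clf = pipeline("text-classification", model="Vandita/RADMADmobilebert1")
print(clf("Example input sentence to score."))
```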
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score | Mcc | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|:------:|:-------:|
| 125018.0136 | 1.0 | 735 | 17.8540 | 0.3921 | 0.3889 | 0.9904 | 0.5585 | 0.0132 | 0.4665 |
| 1.7428 | 2.0 | 1470 | 0.3368 | 0.8371 | 0.8102 | 0.7580 | 0.7832 | 0.6540 | 0.9155 |
| 0.337 | 3.0 | 2205 | 0.3123 | 0.8501 | 0.8469 | 0.7492 | 0.7951 | 0.6807 | 0.9283 |
| 0.2783 | 4.0 | 2940 | 0.3256 | 0.8514 | 0.8096 | 0.8071 | 0.8083 | 0.6870 | 0.9323 |
| 0.2465 | 5.0 | 3675 | 0.3156 | 0.8599 | 0.8278 | 0.8071 | 0.8173 | 0.7039 | 0.9385 |
| 0.21 | 6.0 | 4410 | 0.3318 | 0.8633 | 0.8264 | 0.8203 | 0.8233 | 0.7119 | 0.9404 |
| 0.1831 | 7.0 | 5145 | 0.3588 | 0.8616 | 0.8061 | 0.8474 | 0.8262 | 0.7120 | 0.9387 |
| 0.1564 | 8.0 | 5880 | 0.3857 | 0.8606 | 0.8100 | 0.8374 | 0.8235 | 0.7086 | 0.9362 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
JeloH/fp_qwen-textgen-modelV_Mjj2_SRC_Ass
|
JeloH
| 2025-06-22T13:28:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T13:26:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
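Since no snippet is provided, the example below is a minimal sketch assuming standard `transformers` chat-style causal-LM usage; the prompt and generation settings are illustrative only.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "JeloH/fp_qwen-textgen-modelV_Mjj2_SRC_Ass"  # repo id of this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a short haiku about source code."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```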
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zecaihong/31552115-f600-46af-8a60-9a370fc1e042.4
|
zecaihong
| 2025-06-22T13:27:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T10:41:28Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 31552115-f600-46af-8a60-9a370fc1e042.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1e25796f1df7b5c1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/31552115-f600-46af-8a60-9a370fc1e042.4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/1e25796f1df7b5c1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31552115-f600-46af-8a60-9a370fc1e042
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31552115-f600-46af-8a60-9a370fc1e042
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# 31552115-f600-46af-8a60-9a370fc1e042.4
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 96
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.3295 |
| 2.2264 | 0.0295 | 50 | 2.2235 |
| 1.9607 | 0.0591 | 100 | 1.8840 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
clinno/Index-TTS
|
clinno
| 2025-06-22T13:24:19Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T12:50:47Z |
---
license: apache-2.0
---
|
Official-mezzo-fun-18-Viral-video-mms/mezzo-fun-viral-video-Link-viral-On-Social-Media
|
Official-mezzo-fun-18-Viral-video-mms
| 2025-06-22T13:22:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T13:20:44Z |
[](https://t.co/IpLsLbijZ9)
|
VIDEOS-mezzo-fun-viral-video-link/VIRAL-Mezzo-Fun-viral-videos-original-Link-On-Social-Media-X
|
VIDEOS-mezzo-fun-viral-video-link
| 2025-06-22T13:19:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T13:18:11Z |
[](https://t.co/IpLsLbijZ9)
|
zecaihong/f987a339-ca24-405b-a77b-da7b169e0012.4
|
zecaihong
| 2025-06-22T13:15:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T10:35:39Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f987a339-ca24-405b-a77b-da7b169e0012.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7ef3b4bf0f122c59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_prompt: ''
debug: null
deepspeed: deepspeed_configs/zero2.json
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: zecaihong/f987a339-ca24-405b-a77b-da7b169e0012.4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
metric_for_best_model: eval_loss
micro_batch_size: 12
mlflow_experiment_name: /data/datasets/7ef3b4bf0f122c59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f987a339-ca24-405b-a77b-da7b169e0012
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f987a339-ca24-405b-a77b-da7b169e0012
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# f987a339-ca24-405b-a77b-da7b169e0012.4
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 96
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0014 | 1 | 1.8528 |
| 1.6183 | 0.0684 | 50 | 1.5296 |
| 1.5032 | 0.1369 | 100 | 1.4243 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
BootesVoid/cmc7lpi5a0996bfifb1o3fov0_cmc7muqd909e7bfifz3klwd1f
|
BootesVoid
| 2025-06-22T13:14:13Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T13:14:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CHLOELAND
---
# Cmc7Lpi5A0996Bfifb1O3Fov0_Cmc7Muqd909E7Bfifz3Klwd1F
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CHLOELAND` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CHLOELAND",
"lora_weights": "https://huggingface.co/BootesVoid/cmc7lpi5a0996bfifb1o3fov0_cmc7muqd909e7bfifz3klwd1f/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc7lpi5a0996bfifb1o3fov0_cmc7muqd909e7bfifz3klwd1f', weight_name='lora.safetensors')
image = pipeline('CHLOELAND').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc7lpi5a0996bfifb1o3fov0_cmc7muqd909e7bfifz3klwd1f/discussions) to add images that show off what you’ve made with this LoRA.
|
jhutifackyou/sex-mezzo-fun-viral-video-Link-viral-On-Social-media
|
jhutifackyou
| 2025-06-22T13:11:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T12:51:56Z |
Mezzo fun viral video Link viral On Social Media
new hot viral 18+ sex video
[](https://t.co/IpLsLbijZ9)
Original Video Video oficial twitter
Leaked Video Original Video Viral Video Leaked on X Twitter..
Leaked Viral link 2025 Leaked Video
Viral Leaked Viral link Viral Video Leaked on X Twitter
latest Leaked Video Viral On Social Media
Kompoz Me Leaked Com
Scoop Big Celebrity
Latest News, Photos, Videos on Leaked Video
Original Video Video took the internet by storm and amazed viewers on various social media platforms. Andorra, a young and talented digital creator, recently became famous thanks to this interesting Video.
Leaked Video Viral Video Original Video Link On Social Media Telegram X Trending Tiktok (18+)
Leaked Video Viral Video Original Video Link On Social Media X Trending Tiktok (18+)
Leaked Video Original Video Viral Video Leaked on X Twitter
|
Detomo/cl-nagoya-sup-simcse-ja-nss-v1_0_8_2
|
Detomo
| 2025-06-22T13:08:05Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:354867",
"loss:CategoricalContrastiveLoss",
"arxiv:1908.10084",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-22T13:07:46Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:354867
- loss:CategoricalContrastiveLoss
widget:
- source_sentence: 科目:コンクリート。名称:免震上部コンクリート打設手間。
sentences:
- 科目:コンクリート。名称:免震BPL下部充填コンクリート打設手間。
- 科目:タイル。名称:外壁ガラスモザイクタイル張り。
- 科目:タイル。名称:段鼻タイル。
- source_sentence: 科目:コンクリート。名称:均しコンクリート。
sentences:
- 科目:タイル。名称:段鼻磁器質タイル。
- 科目:タイル。名称:海街デッキ床タイル。
- 科目:コンクリート。名称:免震BPL下部充填コンクリート。
- source_sentence: 科目:コンクリート。名称:機械式移動座席基礎コンクリート。
sentences:
- 科目:コンクリート。名称:コンクリートポンプ圧送基本料金。
- 科目:タイル。名称:地流しライニング壁タイル。
- 科目:コンクリート。名称:構造体強度補正。
- source_sentence: 科目:コンクリート。名称:免震下部鉄筋コンクリート。
sentences:
- 科目:コンクリート。名称:構造体強度補正。
- 科目:コンクリート。名称:免震BPL下部充填コンクリート。
- 科目:タイル。名称:外壁ガラスモザイクタイル張り。
- source_sentence: 科目:タイル。名称:汚垂タイル。
sentences:
- 科目:コンクリート。名称:普通コンクリート。
- 科目:コンクリート。名称:構造体強度補正。
- 科目:タイル。名称:手洗い水周りタイル(A)。
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v1_0_8_2")
# Run inference
sentences = [
'科目:タイル。名称:汚垂タイル。',
'科目:タイル。名称:手洗い水周りタイル(A)。',
'科目:コンクリート。名称:普通コンクリート。',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 354,867 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 11 tokens</li><li>mean: 13.78 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 14.8 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>0: ~74.00%</li><li>1: ~2.60%</li><li>2: ~23.40%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------|:-------------------------------------------------|:---------------|
| <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>科目:コンクリート。名称:免震BPL下部充填コンクリート打設手間。</code> | <code>0</code> |
| <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>科目:コンクリート。名称:免震下部コンクリート打設手間。</code> | <code>0</code> |
| <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>科目:コンクリート。名称:免震下部(外周基礎梁)コンクリート打設手間。</code> | <code>0</code> |
* Loss: <code>sentence_transformer_lib.categorical_constrastive_loss.CategoricalContrastiveLoss</code>
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 4
- `warmup_ratio`: 0.2
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0360 | 50 | 0.0463 |
| 0.0721 | 100 | 0.0367 |
| 0.1081 | 150 | 0.0391 |
| 0.1442 | 200 | 0.0382 |
| 0.1802 | 250 | 0.0396 |
| 0.2163 | 300 | 0.0392 |
| 0.2523 | 350 | 0.0335 |
| 0.2884 | 400 | 0.0337 |
| 0.3244 | 450 | 0.0346 |
| 0.3605 | 500 | 0.0268 |
| 0.3965 | 550 | 0.0271 |
| 0.4326 | 600 | 0.0267 |
| 0.4686 | 650 | 0.029 |
| 0.5047 | 700 | 0.0269 |
| 0.5407 | 750 | 0.0221 |
| 0.5768 | 800 | 0.0252 |
| 0.6128 | 850 | 0.0229 |
| 0.6489 | 900 | 0.0235 |
| 0.6849 | 950 | 0.02 |
| 0.7210 | 1000 | 0.0198 |
| 0.7570 | 1050 | 0.0218 |
| 0.7931 | 1100 | 0.0219 |
| 0.8291 | 1150 | 0.0164 |
| 0.8652 | 1200 | 0.0165 |
| 0.9012 | 1250 | 0.0162 |
| 0.9373 | 1300 | 0.016 |
| 0.9733 | 1350 | 0.015 |
| 1.0094 | 1400 | 0.0143 |
| 1.0454 | 1450 | 0.0145 |
| 1.0815 | 1500 | 0.0136 |
| 1.1175 | 1550 | 0.0139 |
| 1.1536 | 1600 | 0.0122 |
| 1.1896 | 1650 | 0.0113 |
| 1.2257 | 1700 | 0.0125 |
| 1.2617 | 1750 | 0.0112 |
| 1.2978 | 1800 | 0.0111 |
| 1.3338 | 1850 | 0.0099 |
| 1.3699 | 1900 | 0.0103 |
| 1.4059 | 1950 | 0.0089 |
| 1.4420 | 2000 | 0.0087 |
| 1.4780 | 2050 | 0.0084 |
| 1.5141 | 2100 | 0.0082 |
| 1.5501 | 2150 | 0.0096 |
| 1.5862 | 2200 | 0.0082 |
| 1.6222 | 2250 | 0.0086 |
| 1.6583 | 2300 | 0.0083 |
| 1.6943 | 2350 | 0.0087 |
| 1.7304 | 2400 | 0.0071 |
| 1.7664 | 2450 | 0.0073 |
| 1.8025 | 2500 | 0.0092 |
| 1.8385 | 2550 | 0.0087 |
| 1.8745 | 2600 | 0.0077 |
| 1.9106 | 2650 | 0.0078 |
| 1.9466 | 2700 | 0.0059 |
| 1.9827 | 2750 | 0.0065 |
| 2.0187 | 2800 | 0.0067 |
| 2.0548 | 2850 | 0.0047 |
| 2.0908 | 2900 | 0.0055 |
| 2.1269 | 2950 | 0.0056 |
| 2.1629 | 3000 | 0.0051 |
| 2.1990 | 3050 | 0.0047 |
| 2.2350 | 3100 | 0.0054 |
| 2.2711 | 3150 | 0.0052 |
| 2.3071 | 3200 | 0.0051 |
| 2.3432 | 3250 | 0.0049 |
| 2.3792 | 3300 | 0.0046 |
| 2.4153 | 3350 | 0.0056 |
| 2.4513 | 3400 | 0.005 |
| 2.4874 | 3450 | 0.0045 |
| 2.5234 | 3500 | 0.0052 |
| 2.5595 | 3550 | 0.0056 |
| 2.5955 | 3600 | 0.005 |
| 2.6316 | 3650 | 0.005 |
| 2.6676 | 3700 | 0.0045 |
| 2.7037 | 3750 | 0.004 |
| 2.7397 | 3800 | 0.0055 |
| 2.7758 | 3850 | 0.0046 |
| 2.8118 | 3900 | 0.0039 |
| 2.8479 | 3950 | 0.0045 |
| 2.8839 | 4000 | 0.0048 |
| 2.9200 | 4050 | 0.0045 |
| 2.9560 | 4100 | 0.0053 |
| 2.9921 | 4150 | 0.0036 |
| 3.0281 | 4200 | 0.0042 |
| 3.0642 | 4250 | 0.0041 |
| 3.1002 | 4300 | 0.0034 |
| 3.1363 | 4350 | 0.0038 |
| 3.1723 | 4400 | 0.0029 |
| 3.2084 | 4450 | 0.0042 |
| 3.2444 | 4500 | 0.0035 |
| 3.2805 | 4550 | 0.0033 |
| 3.3165 | 4600 | 0.0031 |
| 3.3526 | 4650 | 0.0037 |
| 3.3886 | 4700 | 0.0032 |
| 3.4247 | 4750 | 0.0038 |
| 3.4607 | 4800 | 0.004 |
| 3.4968 | 4850 | 0.0042 |
| 3.5328 | 4900 | 0.003 |
| 3.5689 | 4950 | 0.004 |
| 3.6049 | 5000 | 0.0035 |
| 3.6410 | 5050 | 0.0028 |
| 3.6770 | 5100 | 0.003 |
| 3.7130 | 5150 | 0.0032 |
| 3.7491 | 5200 | 0.0029 |
| 3.7851 | 5250 | 0.0033 |
| 3.8212 | 5300 | 0.0036 |
| 3.8572 | 5350 | 0.0034 |
| 3.8933 | 5400 | 0.0038 |
| 3.9293 | 5450 | 0.003 |
| 3.9654 | 5500 | 0.0034 |
</details>
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
nufikq/php-code-completion-1
|
nufikq
| 2025-06-22T13:06:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T13:05:36Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nufikq
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
moatamed8/test_product-model
|
moatamed8
| 2025-06-22T13:03:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"region:us"
] | null | 2025-06-22T12:53:57Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
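As a placeholder, here is a minimal sketch assuming this repository holds a PEFT (LoRA) adapter for the 4-bit base model listed above; loading the pre-quantized base requires `bitsandbytes`, and the prompt is illustrative only.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "unsloth/mistral-7b-instruct-v0.3-bnb-4bit"  # base model from this card
adapter_id = "moatamed8/test_product-model"            # this repository (assumed to be an adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # pre-quantized 4-bit weights
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Describe this product in one sentence. [/INST]"  # illustrative Mistral-instruct prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```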
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit-AWQ
|
mlx-community
| 2025-06-22T13:00:00Z | 1,980 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"region:us"
] |
text-generation
| 2025-06-02T23:06:07Z |
---
license: mit
library_name: mlx
tags:
- mlx
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
pipeline_tag: text-generation
---
# mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit-AWQ
This model [mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit-AWQ](https://huggingface.co/mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit-AWQ) was
converted to MLX format from [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
using mlx-lm version **0.25.2**.
AWQ Parameters: --bits 4 --group-size 64 --embed-bits 4 --embed-group-size 32 --num-samples 256 --sequence-length 1024 --n-grid 50
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit-AWQ")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
rrelaxx/saiga_gemma3_12b-Q4_K_S-GGUF
|
rrelaxx
| 2025-06-22T12:58:39Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"ru",
"dataset:IlyaGusev/saiga_scored",
"dataset:IlyaGusev/saiga_preferences",
"base_model:IlyaGusev/saiga_gemma3_12b",
"base_model:quantized:IlyaGusev/saiga_gemma3_12b",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-22T12:58:15Z |
---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
- IlyaGusev/saiga_preferences
license: gemma
base_model: IlyaGusev/saiga_gemma3_12b
tags:
- llama-cpp
- gguf-my-repo
---
# rrelaxx/saiga_gemma3_12b-Q4_K_S-GGUF
This model was converted to GGUF format from [`IlyaGusev/saiga_gemma3_12b`](https://huggingface.co/IlyaGusev/saiga_gemma3_12b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IlyaGusev/saiga_gemma3_12b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rrelaxx/saiga_gemma3_12b-Q4_K_S-GGUF --hf-file saiga_gemma3_12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rrelaxx/saiga_gemma3_12b-Q4_K_S-GGUF --hf-file saiga_gemma3_12b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rrelaxx/saiga_gemma3_12b-Q4_K_S-GGUF --hf-file saiga_gemma3_12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rrelaxx/saiga_gemma3_12b-Q4_K_S-GGUF --hf-file saiga_gemma3_12b-q4_k_s.gguf -c 2048
```
|
gouthxm07/fertilizer2025-llama
|
gouthxm07
| 2025-06-22T12:58:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T11:43:29Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** gouthxm07
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VincentGOURBIN/fuel-price-predictor
|
VincentGOURBIN
| 2025-06-22T12:54:53Z | 0 | 0 | null |
[
"safetensors",
"fuel_price_predictor",
"fuel",
"price",
"prediction",
"france",
"brent",
"pytorch",
"dataset:VincentGOURBIN/FuelInFranceData",
"license:mit",
"model-index",
"region:us"
] | null | 2025-06-22T12:54:47Z |
---
license: mit
tags:
- fuel
- price
- prediction
- france
- brent
- pytorch
datasets:
- VincentGOURBIN/FuelInFranceData
metrics:
- mean_absolute_error
- r2
model-index:
- name: fuel-price-predictor
results:
- task:
type: regression
name: Fuel Price Prediction
dataset:
name: FuelInFranceData
type: VincentGOURBIN/FuelInFranceData
metrics:
- type: mean_absolute_error
value: 0.0233
name: Mean Absolute Error
- type: r2_score
value: 0.9901
name: R² Score
- type: root_mean_squared_error
value: 0.0342
name: Root Mean Squared Error
---
# 🛣️ Fuel Price Predictor for France
This model predicts fuel prices in France based on the Brent crude price and other geo-economic factors.
## 📊 Performance
- **R² Score**: 0.9901
- **Mean Absolute Error**: 0.0233 €/L
- **RMSE**: 0.0342 €/L
## 🏗️ Architecture
Deep neural network with:
- Hidden layers: [512, 256, 128]
- Dropout: 0.1
- Activation: ReLU
- Normalization: BatchNorm1d
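The `FuelPricePredictor` class itself is not included in this card; the following is only a minimal sketch of what a module matching the layers above might look like (the input dimension, layer ordering, and single-output head are assumptions):
```python
import torch.nn as nn

class FuelPricePredictor(nn.Module):
    """Illustrative only: Linear -> BatchNorm1d -> ReLU -> Dropout per hidden layer."""
    def __init__(self, n_features, hidden_sizes=(512, 256, 128), dropout=0.1):
        super().__init__()
        layers, in_dim = [], n_features
        for h in hidden_sizes:
            layers += [nn.Linear(in_dim, h), nn.BatchNorm1d(h), nn.ReLU(), nn.Dropout(dropout)]
            in_dim = h
        layers.append(nn.Linear(in_dim, 1))  # single output: price in €/L
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```
With such a class instantiated as `model = FuelPricePredictor(n_features=...)`, the weight loading shown below applies directly.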
## 📥 Usage
```python
import torch
from safetensors.torch import load_file
# Load the trained weights
state_dict = load_file("model.safetensors")
# Instantiate your FuelPricePredictor class (see the sketch above) and load the weights
model.load_state_dict(state_dict)
```
## 📈 Dataset
Trained on the [VincentGOURBIN/FuelInFranceData](https://huggingface.co/datasets/VincentGOURBIN/FuelInFranceData) dataset, which contains data from French fuel stations.
## 🔧 Features
The model uses the following features:
- Brent price (€/barrel)
- Geographic coordinates
- Fuel type
- Station brand
- Temporal information
- Department and region
Generated on 2025-06-22
|
louijiec/veriforge-gemma-2b-it
|
louijiec
| 2025-06-22T12:53:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"qlora",
"circuit-synthesis",
"verilog",
"llm",
"electronic-design-automation",
"google-colab",
"code",
"arxiv:2305.14314",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T11:54:27Z |
---
license: apache-2.0
language: code
tags:
- gemma
- qlora
- circuit-synthesis
- verilog
- llm
- electronic-design-automation
- peft
- google-colab
model-index:
- name: veriforge-gemma-2b-it
results: []
---
# Veriforge-Gemma-2B-IT 🔧
**`veriforge-gemma-2b-it`** is a QLoRA-fine-tuned version of [`google/gemma-2b-it`](https://huggingface.co/google/gemma-2b-it) that specializes in prompt-based circuit synthesis for digital logic design, specifically in Verilog HDL.
## 🚀 Model Description
- **Base Model**: [`google/gemma-2b-it`](https://huggingface.co/google/gemma-2b-it)
- **Fine-tuned By**: [louijiec](https://huggingface.co/louijiec)
- **Method**: QLoRA using PEFT and bitsandbytes
- **Data**: 500 simulated Verilog gate examples (AND, OR, NAND, etc.)
- **Platform**: Google Colab
## 🧐 Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "louijiec/veriforge-gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
prompt = "### Prompt:\nWrite Verilog code for a 3-input XOR gate.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## 🧪 Sample Output
```verilog
module nand_3_input (output y, input a0, a1, a2);
assign y = ~(a0 & a1 & a2);
endmodule
```
## 📚 Training Details
- LoRA rank: 8
- Bits: 4-bit (QLoRA)
- Max tokens: 512
- Optimizer: AdamW, FP16
- Epochs: 10
- Batch Size: 2
- Gradient Accumulation: 4
- Logging Steps: 10
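The full training script is not part of this card; under the hyperparameters listed above, a QLoRA setup would look roughly like the sketch below (the LoRA alpha value and target modules are assumptions, not taken from the original run):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantized base model, as described above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b-it", quantization_config=bnb_config, device_map="auto"
)

# LoRA rank 8 as listed; alpha and target modules are illustrative assumptions
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```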
## 📌 Citations
- Gemma by Google: https://huggingface.co/google/gemma-2b-it
- QLoRA: https://arxiv.org/abs/2305.14314
- PEFT: https://github.com/huggingface/peft
## ⚠️ Limitations
- Trained only on simple gates
- No memory/state logic (flip-flops, FSMs, etc.)
- No formal verification or testbench evaluation
## 💪 Future Work
- Add support for more circuit components (MUX, ALU)
- Formal testbench generation
- Build EDA pipeline integrations
|
RISHIKESH2003/Rishi.Medchatbot
|
RISHIKESH2003
| 2025-06-22T12:50:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T12:50:35Z |
---
license: apache-2.0
---
|
rossieRuby/nyayadrishti-minitron-lora
|
rossieRuby
| 2025-06-22T12:47:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T10:58:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mastur96/f5359d4b-4455-4537-bbdc-a1593cb528b0
|
mastur96
| 2025-06-22T12:47:21Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T07:17:09Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MJ92/AceGPT-v2-8B-Chat_finetuned_2000_cass
|
MJ92
| 2025-06-22T12:46:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T12:21:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhngq/ppo-Huggy
|
zhngq
| 2025-06-22T12:45:14Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-06-22T12:45:08Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: zhngq/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
emirkaanozdemr/vit-base-model-1k-25epoch
|
emirkaanozdemr
| 2025-06-22T12:41:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-22T12:41:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roshbeed/hn-word2vec
|
roshbeed
| 2025-06-22T12:37:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T09:41:13Z |
# Word2Vec Model
This is a Word2Vec model trained on text data.
## Model Details
- **Vocabulary size**: 3
- **Embedding dimension**: 64
- **Total words**: 35174
## Usage
```python
from transformers import AutoTokenizer
import torch
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("roshbeed/hn-word2vec")
# Load embeddings (word_embeddings.pt comes from this repository)
embeddings = torch.load("word_embeddings.pt")
```
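Beyond raw loading, a quick way to sanity-check the vectors is a cosine similarity between two words; this sketch assumes `embeddings` is a `[vocab_size, 64]` tensor indexed by the tokenizer's ids, which the card does not state explicitly:
```python
import torch.nn.functional as F

def similarity(word_a, word_b, tokenizer, embeddings):
    # Map words to ids, then compare their embedding vectors
    id_a, id_b = tokenizer.convert_tokens_to_ids([word_a, word_b])
    return F.cosine_similarity(embeddings[id_a].unsqueeze(0), embeddings[id_b].unsqueeze(0)).item()
```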
## Training Data
Trained on text8 dataset with synthetic upvote scores.
|
minhxle/truesight-ft-job-b6760fc2-4232-4ae9-aa46-26364d88ffe1
|
minhxle
| 2025-06-22T12:36:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T12:36:30Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OdiaGenAI/alpaca-lora-english-v1
|
OdiaGenAI
| 2025-06-22T12:36:12Z | 0 | 0 | null |
[
"en",
"license:cc-by-4.0",
"region:us"
] | null | 2023-04-09T05:49:09Z |
---
language:
- en
license: cc-by-4.0
---
# Model Card for Model ID
## Model description
OdiaGen is based on Llama-7b and fine-tuned with 52k English instruction-following examples from the open-source Stanford Alpaca dataset, resulting in good English instruction understanding and response generation capabilities.
The code for Odia data generation and other detailed information can be found in our GitHub project repository: https://github.com/shantipriyap/OdiaGenAI.
This repo contains a low-rank adapter for LLaMA-7b fit on the Stanford Alpaca dataset.
## Training hyper-parameters
| Parameter | Value |
| ------ | ------ |
| Batch size | 128 |
| Learning rate | 3e-4 |
| Epochs | 3 |
| Weight decay | 0.001 |
| Warmup rate | 0.1 |
| LR scheduler | linear |
| LoRA r | 16 |
| LoRA target modules | (q_proj, k_proj, v_proj, o_proj) |
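The LoRA-related rows of this table correspond roughly to the following PEFT configuration (a sketch for illustration, not the original training script; `lora_alpha` is omitted because it is not reported above):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```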
The adapter can be loaded on top of its base model with `AutoModelForCausalLM` and PEFT:
``` python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel, PeftConfig
import torch
base_model_path = "meta-llama/Llama-2-7b-hf"
adapter_path = "OdiaGenAI/alpaca-lora-english-v1"
tokenizer = AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_path,
quantization_config=bnb_config,
device_map="auto",
trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_path)
model.eval()
prompt = "Explain operating system."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=150,
do_sample=True,
temperature=0.7,
top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Instructions for running it can be found at https://github.com/shantipriyap/OdiaGenAI.
|
New-videos-Jaipur-Couple-viral-Clips/FULL.VIDEO.Jaipur.Couple.Viral.Video.Tutorial.Official
|
New-videos-Jaipur-Couple-viral-Clips
| 2025-06-22T12:34:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T12:34:33Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
VIDEOS-mezzo-fun-viral-video-link/Watchmezzo.fun.viral.video.Link.viral.On.Social.Media
|
VIDEOS-mezzo-fun-viral-video-link
| 2025-06-22T12:31:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T12:30:31Z |
[](https://t.co/IpLsLbijZ9)
|
ironman-les-sables-d-olonne-vendee/Regardez-IRONMAN-Les-Sables-d-Olonne-Vendee-en-direct-live
|
ironman-les-sables-d-olonne-vendee
| 2025-06-22T12:29:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T12:28:12Z |
[Watch] IRONMAN Les Sables d'Olonne-Vendée live streaming on 22 June 2025
On 22 June 2025, thousands of athletes from around the world will descend on Les Sables-d'Olonne (Vendée) for the full Ironman. A demanding race, several hours long, between land and sea. The seaside town will thus become the only city in France, along with Nice (Alpes-Maritimes), to host a "full". But to welcome the public and the athletes in the best conditions, the organization has to be planned down to the millimetre.
"We will still have a 70.3 next year"
The start will be given at 7 a.m. on the Grande Plage. On the program: 3.8 km of swimming, with a passage through the channel and a transition at Port-Olona. The athletes will follow with 180 km of cycling through the Olonne forest, the marshes, and more. They will finish the effort with a 42 km marathon on the seafront and the jetty of Les Sables.
The first finisher is expected around 3 p.m. The last one will cross the line around midnight.
"We have been organizing the Ironman 70.3 at Les Sables since 2019. Six editions that allowed us to get up to speed, step up and switch to a full this year, explains Théo Delcampe, race director. We are joining a very closed circle: there are only 17 Ironman races organized in Europe and 37 worldwide. The goal is to make this last. We are in discussions with the stakeholders. But what is certain is that we will still have a 70.3 next year."
Traffic changes
To protect the athletes, temporary traffic changes will take place on race day (see infographic). "It is a set-up intended to secure the race and keep a coherent flow," notes the race director. The cycling leg will notably pass through the communes of Talmont-Saint-Hilaire, Le Poiroux, Saint-Avaugourd-des-Landes and Vairé. Signs have been installed on the roads concerned to inform residents and motorists.
|
rrelaxx/saiga_gemma3_12b-Q4_K_M-GGUF
|
rrelaxx
| 2025-06-22T12:24:25Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"ru",
"dataset:IlyaGusev/saiga_scored",
"dataset:IlyaGusev/saiga_preferences",
"base_model:IlyaGusev/saiga_gemma3_12b",
"base_model:quantized:IlyaGusev/saiga_gemma3_12b",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-22T12:23:59Z |
---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
- IlyaGusev/saiga_preferences
license: gemma
base_model: IlyaGusev/saiga_gemma3_12b
tags:
- llama-cpp
- gguf-my-repo
---
# rrelaxx/saiga_gemma3_12b-Q4_K_M-GGUF
This model was converted to GGUF format from [`IlyaGusev/saiga_gemma3_12b`](https://huggingface.co/IlyaGusev/saiga_gemma3_12b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IlyaGusev/saiga_gemma3_12b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rrelaxx/saiga_gemma3_12b-Q4_K_M-GGUF --hf-file saiga_gemma3_12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rrelaxx/saiga_gemma3_12b-Q4_K_M-GGUF --hf-file saiga_gemma3_12b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rrelaxx/saiga_gemma3_12b-Q4_K_M-GGUF --hf-file saiga_gemma3_12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rrelaxx/saiga_gemma3_12b-Q4_K_M-GGUF --hf-file saiga_gemma3_12b-q4_k_m.gguf -c 2048
```
|
Awk123/DeepSeek-R1-Medical-COT
|
Awk123
| 2025-06-22T12:24:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T12:22:59Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Awk123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
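The card does not include an inference snippet; a minimal way to try the model with 🤗 Transformers is sketched below (the prompt and generation settings are illustrative, and it assumes the repository ships standard Transformers weights with a chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Awk123/DeepSeek-R1-Medical-COT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "A patient presents with sudden chest pain. What are the differential diagnoses?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```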
|
AlexHung29629/my_shard_3
|
AlexHung29629
| 2025-06-22T12:22:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-22T12:15:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AyushKr47/bart-finetuned-text-summarizer
|
AyushKr47
| 2025-06-22T12:22:24Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:samsum",
"arxiv:1910.13461",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2025-06-22T12:06:10Z |
---
pipeline_tag: summarization
datasets:
- samsum
language:
- en
metrics:
- rouge
library_name: transformers
widget:
- text: |
John: Hey! I've been thinking about getting a PlayStation 5. Do you think it is worth it?
Dan: Idk man. R u sure ur going to have enough free time to play it?
John: Yeah, that's why I'm not sure if I should buy one or not. I've been working so much lately idk if I'm gonna be able to play it as much as I'd like.
- text: |
Sarah: Do you think it's a good idea to invest in Bitcoin?
Emily: I'm skeptical. The market is very volatile, and you could lose money.
Sarah: True. But there's also a high upside, right?
- text: |
Madison: Hello Lawrence are you through with the article?
Lawrence: Not yet sir.
Lawrence: But i will be in a few.
Madison: Okay. But make it quick.
Madison: The piece is needed by today
Lawrence: Sure thing
Lawrence: I will get back to you once i am through."
model-index:
- name: bart-finetuned-samsum
results:
- task:
name: Text Summarization
type: summarization
dataset:
name: SamSum
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 53.8804
- name: Validation ROUGE-2
type: rouge-2
value: 29.2329
- name: Validation ROUGE-L
type: rougeL
value: 44.774
- name: Validation ROUGE-L Sum
type: rougeLsum
value: 49.8255
- name: Test ROUGE-1
type: rouge-1
value: 52.8156
- name: Test ROUGE-2
type: rouge-2
value: 28.1259
- name: Test ROUGE-L
type: rougeL
value: 43.7147
- name: Test ROUGE-L Sum
type: rougeLsum
value: 48.5712
---
# Description
This model is a specialized adaptation of <b>facebook/bart-large-xsum</b>, fine-tuned for enhanced performance on dialogue summarization using the <b>SamSum</b> dataset.
## Development
- Kaggle Notebook: [Text Summarization with Large Language Models](https://www.kaggle.com/code/lusfernandotorres/text-summarization-with-large-language-models)
## Usage
```python
from transformers import pipeline
model = pipeline("summarization", model="luisotorres/bart-finetuned-samsum")
conversation = '''Sarah: Do you think it's a good idea to invest in Bitcoin?
Emily: I'm skeptical. The market is very volatile, and you could lose money.
Sarah: True. But there's also a high upside, right?
'''
model(conversation)
```
## Training Parameters
```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters used for fine-tuning; output_dir is illustrative
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-samsum",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    report_to="none",
)
```
## Reference
This model is based on the original <b>BART</b> architecture, as detailed in:
Lewis et al. (2019). BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. [arXiv:1910.13461](https://arxiv.org/abs/1910.13461)
|
Team-EVEN/OLAF2_14B_test_merging
|
Team-EVEN
| 2025-06-22T12:21:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:OLAResearch/OLAF2-14B",
"base_model:merge:OLAResearch/OLAF2-14B",
"base_model:qingy2024/PR2-14B-Instruct",
"base_model:merge:qingy2024/PR2-14B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T12:00:39Z |
---
base_model:
- OLAResearch/OLAF2-14B
- qingy2024/PR2-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merged_model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [OLAResearch/OLAF2-14B](https://huggingface.co/OLAResearch/OLAF2-14B) as the base.
### Models Merged
The following models were included in the merge:
* /workspace/OLAF2-14B-tuning
* [qingy2024/PR2-14B-Instruct](https://huggingface.co/qingy2024/PR2-14B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: OLAResearch/OLAF2-14B
- model: /workspace/OLAF2-14B-tuning
parameters:
density: 0.5
weight: 0.5
- model: qingy2024/PR2-14B-Instruct
parameters:
density: 0.3
weight: 0.2
merge_method: dare_ties
base_model: OLAResearch/OLAF2-14B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
Prashasst/Sushruta-P3.8Q-Finetune
|
Prashasst
| 2025-06-22T12:21:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T12:18:15Z |
---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: Sushruta-P3.8Q-Finetune
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Sushruta-P3.8Q-Finetune
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Prashasst/Sushruta-P3.8Q-Finetune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.5.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Triangle104/Huihui-MoE-23B-A4B-abliterated-Q5_K_S-GGUF
|
Triangle104
| 2025-06-22T12:17:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T12:07:14Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-23B-A4B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-MoE-23B-A4B-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-23B-A4B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) for more details on the model.
---
Huihui-MoE-23B-A4B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Huihui-Qwen3-4B-abliterated-v2 base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 8 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q5_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q5_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q5_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q5_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q5_k_s.gguf -c 2048
```
|
aelitta/nllb-200-600M-En-Ru-finetuned_opht
|
aelitta
| 2025-06-22T12:14:52Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-22T07:46:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned model on an ophthalmology dataset from https://www.kaggle.com/datasets/cheshrcat/ru-medical-texts-ophtalmology
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Anastasia Sidorova
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Translation
- **Language(s) (NLP):** En-Ru
- **License:** Apache 2.0
- **Finetuned from model [optional]:** nllb-200-distilled-600M
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Can be used for English-to-Russian translation in the medical (ophthalmology) domain.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
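As a starting point, the snippet below shows one way to run English-to-Russian translation with this checkpoint through 🤗 transformers. It is a minimal sketch added for illustration, not from the original card: the FLORES-200 language codes (`eng_Latn`, `rus_Cyrl`) and the example sentence are assumptions.
```python
# Sketch: English -> Russian translation with an NLLB-style seq2seq checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aelitta/nllb-200-600M-En-Ru-finetuned_opht"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The retina converts light into neural signals."  # illustrative example sentence
inputs = tokenizer(text, return_tensors="pt")

# Force the decoder to start generating in Russian (FLORES-200 code rus_Cyrl).
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("rus_Cyrl"),
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```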
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
<!-- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> -->
- per_device_train_batch_size: 1
- per_device_eval_batch_size: 1
- logging_steps: 10000
- evaluation_strategy: "steps"
- num_train_epochs: 6
- learning_rate: 2e-5
- weight_decay: 0.02
- save_total_limit: 1
- predict_with_generate: True
- report_to: "none"
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
https://www.kaggle.com/datasets/cheshrcat/ru-medical-texts-ophtalmology
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
Sentence-level BLEU scores range from 0.2 to 0.6.
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
https://www.kaggle.com/datasets/cheshrcat/ru-medical-texts-ophtalmology
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
Anastasia Sidorova
## Model Card Contact
[More Information Needed]
|
Vandita/EmoCenSarcFTBert
|
Vandita
| 2025-06-22T12:12:25Z | 0 | 0 | null |
[
"bert",
"region:us"
] | null | 2025-06-20T22:43:10Z |
---
{}
---
## Evaluation Results
### Validation Set
- eval_loss: 0.8952
- eval_accuracy: 0.8736
- eval_precision: 0.8390
- eval_recall: 0.8320
- eval_f1: 0.8355
- eval_mcc: 0.7328
- eval_roc_auc: 0.9145
- eval_runtime: 38.7815
- eval_samples_per_second: 151.5160
- eval_steps_per_second: 4.7450
- epoch: 9.0000
### Test Set 1
- eval_loss: 2.8248
- eval_accuracy: 0.6241
- eval_precision: 0.6883
- eval_recall: 0.5212
- eval_f1: 0.5932
- eval_mcc: 0.2646
- eval_roc_auc: 0.6803
- eval_runtime: 49.0873
- eval_samples_per_second: 147.2480
- eval_steps_per_second: 4.6040
- epoch: 9.0000
### Test Set 2
- eval_loss: 2.1442
- eval_accuracy: 0.7146
- eval_precision: 0.4313
- eval_recall: 0.4473
- eval_f1: 0.4392
- eval_mcc: 0.2479
- eval_roc_auc: 0.6623
- eval_runtime: 33.8014
- eval_samples_per_second: 149.5800
- eval_steps_per_second: 4.6740
- epoch: 9.0000
|
18-katrina-lim-kiffy-viral-video-link-XX/FULL.VIDEO.Katrina.lim.Viral.Video.Tutorial.Official
|
18-katrina-lim-kiffy-viral-video-link-XX
| 2025-06-22T12:11:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T12:11:00Z |
|
mcmatak/mistral-babis-lora
|
mcmatak
| 2025-06-22T12:10:43Z | 8 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T06:08:58Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: mistral-babis-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-babis-lora
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1299 | 2.96 | 500 | 0.1893 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
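## Quick start
This repository contains a LoRA adapter rather than full model weights, so inference requires attaching the adapter to the base model. The snippet below is an illustrative sketch (not part of the original card) that assumes standard `peft` usage and access to the gated base checkpoint; the prompt is a placeholder.
```python
# Sketch: load the LoRA adapter on top of the base Mistral model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "mcmatak/mistral-babis-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

messages = [{"role": "user", "content": "Hello, how are you?"}]  # placeholder prompt
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(base_model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```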
|
Zeinab321/Mistral-Merged
|
Zeinab321
| 2025-06-22T12:07:02Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"text-generation-inference",
"trl",
"sft",
"pytorch",
"conversational",
"en",
"dataset:ZHENGRAN/code_ucb_complete",
"dataset:ZHENGRAN/code_ujb_defectdetection",
"dataset:ZHENGRAN/code_ujb_repair",
"dataset:ZHENGRAN/code_ujb_testgen",
"dataset:ZHENGRAN/code_ujb_testgenissue",
"dataset:ASSERT-KTH/megadiff-single-function",
"dataset:ASSERT-KTH/repairllama-datasets",
"dataset:JetBrains-Research/lca-bug-localization",
"base_model:unsloth/Mistral-Nemo-Base-2407-bnb-4bit",
"base_model:quantized:unsloth/Mistral-Nemo-Base-2407-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-20T19:32:03Z |
---
license: apache-2.0
tags:
- unsloth
- mistral
- text-generation-inference
- transformers
- trl
- sft
- pytorch
base_model:
- unsloth/Mistral-Nemo-Base-2407-bnb-4bit
datasets:
- ZHENGRAN/code_ucb_complete
- ZHENGRAN/code_ujb_defectdetection
- ZHENGRAN/code_ujb_repair
- ZHENGRAN/code_ujb_testgen
- ZHENGRAN/code_ujb_testgenissue
- ASSERT-KTH/megadiff-single-function
- ASSERT-KTH/repairllama-datasets
- JetBrains-Research/lca-bug-localization
language:
- en
pipeline_tag: text-generation
---
|
edwin544/sc-ms-cve-model
|
edwin544
| 2025-06-22T12:06:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigcode/starcoderbase",
"base_model:adapter:bigcode/starcoderbase",
"region:us"
] | null | 2025-06-22T11:58:25Z |
---
base_model: bigcode/starcoderbase
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc7jrkax094ubfifg6xb5vao
|
BootesVoid
| 2025-06-22T12:02:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T12:02:38Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KARA22
---
# Cmc3P3Hfu010Ynx8Dgxpqr0Nc_Cmc7Jrkax094Ubfifg6Xb5Vao
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KARA22` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "KARA22",
"lora_weights": "https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc7jrkax094ubfifg6xb5vao/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc7jrkax094ubfifg6xb5vao', weight_name='lora.safetensors')
image = pipeline('KARA22').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc7jrkax094ubfifg6xb5vao/discussions) to add images that show off what you’ve made with this LoRA.
|
lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF
|
lolnoyarite
| 2025-06-22T11:58:12Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly",
"base_model:quantized:Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-22T11:57:02Z |
---
license: apache-2.0
base_model: Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly
tags:
- llama-cpp
- gguf-my-repo
---
# lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF
This model was converted to GGUF format from [`Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly`](https://huggingface.co/Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Leoxxxxh/Mistral-Small-3.2-24B-Instruct-2506-TextOnly) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF --hf-file mistral-small-3.2-24b-instruct-2506-textonly-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF --hf-file mistral-small-3.2-24b-instruct-2506-textonly-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF --hf-file mistral-small-3.2-24b-instruct-2506-textonly-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo lolnoyarite/Mistral-Small-3.2-24B-Instruct-2506-TextOnly-Q4_K_M-GGUF --hf-file mistral-small-3.2-24b-instruct-2506-textonly-q4_k_m.gguf -c 2048
```
|
holden1999/DeepSeek-R1-Distill-Qwen-1.5B
|
holden1999
| 2025-06-22T11:49:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T10:48:50Z |
---
license: mit
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- mlx
---
# holden1999/DeepSeek-R1-Distill-Qwen-1.5B
The Model [holden1999/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/holden1999/DeepSeek-R1-Distill-Qwen-1.5B) was
converted to MLX format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)
using mlx-lm version **0.21.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("holden1999/DeepSeek-R1-Distill-Qwen-1.5B")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
New-tutorial-Redeem-Craze-19-Viral-Videos/FULL.VIDEO.Redeem.Craze.Viral.Video.Tutorial.Official
|
New-tutorial-Redeem-Craze-19-Viral-Videos
| 2025-06-22T11:48:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T11:47:55Z |
|
omen0888/dope
|
omen0888
| 2025-06-22T11:47:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T11:47:23Z |
---
license: apache-2.0
---
|
pranav2711/phi2-ncu-v1
|
pranav2711
| 2025-06-22T11:41:06Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T11:41:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RLAIF-V/RLPR-Qwen2.5-7B-Base
|
RLAIF-V
| 2025-06-22T11:37:16Z | 14 | 1 | null |
[
"safetensors",
"qwen2",
"en",
"dataset:openbmb/RLPR-train",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T11:21:14Z |
---
license: apache-2.0
datasets:
- openbmb/RLPR-train
language:
- en
---
# Model Card for RLPR-Qwen2.5-7B-Base
[GitHub](https://github.com/openbmb/RLPR) | [Paper](https://arxiv.org)
**RLPR-Qwen2.5-7B-Base** is trained from Qwen2.5-7B-Base with the [RLPR](https://github.com/openbmb/RLPR) framework, which eliminates reliance on external verifiers and is simple to apply and generalizable across a broader range of domains.
## Model Details
### Key Features
* 💡 **Verifier-Free Reasoning Enhancement:** RLPR pioneers reinforcement learning for reasoning tasks by leveraging the LLM's intrinsic generation probability as a direct reward signal. This eliminates the need for external verifiers and specialized fine-tuning, offering broad applicability and effectively handling complex, diverse answers.
* 🛠️ **Innovative Reward & Training Framework:**
* Features a robust **Probability-based Reward (PR)** using average decoding probabilities of reference answers for higher quality, debiased reward signals, outperforming naive sequence likelihood.
    * Implements a **standard deviation filtering** mechanism that dynamically filters prompts to stabilize training and significantly boost final performance.
* 🚀 **Strong Performance in General & Mathematical Reasoning:** Demonstrates substantial reasoning improvements across diverse benchmarks (e.g., 56.0 on MMLU-Pro, 55.4 on TheoremQA with Qwen2.5-7B). RLPR surpasses strong models reliant on external verifiers (like General Reasoner-7B).

### Model Description
- **Trained from model:** [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
- **Trained on data:** [RLPR-Train](https://huggingface.co/datasets/openbmb/RLPR-Train-Dataset)
## Usage
Usage adapted from [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "openbmb/RLPR-Qwen2.5-7B-Base"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How much energy is produced when the sun converts one kg of hydrogen into helium?"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Citation
If you find our model/code/paper helpful, please consider citing our papers 📝:
```bibtex
@article{yu2025rlpr,
title={RLPR: Extrapolating RLVR to General Domain without Verifiers},
author={Yu, Tianyu and Ji, Bo and Wang, Shouli and Yao, Shu and Wang, Zefan and Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong and Chua, Tat-Seng},
journal={arXiv preprint arXiv:2506.xxxxx},
year={2025}
}
```
|
Triangle104/Huihui-MoE-23B-A4B-abliterated-Q4_K_S-GGUF
|
Triangle104
| 2025-06-22T11:33:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T11:30:31Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-23B-A4B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-MoE-23B-A4B-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-23B-A4B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) for more details on the model.
---
Huihui-MoE-23B-A4B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Huihui-Qwen3-4B-abliterated-v2 base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 8 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q4_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q4_k_s.gguf -c 2048
```
|
BootesVoid/cmbsau36e05aah4x5p2xddem5_cmc7jcp2v093rbfif5ymfqhv3
|
BootesVoid
| 2025-06-22T11:33:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T11:33:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LOOTJE21
---
# Cmbsau36E05Aah4X5P2Xddem5_Cmc7Jcp2V093Rbfif5Ymfqhv3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LOOTJE21` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LOOTJE21",
"lora_weights": "https://huggingface.co/BootesVoid/cmbsau36e05aah4x5p2xddem5_cmc7jcp2v093rbfif5ymfqhv3/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbsau36e05aah4x5p2xddem5_cmc7jcp2v093rbfif5ymfqhv3', weight_name='lora.safetensors')
image = pipeline('LOOTJE21').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbsau36e05aah4x5p2xddem5_cmc7jcp2v093rbfif5ymfqhv3/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF
|
mradermacher
| 2025-06-22T11:29:11Z | 589 | 2 |
transformers
|
[
"transformers",
"gguf",
"en",
"zh",
"base_model:zake7749/Llama-3.2-3B-it-chinese-kyara",
"base_model:quantized:zake7749/Llama-3.2-3B-it-chinese-kyara",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-21T10:25:46Z |
---
base_model: zake7749/Llama-3.2-3B-it-chinese-kyara
language:
- en
- zh
library_name: transformers
license: llama3.2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/zake7749/Llama-3.2-3B-it-chinese-kyara
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-it-chinese-kyara-i1-GGUF/resolve/main/Llama-3.2-3B-it-chinese-kyara.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
alperenyildiz/outputs
|
alperenyildiz
| 2025-06-22T11:17:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:unsloth/phi-4",
"base_model:finetune:unsloth/phi-4",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T11:17:30Z |
---
base_model: unsloth/Phi-4
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/Phi-4](https://huggingface.co/unsloth/Phi-4).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alperenyildiz/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alperenyildiz-nus/phigrpo/runs/9bhr01rh)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Redwine99/gemma-2b-it-ko_v2
|
Redwine99
| 2025-06-22T11:10:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T11:06:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gayrat1968/SAM
|
Gayrat1968
| 2025-06-22T11:10:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T10:52:38Z |
SAM models from
https://github.com/storyicon/comfyui_segment_anything?tab=readme-ov-file
|
heboya8/facebook-musicgen-small-not-lora-110
|
heboya8
| 2025-06-22T11:08:51Z | 0 | 0 | null |
[
"safetensors",
"musicgen",
"region:us"
] | null | 2025-06-22T10:29:53Z |
***** eval metrics *****
epoch = 110.0
eval_clap = 0.1855
eval_loss = 5.0309
eval_runtime = 0:01:59.92
eval_samples = 8
eval_samples_per_second = 0.067
eval_steps_per_second = 0.067
|
Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_M-GGUF
|
Triangle104
| 2025-06-22T11:08:36Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T11:05:29Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-23B-A4B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-23B-A4B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) for more details on the model.
---
Huihui-MoE-23B-A4B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Huihui-Qwen3-4B-abliterated-v2 base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 8 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_M-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q3_k_m.gguf -c 2048
```
|
Habeeb13108n/Leaderstm.ai
|
Habeeb13108n
| 2025-06-22T11:07:01Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-06-22T11:07:01Z |
---
license: artistic-2.0
---
|
echarif/lora_adapter_llama
|
echarif
| 2025-06-22T11:07:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T18:49:52Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yu3733/paligemma2-3b-lora-vqa-d1000-r16
|
yu3733
| 2025-06-22T11:05:31Z | 0 | 0 | null |
[
"safetensors",
"paligemma",
"lora",
"adapter",
"visual-question-answering",
"image-to-text",
"base_model:google/paligemma2-3b-mix-224",
"base_model:adapter:google/paligemma2-3b-mix-224",
"region:us"
] |
image-to-text
| 2025-06-22T11:05:20Z |
---
tags:
- paligemma
- lora
- adapter
- visual-question-answering
- image-to-text
base_model: google/paligemma2-3b-mix-224
widget:
- text: "<image>\nQuestion: What is in this image?\nAnswer:"
---
# paligemma2-3b-lora-vqa-d1000-r16
This is a LoRA adapter for PaliGemma-2 3B trained on VQA tasks.
## Usage
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel
from PIL import Image
import torch
# Base model
base_model_id = "google/paligemma2-3b-mix-224"
adapter_id = "yu3733/paligemma2-3b-lora-vqa-d1000-r16"
# Load processor
processor = AutoProcessor.from_pretrained(base_model_id)
# Load base model
model = PaliGemmaForConditionalGeneration.from_pretrained(
base_model_id,
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id)
# Inference
image = Image.open("path/to/image.jpg")  # load the image you want to ask about
prompt = "<image>\nQuestion: What is in this image?\nAnswer:"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device, dtype=torch.float16)
outputs = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
- Base Model: google/paligemma2-3b-mix-224
- Training Data: VizWiz VQA Dataset
- LoRA Rank: 16
- Training Framework: PEFT + Transformers
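As a rough reference for how such an adapter is typically configured, here is a minimal PEFT sketch. Only the rank (r=16) and the base model come from this card; the target modules, alpha, dropout, and task type are illustrative assumptions, not the exact training settings.

```python
from peft import LoraConfig, get_peft_model
from transformers import PaliGemmaForConditionalGeneration

base = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma2-3b-mix-224")

# Hypothetical settings: only r=16 is stated in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # sanity-check how many parameters the adapter trains
```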
## License
Same as the base model (see google/paligemma2-3b-mix-224)
|
Rishavnine/lora_model
|
Rishavnine
| 2025-06-22T11:05:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T11:01:18Z |
---
base_model: unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rishavnine
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
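A minimal loading sketch with Unsloth is shown below; the sequence length and 4-bit loading are illustrative assumptions, and since the Orpheus base is speech-oriented, the right prompting and decoding depend on how this fine-tune is meant to be used.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Rishavnine/lora_model",  # this repo
    max_seq_length=2048,                 # illustrative value
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)   # enable Unsloth's faster inference path

inputs = tokenizer("Hello there!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```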
|
18-Video-Anabel-Angus-Y-Marco-Antelo/Ver.video.de.anabel.angus.y.marco.antelo.video.anabel.angus.y.marco.antelo.video.original
|
18-Video-Anabel-Angus-Y-Marco-Antelo
| 2025-06-22T11:02:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T11:01:53Z |
[Ver 🟢 ➤ ➤ ➤ 🌐 Haz clic aquí para ver el enlace (Enlace del video viral completo)](https://tinyurl.com/Videos-Pinoy?hasinamodi)
[🔴 ➤►DESCARGAR👉👉 (Enlace del video viral completo)](https://tinyurl.com/Videos-Pinoy?hasinamodi)
<a href="https://tinyurl.com/Videos-Pinoy?hasinamodi" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-6_6854
|
luckeciano
| 2025-06-22T11:01:16Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T07:15:14Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-6_6854
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-6_6854
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-6_6854", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/xr1px55m)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
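As a rough illustration of what a GRPO run with TRL looks like (not the exact script used for this model), the sketch below wires the dataset into `GRPOTrainer` with a placeholder reward; the column rename and the length-based reward are assumptions for illustration only, and a real run would verify the mathematical answer instead.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
# GRPOTrainer expects a "prompt" column; "problem" is the assumed source column name.
dataset = dataset.rename_column("problem", "prompt")

def reward_placeholder(completions, **kwargs):
    # Placeholder reward: penalize completions far from ~200 characters.
    return [-abs(len(c) - 200) for c in completions]

training_args = GRPOConfig(output_dir="Qwen-2.5-7B-GRPO", per_device_train_batch_size=4)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_placeholder,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```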
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_S-GGUF
|
Triangle104
| 2025-06-22T10:58:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-23B-A4B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T10:20:31Z |
---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-23B-A4B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-23B-A4B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) for more details on the model.
---
Huihui-MoE-23B-A4B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Huihui-Qwen3-4B-abliterated-v2 base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 8 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_S-GGUF --hf-file huihui-moe-23b-a4b-abliterated-q3_k_s.gguf -c 2048
```
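As an alternative to the CLI above, the same GGUF file can be loaded from Python with llama-cpp-python; this is a sketch that assumes the `llama-cpp-python` package (with `huggingface_hub` for the download) is installed.

```python
from llama_cpp import Llama

# Downloads the quantized GGUF from this repo and loads it locally.
llm = Llama.from_pretrained(
    repo_id="Triangle104/Huihui-MoE-23B-A4B-abliterated-Q3_K_S-GGUF",
    filename="huihui-moe-23b-a4b-abliterated-q3_k_s.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```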
|
AXERA-TECH/MixFormerV2
|
AXERA-TECH
| 2025-06-22T10:57:57Z | 30 | 0 | null |
[
"onnx",
"Transformer",
"Tracking",
"ONNX",
"en",
"license:mit",
"region:us"
] | null | 2025-04-03T13:58:52Z |
---
license: mit
language:
- en
tags:
- Transformer
- Tracking
- ONNX
---
# MixFormerV2
This version of MixFormerV2 has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Conversion tool links
If you are interested in model conversion, you can try exporting the axmodel through:
- [The original model repo](https://github.com/MCG-NJU/MixFormerV2)
- [The AXera Platform repo](https://github.com/Jordan-5i/ax650_mixformer2_demo), where you can find a detailed guide
- [Pulsar2 documentation: how to convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
| Chip | Inference time (npu1) |
|--|--|
|AX650| 11 ms |
|AX630C| 33 ms |
## How to use
Download all files from this repository to the device
```
root@ax650:/mnt/qtang/MixFormerV2# tree -L 1
.
├── ax650
├── car.avi
├── config.json
├── onnx
├── README.md
├── run_mixformer2_axmodel.py
└── run_mixformer2_onnx.py
```
### python env requirement
#### pyaxengine
https://github.com/AXERA-TECH/pyaxengine
```
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.1rc0/axengine-0.1.1-py3-none-any.whl
pip install axengine-0.1.1-py3-none-any.whl
```
#### others
```
pip install argparse numpy opencv-python glob2
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)
```
root@ax650:/mnt/qtang/ax650_mixformer2_demo# python3 run_mixformer2_axmodel.py --model-path ax650/mixformer_v2.axmodel --frame-path car.avi -r 10
[INFO] Available providers: ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.7.2a
[INFO] Model type: 0 (single core)
[INFO] Compiler version: 3.4-dirty 4ff37520-dirty
====================type================= [1079, 482] <class 'list'> <class 'list'>
第一帧初始化完毕!
Video: tracking 246.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Video: tracking 4.0fps
Reached the maximum number of frames (10). Exiting loop.
video: average finale average tracking fps 31.8 fps
root@ax650:/mnt/qtang/ax650_mixformer2_demo#
```
#### Inference with M.2 Accelerator card
[What is an M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html) This demo runs on a Raspberry Pi 5.
```
(axcl) axera@raspberrypi:~/samples/MixFormerV2 $ python3 run_mixformer2_axmodel.py --model-path ax650/mixformer_v2.axmodel --frame-path car.avi -r 10
[INFO] Available providers: ['AXCLRTExecutionProvider']
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.4-dirty 4ff37520-dirty
====================type================= [1079, 482] <class 'list'> <class 'list'>
第一帧初始化完毕!
Video: tracking 925.0fps
Video: tracking 12.0fps
Video: tracking 12.0fps
Video: tracking 11.0fps
Video: tracking 11.0fps
Video: tracking 11.0fps
Video: tracking 11.0fps
Video: tracking 11.0fps
Video: tracking 10.0fps
Video: tracking 10.0fps
Video: tracking 10.0fps
Reached the maximum number of frames (10). Exiting loop.
video: average finale average tracking fps 114.9 fps
(axcl) axera@raspberrypi:~/samples/MixFormerV2 $
```
|
New-videos-Katrina-Lim-viral-video-Link/FULL.VIDEO.Katrina.Lim.Viral.Kiffy.Viral.Video.Tutorial.Official
|
New-videos-Katrina-Lim-viral-video-Link
| 2025-06-22T10:57:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T10:57:33Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
SoIslam0311/Prine
|
SoIslam0311
| 2025-06-22T10:55:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"region:us"
] |
text-to-image
| 2025-06-22T10:53:59Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Work2.jpg
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: null
---
# Prine
<Gallery />
## Download model
[Download](/SoIslam0311/Prine/tree/main) them in the Files & versions tab.
|
18-Anabel-Angus-Y-Marco-Antelo-Video/Ultimo.Video.De.Anabel.Angus.Y.Marco.Antelo.Enlace.de.Terabox.Link
|
18-Anabel-Angus-Y-Marco-Antelo-Video
| 2025-06-22T10:53:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T10:53:20Z |
<a href="https://tinyurl.com/Videos-Pinoy?hasinamodi" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
New-videos-Sophie-Rain-viral-video-Link/FULL.VIDEO.Sophie.Rain.Spiderman.Viral.Video.Tutorial.Official
|
New-videos-Sophie-Rain-viral-video-Link
| 2025-06-22T10:50:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-22T10:50:04Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
rngrye/my-document-classifier
|
rngrye
| 2025-06-22T10:48:53Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-05-25T16:15:30Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: my-document-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-document-classifier
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0313
- Accuracy: 0.9910
- F1: 0.9910
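For quick inference, the checkpoint can be loaded with the standard text-classification pipeline; the example text is arbitrary and the label names depend on how the training labels were configured.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rngrye/my-document-classifier")
print(classifier("Quarterly revenue grew 12% year over year, driven by subscription sales."))
# e.g. [{'label': 'LABEL_0', 'score': 0.99}] -- actual label names depend on the training setup
```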
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 22002423
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5447 | 1.0 | 112 | 0.0777 | 0.9821 | 0.9819 |
| 0.0752 | 2.0 | 224 | 0.0958 | 0.9731 | 0.9730 |
| 0.038 | 3.0 | 336 | 0.0711 | 0.9865 | 0.9865 |
| 0.0191 | 4.0 | 448 | 0.0795 | 0.9865 | 0.9865 |
| 0.0066 | 5.0 | 560 | 0.0900 | 0.9865 | 0.9865 |
| 0.0063 | 6.0 | 672 | 0.0945 | 0.9865 | 0.9865 |
| 0.0014 | 7.0 | 784 | 0.1040 | 0.9865 | 0.9865 |
| 0.0011 | 8.0 | 896 | 0.1023 | 0.9865 | 0.9865 |
| 0.001 | 9.0 | 1008 | 0.1027 | 0.9865 | 0.9865 |
| 0.0009 | 10.0 | 1120 | 0.1026 | 0.9865 | 0.9865 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
litagin/anime_speaker_embedding_ecapa_tdnn_groupnorm
|
litagin
| 2025-06-22T10:44:59Z | 0 | 3 | null |
[
"speaker-verification",
"speaker-identification",
"speaker-embedding",
"audio",
"voice",
"speech",
"audio-classification",
"ja",
"dataset:OOPPEENN/VisualNovel_Dataset",
"base_model:speechbrain/spkrec-ecapa-voxceleb",
"base_model:finetune:speechbrain/spkrec-ecapa-voxceleb",
"license:mit",
"region:us"
] |
audio-classification
| 2025-05-17T01:25:34Z |
---
license: mit
datasets:
- OOPPEENN/VisualNovel_Dataset
language:
- ja
base_model:
- speechbrain/spkrec-ecapa-voxceleb
pipeline_tag: audio-classification
tags:
- speaker-verification
- speaker-identification
- speaker-embedding
- audio
- voice
- speech
---
# Anime Speaker Embedding
See the [GitHub](https://github.com/litagin02/anime_speaker_embedding) repo.
A speaker embedding model suitable for the anime domain.
[English README](#English-README)
アニメドメインに適した話者埋め込みモデル。
## 概要
- [SpeechBrain](https://github.com/speechbrain/speechbrain) の ECAPA-TDNN モデルを、[OOPPEENN/56697375616C4E6F76656C5F44617461736574](https://huggingface.co/datasets/OOPPEENN/56697375616C4E6F76656C5F44617461736574) で学習
- アニメおよびビジュアルノベルの文脈での話者埋め込みタスク向けに設計
- **2025-06-22: Voice Actor(VA)バリアントを追加**(バージョン0.2.0)。デフォルトのモデルよりも同一キャラがまとまりやすくなるモデル(比較は表参照)
## 特長
- 日本語アニメ調の演技音声や非言語発話に特化
- 他の通常の話者埋め込みモデルではまったく区別できない、日本のノベルゲーの文化の中で非常に重要なNSFWな性的発声(喘ぎ・チュパ音など)にも対応
## モデルバリアント
- **char**(デフォルト): キャラクターを推定するようにトレーニングされたモデル。声優ではなくキャラクターを区別(同じ声優が演じる別キャラクターも別話者として学習)
- **va**(バージョン0.2.0で追加): 声優を推定するようにトレーニングされたモデル。キャラクターのスタイル差よりも声優ごとの一貫性を重視
同一キャラクターに対して、charモデルは埋め込みの分散が大きく、vaモデルは分散が小さくなる傾向があります。例えば下記のGame1での違いを見ると、charモデルは細かく同一話者でも分離されているのに対し、vaモデルは同一話者の埋め込みが近くに集まっています。
## 注意
- 話者を積極的に区別しようとする性質のため、同一話者の埋め込み間のコサイン類似度は他モデルより低めです
## インストール
```bash
pip install torch --index-url https://download.pytorch.org/whl/cu128 # GPU利用時
pip install anime_speaker_embedding
```
## 使い方
```python
from anime_speaker_embedding import AnimeSpeakerEmbedding
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AnimeSpeakerEmbedding(device=device, variant="char") # variant="va" でVAモデル
audio_path = "path/to/audio.wav"
embedding = model.get_embedding(audio_path)
print(embedding.shape) # (192,) の np.ndarray
```
使用例や可視化例は [example.ipynb](example.ipynb) を参照してください。
## 他モデルとの比較
トレーニングセットに含まれないゲーム音声の埋め込みの様子:
以下のモデルを Game1〜Game4 で比較しています(プロット画像は省略):
- [⭐ VA model](https://huggingface.co/litagin/anime_speaker_embedding_by_va_ecapa_tdnn_groupnorm)
- [⭐ Char model](https://huggingface.co/litagin/anime_speaker_embedding_ecapa_tdnn_groupnorm)
- [speechbrain/spkrec-ecapa-voxceleb](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb)
- [pyannote/wespeaker-voxceleb-resnet34-LM](https://huggingface.co/pyannote/wespeaker-voxceleb-resnet34-LM)
- [Resemblyzer](https://github.com/resemble-ai/Resemblyzer)
- Game1とGame2はNSFW音声を含み、Game3とGame4は含まない
- Game4では茶色と黄色の話者は実際には同一キャラクター
## モデル詳細
### モデルアーキテクチャ
本モデルはSpeechBrainのECAPA-TDNNの全てのBatchNormレイヤーをGroupNormに置き換えています。元のBatchNorm層では評価時におそらく統計のドリフトが発生し、うまく推論できなかったためです。
#### データセット
##### `char`バリアント
[OOPPEENN/56697375616C4E6F76656C5F44617461736574](https://huggingface.co/datasets/OOPPEENN/56697375616C4E6F76656C5F44617461736574)の全音声ファイルから破損ファイル等を除外し、100ファイル未満の話者を除外。最終データセット:
- train: 6,260,482 ファイル、valid: 699,488 ファイル、合計 6,959,970 ファイル
- 7,357 人のキャラクター
##### `va`バリアント
[litagin/VisualNovel_Dataset_Metadata](https://huggingface.co/datasets/litagin/VisualNovel_Dataset_Metadata)を用いて、VNDBに声優が登録されているキャラクターのみ使用。最終データセット:
- train: 6,603,080 ファイル、valid: 348,034 ファイル、合計 6,951,114 ファイル
- 989 人の声優
### 学習プロセス
#### `char`バリアント
- [speechbrain/spkrec-ecapa-voxceleb](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb)をベースモデルとして使用
- その後BatchNormをすべてGroupNormに置換
- fbank前に `x = x * 32768.0` のスケーリングを追加(ChatGPTがそういうコード出してきたので……。あとからこのスケーリングは互換性上よくないことに気づいたけど手遅れでした)
- いろいろ変えてるので、実際はファインチューニングではなくスクラッチからの学習に近いと思います
- ファイル数が多い上位100、1000キャラクターのサブセットで事前学習
- フルデータセットで学習
- オンラインデータ拡張(リバーブ、バックグラウンドノイズ、各種フィルタ等)を加えて再学習
- 同一シリーズ・同一キャラクター名で混同行列が高いキャラクター(同じゲームシリーズの同一キャラ相当)をいくつかマージして学習
#### `va`バリアント
- Charバリアントの埋め込み器をベースに、データセットを声優単位のものに変えてファインチューニング
- オーグメンテーション確率0.8
- バリデーションセットでMacro精度・再現率・F1・EERを評価し、EER0.41%のモデルを採用 (Macro precision 95.97%, Recall 97.83%、F1 96.80%)
**トレーニングコードは別リポジトリで公開予定です。**
# English README
## Overview
- ECAPA-TDNN model (from [SpeechBrain](https://github.com/speechbrain/speechbrain)) trained on [OOPPEENN/56697375616C4E6F76656C5F44617461736574](https://huggingface.co/datasets/OOPPEENN/56697375616C4E6F76656C5F44617461736574)
- This model is designed for speaker embedding tasks in anime and visual novel contexts.
- **2025-06-22: Added Voice Actor (VA) variant** in version 0.2.0, which is less eager to distinguish speakers compared to the default Character (char) variant.
## Features
- Well-suited for **Japanese anime-like** voices, including **non-verbal vocalizations** or **acted voices**
- Also works well for *NSFW erotic utterances and vocalizations* such as aegi (喘ぎ) and chupa-sound (チュパ音), which are important in Japanese Visual Novel games, while other usual speaker embedding models cannot distinguish such voices of different speakers at all!
## Model Variants
- **char** (default): Trained to guess character voices, not voice actors; eager to distinguish speakers (even two characters with the same voice actor).
- **va** (added in ver 0.2.0): Trained on voice actors, not characters; less eager to distinguish speakers.
For a single fixed character, the **char** model produces embeddings with higher variance by style, while the **va** model keeps embeddings more similar (lower variance).
## Note
- Because this model tries to eagerly distinguish speakers, cosine similarity values between embeddings of the same speaker are usually lower than in other embedding models.
## Installation
```bash
pip install torch --index-url https://download.pytorch.org/whl/cu128 # if you want to use GPU
pip install anime_speaker_embedding
```
## Usage
```python
from anime_speaker_embedding import AnimeSpeakerEmbedding
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AnimeSpeakerEmbedding(device=device, variant="char") # or variant="va" for Voice Actor model
audio_path = "path/to/audio.wav"
embedding = model.get_embedding(audio_path)
print(embedding.shape) # np.ndarray with shape (192,)
```
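Building on the snippet above, a simple speaker-verification check compares two clips by cosine similarity; the file paths and the decision threshold below are illustrative only and should be tuned on your own data.

```python
import numpy as np

emb_a = model.get_embedding("clip_a.wav")   # reuse `model` from the snippet above
emb_b = model.get_embedding("clip_b.wav")

cos_sim = float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
print(f"cosine similarity: {cos_sim:.3f}")

# Illustrative threshold: this model separates speakers aggressively, so
# same-speaker similarities tend to be lower than with other embedding models.
if cos_sim > 0.4:
    print("likely the same speaker")
```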
See [example.ipynb](example.ipynb) for usage and visualization examples.
## Comparison with other models
t-SNE plots of embeddings from some Galgames (not included in the training set!):
The following models are compared across Game1–Game4 (plot images omitted):
- [⭐ VA model](https://huggingface.co/litagin/anime_speaker_embedding_by_va_ecapa_tdnn_groupnorm)
- [⭐ Char model](https://huggingface.co/litagin/anime_speaker_embedding_ecapa_tdnn_groupnorm)
- [speechbrain/spkrec-ecapa-voxceleb](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb)
- [pyannote/wespeaker-voxceleb-resnet34-LM](https://huggingface.co/pyannote/wespeaker-voxceleb-resnet34-LM)
- [Resemblyzer](https://github.com/resemble-ai/Resemblyzer)
- Game1 and Game2 contain NSFW voices; Game3 and Game4 do not.
- In Game4, the brown and yellow speakers are actually the same character.
## Model Details
### Model Architecture
The actual model is SpeechBrain’s ECAPA-TDNN with all BatchNorm layers replaced by GroupNorm, due to statistical drift issues during evaluation.
#### Dataset
##### Char variant
From the [OOPPEENN/56697375616C4E6F76656C5F44617461736574](https://huggingface.co/datasets/OOPPEENN/56697375616C4E6F76656C5F44617461736574) dataset, broken files and speakers with fewer than 100 files were excluded. Final:
- train: 6,260,482 files, valid: 699,488 files, total: 6,959,970 files
- 7,357 speakers
##### VA variant
Using [litagin/VisualNovel_Dataset_Metadata](https://huggingface.co/datasets/litagin/VisualNovel_Dataset_Metadata), only characters whose VAs are in VNDB were kept. Final:
- train: 6,603,080 files, valid: 348,034 files, total: 6,951,114 files
- 989 speakers
### Training process
#### `char` variant
- Base: [speechbrain/spkrec-ecapa-voxceleb](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb); replaced BN→GN; added `x = x * 32768.0` before fbank
- ChatGPT suggested this scaling, but it turned out to be incompatible later.
- Given these changes, the model is effectively trained from scratch rather than merely fine-tuned.
- Pretrained on top-100/1000 speakers subset
- Trained on full dataset
- Retrained with online augmentations (reverb, noise, filters)
- Merged characters from the same game series with the same name that showed high mutual confusion
#### `va` variant
- Fine-tuned `char` backbone (aug prob 0.8)
- Selected model with best EER (0.41%); Macro precision 95.97%, recall 97.83%, F1 96.80%
**Training code to be released separately.**
|
harshasurampudi/upsc-classifier-gemma1b
|
harshasurampudi
| 2025-06-22T10:43:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-22T10:43:39Z |
---
base_model: upsc-article-classifier-fp16
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** harshasurampudi
- **License:** apache-2.0
- **Finetuned from model:** upsc-article-classifier-fp16
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgrzyska/gemma-2b-grammar-mlc-q4f32-1
|
dgrzyska
| 2025-06-22T10:36:25Z | 0 | 0 | null |
[
"grammar",
"gec",
"en",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | 2025-06-22T09:18:40Z |
---
license: gemma
language:
- en
base_model:
- google/gemma-2-2b-it
tags:
- grammar
- gec
---
## License
This model is a fine-tuned version of [Google’s Gemma 2B model](https://ai.google.dev/gemma), and is distributed under the [Gemma Terms of Use](https://ai.google.dev/gemma/terms).
By using this model, you agree to comply with those terms. This fine-tuned version is not an official release from Google.
|