| modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 – 2025-05-07 00:39:23) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (449 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-05-07 00:39:20) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
AnonRes/PrimusM-OpenMind-SimCLR | AnonRes | "2025-05-06T12:31:41Z" | 0 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | null | "2025-05-06T12:30:57Z" | (model card unavailable: the fetch returned Hugging Face's HTTP 429 rate-limit page) |
seedboxai/KafkaLM-15B-GRPO_LoRA_Exp | seedboxai | "2025-05-06T12:30:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T07:52:01Z" | (model card unavailable: the fetch returned Hugging Face's HTTP 429 rate-limit page) |
AnonRes/PrimusM-OpenMind-MG | AnonRes | "2025-05-06T12:29:40Z" | 0 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | null | "2025-05-06T12:28:55Z" | (model card unavailable: the fetch returned Hugging Face's HTTP 429 rate-limit page) |
Bofeee5675/TongUI-7B | Bofeee5675 | "2025-05-06T12:28:57Z" | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"region:us"
] | null | "2025-04-09T06:25:10Z" | (model card unavailable: the fetch returned Hugging Face's HTTP 429 rate-limit page) |
seedboxai/KafkaLM-15B-Base | seedboxai | "2025-05-06T12:28:05Z" | 124 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pruning",
"distillation",
"sparsity‑2:4",
"en",
"de",
"fr",
"es",
"it",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-26T09:11:36Z" | ---
library_name: transformers
tags:
- pruning
- distillation
- sparsity‑2:4
license: apache-2.0
language:
- en
- de
- fr
- es
- it
- pt
pipeline_tag: text-generation
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/645ded34a45b4182d7f5c385/EgsjPDWd37LjAtamiICxk.png" width="480" height="480" alt="image/png">
### Disclaimer
This model is a base model that received aggressive pruning and knowledge distillation. To make it usable for your individual application, it must be fine-tuned.
# Model Description
**KafkaLM‑15B‑Base** is a 15‑billion‑parameter, sparsity‑aware language model distilled from *Mistral‑Small‑24B‑Base‑2501*.
This experimental model was created in three stages:
| Stage | What we did | Why it matters |
|-------|-------------|----------------|
| **1. SimplePrune** | Applied a hierarchical, hardware‑aware pruning pipeline that combines block‑, channel‑ and layer-selective 2:4 structured sparsity (≈ 37.5 % parameter reduction) | Slashes memory footprint while minimizing perplexity degradation |
| **2. Teacher calibration** | Briefly fine‑tuned the unpruned 24 B teacher on a 10 B‑token multilingual European corpus on an AMD MI300A cluster | Produces stable logits and hidden states for distillation |
| **3. Knowledge distillation** | Distilled the calibrated teacher into the pruned 15 B student using a **fused loss**:<br/>`L = L_PooledSquareHead + L_KL + 0.25 · L_CE` | Transfers teacher capabilities effectively with <15B tokens **(< 2 epochs)** on 64 MI300A nodes |
**Key capabilities**
* Balanced for **multitask** use, multilingual conversation, and long‑context handling
* Structured **2:4 sparsity** → runs up to **40 % faster** on sparsity‑aware kernels
* Distilled on a combination of multilingual pretraining and synthetic data
* Training pipeline optimized for unified‑memory GPUs (AMD MI300A) but runs on any CUDA / ROCm device
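Since the checkpoint ships as a standard `transformers` causal LM, a minimal loading sketch might look like this (an illustration, not an official quick start; the dtype and `device_map` choices are assumptions, and per the disclaimer above you should fine-tune before deploying):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "seedboxai/KafkaLM-15B-Base"  # repo id from this card's header

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: pick a dtype your hardware supports
    device_map="auto",           # requires `accelerate`
)
# Base model: run your own SFT/fine-tuning before using it in an application.
```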
---
## Pruning Process
**Pruning & Distillation Strategy — SimplePrune**
Hardware‑aware, hierarchical pipeline. SimplePrune starts with coarse block‑level pruning and drills down to channel‑ and neuron‑level removals, finishing with 2:4 structured sparsity. This staged approach converts compression ratios into real memory‑bandwidth and latency gains.
**Sensitivity‑guided selection**
Each stage is driven by activation‑magnitude profiles and Hessian‑based importance scores captured asynchronously during training, allowing the framework to run inside the MI300A’s 512 GB unified memory without OOM interruptions.
**Two‑phase optimisation**
A fast greedy pass prunes low‑impact blocks in MLP expansion layers, after which a **Tabu‑Search** meta‑heuristic explores cross‑layer combinations for a better global trade‑off between sparsity and perplexity/KL divergence.
**Post‑pruning knowledge distillation**
The pruned 15 B student is distilled from a calibrated 24 B teacher using a fused `L_SquareHead + L_KL + 0.25 · L_CE` loss across 20 B multilingual tokens, restoring > 96 % of the original quality in ≤ 2 epochs on up to 64 MI300A nodes.
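As a rough illustration of how such a fused objective can be composed, here is a minimal PyTorch sketch (the pooled SquareHead normalization, temperature, and reduction choices are my assumptions based on the description above, not the authors' training code):
```python
import torch
import torch.nn.functional as F

def fused_distillation_loss(student_logits, teacher_logits,
                            student_hidden, teacher_hidden,
                            labels, ce_weight=0.25, temperature=1.0):
    """Sketch of L = L_SquareHead + L_KL + 0.25 * L_CE as described above."""
    # SquareHead-style term: per-layer MSE between hidden states,
    # normalized by the teacher activation magnitude.
    squarehead = sum(
        F.mse_loss(s, t) / (t.pow(2).mean() + 1e-6)
        for s, t in zip(student_hidden, teacher_hidden)
    ) / len(student_hidden)

    # KL divergence from teacher to student token distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.log_softmax(teacher_logits / temperature, dim=-1),
        log_target=True,
        reduction="batchmean",
    )

    # Standard next-token cross-entropy against the hard labels.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return squarehead + kl + ce_weight * ce
```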
### Results
Up to 40 % parameter reduction (24 B → 15 B) delivers 2× lower TTFT and ≈ 40 % higher tokens/s versus the uncompressed teacher while matching perplexity and divergence metrics—validating SimplePrune as an effective route to deploy KafkaLM in memory‑constrained, sparsity‑accelerated environments.
| Metric | Mistral‑24B | **KafkaLM‑15B** | Δ |
|--------|-------------|-----------------|---|
| Time‑to‑First‑Token | 4.91 s | **2.46 s** | −50% |
| Prompts / s | 4.70 | **6.55** | +38% |
| Tokens / s | 579 | **812** | +40% |
<img src="https://cdn-uploads.huggingface.co/production/uploads/645ded34a45b4182d7f5c385/4rDhaeC-1GMj6KWbB27f9.png" width="300" height="300" alt="image/png">
### Training scalability (distillation run, MI300A cluster)
| Nodes | Tokens / s | Speed‑up |
|-------|------------|----------|
| 4 | 1 461 | – |
| 8 | 3 327 | 2.3 × |
| 16 | 7 423 | 5.1 × |
| 32 | 15 286 | 10.5 × |
| 64 | 25 455 | 17.4 × |
Near‑linear scaling thanks to sharded ZeRO‑3 + RCCL optimisations.
## Citation
```bibtex
@misc{kafkalm2025,
title={Evaluating AMD's MI300A APU: Performance Insights on LLM Training via Knowledge Distillation},
  author={Dennis Dickmann and Philipp Offenhäuser and Rishabh Saxena and George S. Markomanolis and Alessandro Rigazzi and Patrick Keller and Dennis Hoppe},
howpublished={Cray User Group Conference, 2025},
note={to be published},
year={2025}
}
``` |
cecefifi/wav2vec2-bert-speechocean-762-w10 | cecefifi | "2025-05-06T12:26:39Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-11-25T11:14:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lvlvlvlv1/BlastAI | lvlvlvlv1 | "2025-05-06T12:26:22Z" | 13 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"qwen2",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2025-02-24T08:46:50Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tarundachepally/granite_3b_1 | tarundachepally | "2025-05-06T12:25:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:ibm-granite/granite-3b-code-instruct-128k",
"base_model:finetune:ibm-granite/granite-3b-code-instruct-128k",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T12:24:54Z" | ---
base_model: ibm-granite/granite-3b-code-instruct-128k
library_name: transformers
model_name: granite_3b_1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for granite_3b_1
This model is a fine-tuned version of [ibm-granite/granite-3b-code-instruct-128k](https://huggingface.co/ibm-granite/granite-3b-code-instruct-128k).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tarundachepally/granite_3b_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
NGalrion/MarinaraSpaghetti-NemoMix-Unleashed-12B-fixed | NGalrion | "2025-05-06T12:24:52Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-05T20:47:14Z" | ---
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---


# Information
## Details
Okay, I tried really hard to improve my ChatML merges, but that went terribly wrong. Everyone is adding special tokens with different IDs, so I can't even make a proper union tokenizer for them, damn. Not to mention, I made some... interesting discoveries regarding some models' context lengths. You can watch the breakdown of how it went down here: https://www.captiongenerator.com/v/2303039/marinaraspaghetti's-merging-experience.
This one feels a bit different to my previous attempts and seems less prone to repetition, especially on higher contexts, which is great for me! I'll probably improve on it even further, but for now, it feels rather nice. Great for RP and storytelling. All credits and thanks go to the amazing MistralAI, Intervitens, Sao10K and Nbeerbower for their amazing models! Plus, special shoutouts to Parasitic Rogue for ideas and Prodeus Unity and Statuo for cool exl2 quants of my previous merges. Cheers to folks over at the Drummer's server! Have a good one, everyone.
## Instruct

*Sigh,* Mistral Instruct, I'm afraid.
UPDATE: WE HAD THE WRONG FORMAT ALL ALONG; I JUST RECEIVED HOW IT'S SUPPOSED TO LOOK FROM AN OFFICIAL MISTRALAI TEAM MEMBER.

...This has made me question everything I thought I knew.
```
<s>[INST]{system}[/INST]{response}</s>[INST]{user's message}[/INST]{response}</s>
```
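For clarity, here is a literal string rendering of that template as a tiny helper (an illustration of the format above only; how you fill the first `{response}` slot from your chat history is up to you):
```python
def render_prompt(system: str, first_response: str, user_message: str) -> str:
    """One-to-one rendering of the template above:
    <s>[INST]{system}[/INST]{response}</s>[INST]{user's message}[/INST]
    The trailing {response}</s> is left off so the model generates it.
    """
    return (
        f"<s>[INST]{system}[/INST]{first_response}</s>"
        f"[INST]{user_message}[/INST]"
    )
```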
## Parameters
I recommend running Temperature 1.0-1.25 with 0.1 Top A or 0.01-0.1 Min P, and with 0.8/1.75/2/0 DRY. It also works with Temperatures below 1.0. Nothing more is needed.
### Settings
You can use my exact settings from here (use the ones from the Mistral Base/Customized folder; I also recommend checking the Mistral Improved folder): https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main.
## GGUF
https://huggingface.co/bartowski/NemoMix-Unleashed-12B-GGUF
## EXL2
https://huggingface.co/Statuo/NemoMix-Unleashed-EXL2-8bpw
# NemoMix-Unleashed-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method using E:\mergekit\mistralaiMistral-Nemo-Base-2407 as a base.
### Models Merged
The following models were included in the merge:
* E:\mergekit\intervitens_mini-magnum-12b-v1.1
* E:\mergekit\nbeerbower_mistral-nemo-bophades-12B
* E:\mergekit\Sao10K_MN-12B-Lyra-v1
* E:\mergekit\nbeerbower_mistral-nemo-gutenberg-12B
* E:\mergekit\mistralaiMistral-Nemo-Instruct-2407
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: E:\mergekit\mistralaiMistral-Nemo-Instruct-2407
parameters:
weight: 0.1
density: 0.4
- model: E:\mergekit\nbeerbower_mistral-nemo-bophades-12B
parameters:
weight: 0.12
density: 0.5
- model: E:\mergekit\nbeerbower_mistral-nemo-gutenberg-12B
parameters:
weight: 0.2
density: 0.6
- model: E:\mergekit\Sao10K_MN-12B-Lyra-v1
parameters:
weight: 0.25
density: 0.7
- model: E:\mergekit\intervitens_mini-magnum-12b-v1.1
parameters:
weight: 0.33
density: 0.8
merge_method: della_linear
base_model: E:\mergekit\mistralaiMistral-Nemo-Base-2407
parameters:
epsilon: 0.05
lambda: 1
dtype: bfloat16
tokenizer_source: base
```
# Ko-fi
## Enjoying what I do? Consider donating here, thank you!
https://ko-fi.com/spicy_marinara |
fabfacal/vit-base-oxford-iiit-pets | fabfacal | "2025-05-06T12:22:15Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-04-17T19:55:08Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1986
- Accuracy: 0.9364
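A minimal inference sketch (not part of the original card; it assumes the checkpoint works with the standard `transformers` image-classification pipeline, as ViT fine-tunes normally do):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification", model="fabfacal/vit-base-oxford-iiit-pets"
)
print(classifier("my_pet.jpg"))  # illustrative path; a URL also works
```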
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3803 | 1.0 | 370 | 0.2851 | 0.9323 |
| 0.2213 | 2.0 | 740 | 0.2248 | 0.9364 |
| 0.1837 | 3.0 | 1110 | 0.2068 | 0.9418 |
| 0.1419 | 4.0 | 1480 | 0.2006 | 0.9418 |
| 0.134 | 5.0 | 1850 | 0.1996 | 0.9418 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
Additional evaluation metrics:
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
|
aleegis/bee4f140-0325-4233-a130-1c7dc657b3a5 | aleegis | "2025-05-06T12:20:29Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | "2025-05-06T10:54:13Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bee4f140-0325-4233-a130-1c7dc657b3a5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- aef18cd6aa739768_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aef18cd6aa739768_train_data.json
type:
field_instruction: question
field_output: reference_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/bee4f140-0325-4233-a130-1c7dc657b3a5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/aef18cd6aa739768_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: befca2b0-af17-45b9-a5aa-8213942874a0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: befca2b0-af17-45b9-a5aa-8213942874a0
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# bee4f140-0325-4233-a130-1c7dc657b3a5
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
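Since this repository holds a LoRA adapter (`library_name: peft`, `adapter: lora` in the config above), loading presumably goes through PEFT on top of the named base model; a minimal sketch (not from the original card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Hermes-2-Theta-Llama-3-8B"
adapter_id = "aleegis/bee4f140-0325-4233-a130-1c7dc657b3a5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```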
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Zack-Z/qwen3_4bi_cotsft_rs0_0_5cut_ru_cot2_e2 | Zack-Z | "2025-05-06T12:18:20Z" | 0 | 0 | transformers | [
"transformers",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-05-06T12:04:54Z" | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
randomcatuser/Mate_AI_lite1 | randomcatuser | "2025-05-06T12:13:12Z" | 0 | 0 | transformers | [
"transformers",
"help",
"word",
"essay",
"en",
"dataset:wikimedia/wikipedia",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2025-04-28T12:47:52Z" | ---
license: mit
datasets:
- wikimedia/wikipedia
language:
- en
metrics:
- accuracy
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
new_version: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
tags:
- help
- word
- essay
--- |
TienMat999/Llama-3.2-3B-ft-bf16-lora4-v20250505 | TienMat999 | "2025-05-06T12:09:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T12:09:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Semilore12/Pacify | Semilore12 | "2025-05-06T12:06:45Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-06T12:06:45Z" | ---
license: apache-2.0
---
|
mlfoundations-dev/f1_avg_all | mlfoundations-dev | "2025-05-06T12:06:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-04T14:52:34Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: f1_avg_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# f1_avg_all
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/f1_avg_all dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
LeNaM/distilbert-base-uncased-finetuned-imdb | LeNaM | "2025-05-06T12:01:23Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2025-05-06T11:49:59Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: LeNaM/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LeNaM/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8500
- Validation Loss: 2.5967
- Epoch: 0
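A minimal fill-mask sketch (not part of the original card; it assumes the standard `transformers` fill-mask pipeline and DistilBERT's `[MASK]` token):
```python
from transformers import pipeline

fill = pipeline(
    "fill-mask", model="LeNaM/distilbert-base-uncased-finetuned-imdb"
)
print(fill("This movie was absolutely [MASK]."))
```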
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8500 | 2.5967 | 0 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.18.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF | mradermacher | "2025-05-06T12:00:12Z" | 1 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksLab/Anathema-V1-LLaMA-70B",
"base_model:quantized:TareksLab/Anathema-V1-LLaMA-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-05T23:03:15Z" | ---
base_model: TareksLab/Anathema-V1-LLaMA-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TareksLab/Anathema-V1-LLaMA-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
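For the multi-part Q6_K entry in the table below, the parts are plain byte slices of one file, so joining them is simple concatenation; a minimal sketch (file names taken from the table; the `partXofY` convention assumed here is the one these repos follow):
```python
import shutil

parts = [
    "Anathema-V1-LLaMA-70B.i1-Q6_K.gguf.part1of2",
    "Anathema-V1-LLaMA-70B.i1-Q6_K.gguf.part2of2",
]

with open("Anathema-V1-LLaMA-70B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # byte-for-byte append
```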
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Anathema-V1-LLaMA-70B-i1-GGUF/resolve/main/Anathema-V1-LLaMA-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
InstaDeepAI/AbBFN2 | InstaDeepAI | "2025-05-06T12:00:00Z" | 0 | 0 | null | [
"arxiv:2308.07037",
"region:us"
] | null | "2025-03-26T10:30:41Z" | # AbBFN2: A flexible antibody foundation model based on Bayesian Flow Networks
[AbBFN2](https://www.biorxiv.org/content/10.1101/2025.04.29.651170v1) allows for flexible task adaptation by virtue of its ability to condition the generative process on an arbitrary subset of variables. Further, since AbBFN2 is based on the Bayesian Flow Network paradigm, it can jointly model both discrete and continuous variables. Using this architecture, we provide a rich syntax which can be used to interact with the model. Regardless of conditioning information, the model generates all 45 "data modes" at inference time and arbitrary conditioning can be used to define specific tasks.
## License Summary
1. The Licensed Models are **only** available under this License for Non-Commercial Purposes.
2. You are permitted to reproduce, publish, share and adapt the Output generated by the Licensed Model only for Non-Commercial Purposes and in accordance with this License.
3. You may **not** use the Licensed Models or any of its Outputs in connection with:
1. any Commercial Purposes, unless agreed by Us under a separate licence;
2. to train, improve or otherwise influence the functionality or performance of any other third-party derivative model that is commercial or intended for a Commercial Purpose and is similar to the Licensed Models;
3. to create models distilled or derived from the Outputs of the Licensed Models, unless such models are for Non-Commercial Purposes and open-sourced under the same license as the Licensed Models; or
4. in violation of any applicable laws and regulations.
## Getting Started
You can interact with AbBFN2 via:
* **Web Application:** [https://abbfn2.labs.deepchain.bio/](https://abbfn2.labs.deepchain.bio/)
* **Open-Source Repository:** [https://github.com/instadeepai/AbBFN2](https://github.com/instadeepai/AbBFN2)
The instructions below pertain to the open-source repository.
## Prerequisites
- Docker installed on your system
- Sufficient computational resources (TPU/GPU recommended)
- Basic understanding of antibody structure and sequence notation
## Installation
### Hardware Configuration
First, configure your accelerator in the Makefile:
```bash
ACCELERATOR = GPU # Options: CPU, TPU, or GPU
```
Note: Multi-host inference is not supported in this release. Please use single-host settings only.
### Building the Docker Image
Run the following command to build the AbBFN2 Docker image:
```bash
make build
```
This process typically takes 5-20 minutes depending on your hardware.
### For Apple Silicon users
Instead of building the Docker image, build the conda environment directly using:
```bash
conda env create -f environment.yaml
conda activate abbfn2
```
## Usage
AbBFN2 supports three main generation modes, each with its own configuration file in the `experiments/configs/` directory.
In addition to the mode-specific settings, configuration files contain options for loading model weights. By default (`load_from_hf: true`), weights are downloaded from Hugging Face. Optionally, if you have the weights locally, set `load_from_hf: false` and provide the path in `model_weights_path` (e.g., `/app/params.pkl`).
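For example, pointing any of the configs at a local checkpoint would look roughly like this (a minimal sketch; the exact placement of these keys inside the config file is an assumption based on the descriptions above):
```yaml
cfg:
  load_from_hf: false                   # default is true (download weights from Hugging Face)
  model_weights_path: /app/params.pkl   # local weights path, only used when load_from_hf is false
```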
### 1. Unconditional Generation
Generate novel antibody sequences without any constraints. AbBFN2 will generate natural-like antibody sequences matching its training distribution. Note that the metadata labels are also predictions made by the model. For a discussion of the accuracy of these labels, please refer to the AbBFN2 manuscript.
Configuration (`unconditional.yaml`):
```yaml
cfg:
sampling:
num_samples_per_batch: 10 # Number of sequences per batch
num_batches: 1 # Number of batches to generate
sample_fn:
num_steps: 300 # Number of sampling steps (recommended: 300-1000)
```
Run:
```bash
make unconditional # or python experiments/unconditional.py for Apple Silicon users.
```
### 2. Conditional Generation/Inpainting
Generate antibody sequences conditioned on specific attributes. Conditional generation highlights the flexibility of AbBFN2 and allows it to be task-adaptable depending on the exact conditioning data. While any arbitrary combination is possible, conditional generation is primarily used when conditioning on full sequences (referred to as sequence labelling in the manuscript), partial sequences (sequence inpainting), partial sequences and metadata (sequence design), or metadata only (conditional de novo generation). For categorical variables, the set of possible values is found in `src/abbfn2/data_mode_handler/oas_paired/constants.py`. For genes and CDR lengths, only values that appear at least 100 times in the training data are valid. When conditioning on species, human, mouse, or rat can be chosen.
**Disclaimer**: _As discussed in the manuscript, the flexibility of AbBFN2 requires careful consideration of the exact combination of conditioning information for effective generation. For instance, conditioning on a kappa light chain locus V-gene together with a lambda locus J-gene family is unlikely to yield samples of high quality. Such paradoxical combinations can also exist in more subtle ways. Due to the space of possible conditioning information, we have only tested a small subset of such combinations._
Configuration (`inpaint.yaml`):
```yaml
cfg:
input:
num_input_samples: 2 # Number of input samples
dm_overwrites: # Specify values of the data modes
h_cdr1_seq: GYTFTSHA
h_cdr2_seq: ISPYRGDT
h_cdr3_seq: ARDAGVPLDY
sampling:
inpaint_fn:
num_steps: 300 # Number of sampling steps (recommended: 300-1000)
mask_fn:
data_modes: # Specify which data modes to condition on
- "h_cdr1_seq"
- "h_cdr2_seq"
- "h_cdr3_seq"
```
Run:
```bash
make inpaint # or python experiments/inpaint.py for Apple Silicon users.
```
### 3. Sequence Humanization
Convert non-human antibody sequences into humanized versions. This workflow is designed to run a sequence humanisation experiment given a paired, non-human starting sequence. AbBFN2 will be used to introduce mutations to the framework regions of the starting antibody, possibly using several recycling iterations. During sequence humanisation, appropriate human V-gene families to target will also be chosen, but can be manually set by the user too.
Briefly, the humanisation workflow here uses the conditional generation capabilities of AbBFN2 in a sample recycling approach. At each iteration, further mutations are introduced, using a more aggressive starting strategy that is likely to introduce a larger number of mutations. As the sequence becomes more human under the model, fewer mutations are introduced at subsequent steps. Please note that we have found that in most cases, humanisation is achieved within a single recycling iteration. If the model introduces a change to the CDR loops, which can happen in rare cases, these are removed. For a detailed description of the humanisation workflow, please refer to the AbBFN2 manuscript.
Please also note that while we provide the option to manually select V-gene families here, this workflow allows the model to select more appropriate V-gene families during inference. Therefore, the final V-gene families may differ from the initially selected ones. Please also note that due to the data that AbBFN2 is trained on, humanisation will be most reliable when performed on murine or rat sequences. Sequences from other species have not been tested.
Configuration (`humanization.yaml`):
```yaml
cfg:
input:
l_seq: "DIVLTQSPASLAVSLGQRATISCKASQSVDYDGHSYMNWYQQKPGQPPKLLIYAASNLESGIPARFSGSGSGTDFTLNIHPVEEEDAATYYCQQSDENPLTFGTGTKLELK"
h_seq: "QVQLQQSGPELVKPGALVKISCKASGYTFTSYDINWVKQRPGQGLEWIGWIYPGDGSIKYNEKFKGKATLTVDKSSSTAYMQVSSLTSENSAVYFCARRGEYGNYEGAMDYWGQGTTVTVSS"
# h_vfams: null # Optionally, set target v-gene families
# l_vfams: null
sampling:
recycling_steps: 10 # Number of recycling steps (recommended: 5-12)
inpaint_fn:
num_steps: 500 # Number of sampling steps (recommended: 300-1000)
```
Run:
```bash
make humanization # or python experiments/humanization.py for Apple Silicon users.
```
## Data Modes
The data modes supported by AbBFN2 are detailed below.
##### Heavy-Chain IMGT Regions
| Field | Type | Region (IMGT) | Description | Length Range (AA) |
|---------------|--------|-------------------------|--------------------------------------------|-------------------|
| `h_fwr1_seq` | string | FWR1 | Framework region 1 | 18 – 41 |
| `h_fwr2_seq` | string | FWR2 | Framework region 2 | 6 – 30 |
| `h_fwr3_seq` | string | FWR3 | Framework region 3 | 29 – 58 |
| `h_fwr4_seq` | string | FWR4 | Framework region 4 | 3 – 12 |
| `h_cdr1_seq` | string | CDR1 | Complementarity-determining region 1 | 1 – 22 |
| `h_cdr2_seq` | string | CDR2 | Complementarity-determining region 2 | 1 – 25 |
| `h_cdr3_seq` | string | CDR3 | Complementarity-determining region 3 | 2 – 58 |
##### Light-Chain IMGT Regions
| Field | Type | Region (IMGT) | Description | Length Range (AA) |
|---------------|--------|-------------------------|--------------------------------------------|-------------------|
| `l_fwr1_seq` | string | FWR1 | Framework region 1 | 18 – 36 |
| `l_fwr2_seq` | string | FWR2 | Framework region 2 | 11 – 27 |
| `l_fwr3_seq` | string | FWR3 | Framework region 3 | 25 – 48 |
| `l_fwr4_seq` | string | FWR4 | Framework region 4 | 3 – 13 |
| `l_cdr1_seq` | string | CDR1 | Complementarity-determining region 1 | 1 – 20 |
| `l_cdr2_seq` | string | CDR2 | Complementarity-determining region 2 | 1 – 16 |
| `l_cdr3_seq` | string | CDR3 | Complementarity-determining region 3 | 1 – 27 |
##### CDR Length Metrics
Possible values provided in [src/abbfn2/data_mode_handler/oas_paired/constants.py](https://github.com/instadeepai/AbBFN2/tree/main/src/abbfn2/data_mode_handler/oas_paired/constants.py).
| Field | Type | Description |
|-------------|------|---------------------------------|
| `h1_length` | int | CDR1 length (heavy chain) |
| `h2_length` | int | CDR2 length (heavy chain) |
| `h3_length` | int | CDR3 length (heavy chain) |
| `l1_length` | int | CDR1 length (light chain) |
| `l2_length` | int | CDR2 length (light chain) |
| `l3_length` | int | CDR3 length (light chain) |
##### Gene and Family Annotations
Possible values provided in [src/abbfn2/data_mode_handler/oas_paired/constants.py](https://github.com/instadeepai/AbBFN2/tree/main/src/abbfn2/data_mode_handler/oas_paired/constants.py).
| Field | Type | Description |
|---------------|--------|------------------------------------|
| `hv_gene` | string | V gene segment (heavy) |
| `hd_gene` | string | D gene segment (heavy) |
| `hj_gene` | string | J gene segment (heavy) |
| `lv_gene` | string | V gene segment (light) |
| `lj_gene` | string | J gene segment (light) |
| `hv_family` | string | V gene family (heavy) |
| `hd_family` | string | D gene family (heavy) |
| `hj_family` | string | J gene family (heavy) |
| `lv_family` | string | V gene family (light) |
| `lj_family` | string | J gene family (light) |
| `species` | string | One of “human”, “rat”, “mouse” |
| `light_locus` | string | One of “K” (kappa) or “L” (lambda)|
##### TAP Physicochemical Metrics
| Field | Type | Description | Range |
|--------------------|--------|---------------------------------------------|-----------------|
| `tap_psh` | float | Patch hydrophobicity | 72.0 – 300.0 |
| `tap_pnc` | float | Proportion of non-covalent contacts | 0.0 – 10.0 |
| `tap_ppc` | float | Proportion of polar contacts | 0.0 – 7.5 |
| `tap_sfvcsp` | float | Surface-exposed variable-chain charge score | -55.0 – 55.0 |
| `tap_psh_flag` | string | Hydrophobicity flag | “red” / “amber” / “green” |
| `tap_pnc_flag` | string | Non-covalent contacts flag | “red” / “amber” / “green” |
| `tap_ppc_flag` | string | Polar contacts flag | “red” / “amber” / “green” |
| `tap_sfvcsp_flag` | string | Charge score flag | “red” / “amber” / “green” |
##### V- and J- Identity Scores
| Field | Type | Description | Range (%) |
|-----------------|--------|-----------------------------------|---------------|
| `h_v_identity` | float | Heavy-chain V segment identity | 64.0 – 100.0 |
| `h_d_identity` | float | Heavy-chain D segment identity | 74.0 – 100.0 |
| `h_j_identity` | float | Heavy-chain J segment identity | 74.0 – 100.0 |
| `l_v_identity` | float | Light-chain V segment identity | 66.0 – 100.0 |
| `l_j_identity` | float | Light-chain J segment identity | 77.0 – 100.0 |
## Citation
If you use AbBFN2 in your research, please cite our work:
```bibtex
@article{Guloglu_etal_AbBFN2,
title={AbBFN2: A flexible antibody foundation model based on Bayesian Flow Networks},
author={Bora Guloglu and Miguel Bragan\c{c}a and Alex Graves and Scott Cameron and Timothy Atkinson and Liviu Copoiu and Alexandre Laterre and Thomas D Barrett},
journal={bioRxiv},
year={2025},
url={https://www.biorxiv.org/content/10.1101/2025.04.29.651170v1}
}
```
## Related Papers
- **Bayesian Flow Networks:** [Graves et al., 2023](https://arxiv.org/abs/2308.07037)
- **Protein Sequence Modelling with Bayesian Flow Networks (ProtBFN/AbBFN):**
- Paper: [Atkinson et al., 2024](https://www.biorxiv.org/content/10.1101/2024.09.24.614734v1)
- GitHub Repository: [instadeepai/protein-sequence-bfn](https://github.com/instadeepai/protein-sequence-bfn)
- Hugging Face Model: [InstaDeepAI/protein-sequence-bfn](https://huggingface.co/InstaDeepAI/protein-sequence-bfn)
## Acknowledgements
The development of this library was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
|
cbspace/gpt | cbspace | "2025-05-06T11:56:08Z" | 2 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-04T10:27:15Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
sboughorbel/gemma-2-9b-it-SimPO-L20-k100-lr1e-04-CCLoss | sboughorbel | "2025-05-06T11:52:29Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-06T11:49:41Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
ljnlonoljpiljm/florence-2-base-ft-flocci-mlx | ljnlonoljpiljm | "2025-05-06T11:52:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"mlx",
"custom_code",
"autotrain_compatible",
"region:us"
] | text-generation | "2025-05-06T11:51:50Z" | ---
library_name: transformers
tags:
- mlx
---
# ljnlonoljpiljm/florence-2-base-ft-flocci-mlx
This model was converted to MLX format from [`ljnlonoljpiljm/florence-2-base-ft-flocci`](https://huggingface.co/ljnlonoljpiljm/florence-2-base-ft-flocci) using mlx-vlm version **0.1.13**.
Refer to the [original model card](https://huggingface.co/ljnlonoljpiljm/florence-2-base-ft-flocci) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model ljnlonoljpiljm/florence-2-base-ft-flocci-mlx --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
inclusionAI/Ming-Lite-Omni-Preview | inclusionAI | "2025-05-06T11:48:36Z" | 0 | 3 | null | [
"safetensors",
"bailingmm",
"custom_code",
"base_model:inclusionAI/Ling-lite",
"base_model:finetune:inclusionAI/Ling-lite",
"license:mit",
"region:us"
] | null | "2025-05-02T02:32:18Z" | ---
license: mit
base_model:
- inclusionAI/Ling-lite
---
# Ming-Lite-Omni-Preview
### Model Description
Ming-Lite-Omni-Preview employs a unified Mixture-of-Experts (MoE) framework for multimodal sequence modeling, which empowers [Ling](https://github.com/inclusionAI/Ling) LLMs to acquire comprehensive cross-modal understanding and generation capabilities. Specifically, Ming-Lite-Omni-Preview can process arbitrary combinations of audio, video, image, and text modalities as input, generating multimodal sequences that interleave audio, image, or text outputs, thereby enabling an advanced, interactive real-time experience. To naturally handle the diverse modalities, we have enhanced Ling-Lite-MoE by incorporating modality-specific routers for each modality. As a result, Ming-Lite-Omni-Preview excels at handling information from diverse modalities and is highly scalable.
### Key Features
- **Omni and Novel MoE Architecture**: An innovative Omni architecture based on Mixture of Experts (MoE) that achieves competitive performance across multiple modality benchmarks.
- **Video understanding**: Supports dynamic KV-cache compression of visual tokens, enabling the understanding of hour-long videos while also providing more detailed understanding of short clips of a few seconds.
- **Natural Speech Generation and Fine-grained Voice Dialogue**: Supports dialect understanding and generation in end-to-end conversations, enables one-shot voice cloning, and enhances prosody through audio tokenizer compression.
## Model Downloads
You can download the model from both Huggingface and ModelScope.
<div align="center">
| **Model** | **Input modality** | **Output modality** | **Download** |
|:-------------------------------------|:--------------------------:|:-----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ming-Lite-Omni-Preview | Image, text, video, audio | Image, text, audio | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ming-Lite-Omni-Preview) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ming-Lite-Omni-Preview) |
</div>
## Quickstart
Please download our model following [Model Downloads](#model-downloads), then refer to the following code examples to run the Ming-Lite-Omni-Preview model.
```python
import os
import torch
from transformers import AutoProcessor
from modeling_bailingmm import BailingMMNativeForConditionalGeneration
# build model
model = BailingMMNativeForConditionalGeneration.from_pretrained(
"inclusionAI/Ming-Lite-Omni-Preview",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True
).to("cuda")
assets_path = YOUR_ASSETS_PATH
# build processor
processor = AutoProcessor.from_pretrained("inclusionAI/Ming-Lite-Omni-Preview", trust_remote_code=True)
```
```python
# qa
messages = [
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "请详细介绍鹦鹉的生活习性。"}
],
},
]
# Output:
# 鹦鹉是一种非常聪明和社交性强的鸟类,它们的生活习性非常丰富和有趣。以下是一些关于鹦鹉生活习性的详细介绍:
# ### 1. **栖息地**
# 鹦鹉主要分布在热带和亚热带地区,包括非洲、亚洲、澳大利亚和南美洲。它们通常生活在森林、草原、沙漠和城市环境中。不同种类的鹦鹉对栖息地的要求有所不同,但大多数鹦鹉喜欢有丰富植被和水源的地方。
# ### 2. **饮食**
# 鹦鹉是杂食性动物,它们的饮食非常多样化。它们的食物包括种子、坚果、水果、蔬菜、花蜜和昆虫。鹦鹉的喙非常强壮,能够轻松地打开坚硬的果壳和坚果。一些鹦鹉还会吃泥土或沙子,以帮助消化和补充矿物质。
# ......
```
```python
# image qa
messages = [
{
"role": "HUMAN",
"content": [
{"type": "image", "image": os.path.join(assets_path, "flowers.jpg")},
{"type": "text", "text": "What kind of flower is this?"},
],
},
]
# Output:
# The flowers in this image are forget-me-nots. These delicate blooms are known for their small, five-petaled flowers that come in various shades of blue, pink, and white.
```
To enable thinking before response, adding the following system prompt before your question:
```python
cot_prompt = "SYSTEM: You are a helpful assistant. When the user asks a question, your response must include two parts: first, the reasoning process enclosed in <thinking>...</thinking> tags, then the final answer enclosed in <answer>...</answer> tags. The critical answer or key result should be placed within \\boxed{}.\n"
# And your input message should be like this:
messages = [
{
"role": "HUMAN",
"content": [
{"type": "image", "image": os.path.join(assets_path, "reasoning.png")},
{"type": "text", "text": cot_prompt + "In the rectangle $A B C D$ pictured, $M_{1}$ is the midpoint of $D C, M_{2}$ the midpoint of $A M_{1}, M_{3}$ the midpoint of $B M_{2}$ and $M_{4}$ the midpoint of $C M_{3}$. Determine the ratio of the area of the quadrilateral $M_{1} M_{2} M_{3} M_{4}$ to the area of the rectangle $A B C D$.\nChoices:\n(A) $\frac{7}{16}$\n(B) $\frac{3}{16}$\n(C) $\frac{7}{32}$\n(D) $\frac{9}{32}$\n(E) $\frac{1}{5}$"},
],
},
]
# Output:
# \<think\>\nOkay, so I have this problem about a rectangle ABCD ... (thinking process omitted) ... So, the correct answer is C.\n\</think\>\n\<answer\>\\boxed{C}\</answer\>\n\n
```
```python
# video qa
messages = [
{
"role": "HUMAN",
"content": [
{"type": "video", "video": os.path.join(assets_path, "yoga.mp4")},
{"type": "text", "text": "What is the woman doing?"},
],
},
]
# Output:
# The image shows a woman performing a yoga pose on a rooftop. She's in a dynamic yoga pose, with her arms and legs extended in various positions.
```
```python
# multi-turn chat
messages = [
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "中国的首都是哪里?"},
],
},
{
"role": "ASSISTANT",
"content": [
{"type": "text", "text": "北京"},
],
},
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "它的占地面积是多少?有多少常住人口?"},
],
},
]
# Output:
# 北京市的总面积约为16,410.54平方公里,常住人口约为21,542,000人。
```
```python
# Preparation for inference
text = processor.apply_chat_template(messages, add_generation_prompt=True)
image_inputs, video_inputs, audio_inputs = processor.process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
audios=audio_inputs,
return_tensors="pt",
)
inputs = inputs.to(model.device)
for k in inputs.keys():
if k == "pixel_values" or k == "pixel_values_videos" or k == "audio_feats":
inputs[k] = inputs[k].to(dtype=torch.bfloat16)
# call generate
generated_ids = model.generate(
**inputs,
max_new_tokens=512,
use_cache=False,
eos_token_id=processor.gen_terminator,
)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(output_text)
```
```python
# ASR
messages = [
{
"role": "HUMAN",
"content": [
{"type": "text", "text": "Please recognize the language of this speech and transcribe it. Format: oral."},
{"type": "audio", "audio": 'data/wavs/BAC009S0915W0292.wav'},
],
},
]
outputs = model.generate(messages, max_new_tokens=512)
print(outputs)
```
```python
# speech2speech
messages = [
{
"role": "HUMAN",
"content": [
{"type": "audio", "audio": 'data/wavs/BAC009S0915W0292.wav'},
],
},
]
outputs = model.generate(messages, max_new_tokens=512, speaker='luna', output_audio_path='out.wav', output_audio=True)
print(outputs)
```
## Evaluation
### Image benchmark
<div align="center">
| Benchmarks | Ming-Lite-Omni-Preview | Qwen2.5-VL-7B-Instruct | InternVL2.5-8B-MPO |
|:------------------|:----------------------:|:---------------------------:|:------------------:|
| AI2D | 83.84 | 83.9 | <b>84.5</b> |
| HallusionBench | <b>54.68</b> | 51.9 | 51.7 |
| MMBench_TEST_V11 | 79.63 | <b>84.3</b> | 82.0 |
| MMMU | 57.0 | <b>58.6</b> | 54.8 |
| MMStar | 62.0 | 63.9 | <b>65.2</b> |
| MMVet | <b>73.6</b> | 67.1 | 68.1 |
| MathVista | <b>69.0</b> | 68.2 | 67.9 |
| OCRBench | 87.9 | 86.4 | <b>88.2</b> |
| Average | <b>70.96</b> | 70.5 | 70.3 |
</div>
#### Object Recognition
<div align="center">
| Object Recognition | Ming-Lite-Omni-Preview | Qwen2.5-VL-7B | InternVL-2.5-8B |
|:----------------------------|:----------------------:|:-------------:|:---------------:|
| Plants | 52.1 | <b>55.3</b> | 32.8 |
| Animals | 52.6 | <b>54.8</b> | 36.5 |
| Home appliances & furniture | 93.5 | <b>97.4</b> | 90.9 |
| Personal Electronics | <b>96.1</b> | 95.1 | 93.2 |
| Food & Ingredients | 57.5 | <b>60.0</b> | 48.7 |
| Tableware | <b>96.6</b> | 94.9 | 88.1 |
| Vehicles | 31.9 | <b>40.9</b> | 31.9 |
| Average | 68.6 | <b>71.2</b> | 60.3 |
</div>
### Video benchmark
<div align="center">
| Benchmarks | Ming-Lite-Omni-Preview | Qwen2.5VL-7B |
|:-------------------|:------------------------:|:----------------:|
| VideoMME wo/w sub. | 63.9/67.6 | <b>65.1/71.6</b> |
| MVBench | 67.0 | <b>72.0</b> |
| Video-MMMU | 45.4 | <b>47.44</b> |
| LongVideoBench | 53.7 | <b>60.0</b> |
</div>
### Audio benchmark
#### SpeechQA
<div align="center">
| Model | AlpacaEval | CommonEval | SD-QA | MMSU | OpenBookQA | IFEval | AdvBench |
|:-------------------------|:-----------:|:-----------:|:------------:|:------------:|:------------:|:------------:|:-------------:|
| Qwen2-Audio-chat | 3.69 | 3.40 | 35.35 | 35.43 | 49.01 | 22.57 | 98.85 |
| Baichuan-Audio | 4.00 | 3.39 | 49.64 | 48.80 | 63.30 | 41.32 | 86.73 |
| GLM-4-Voice | 4.06 | 3.48 | 43.31 | 40.11 | 52.97 | 24.91 | 88.08 |
| Kimi-Audio | 4.46 | <b>3.97</b> | <b>63.12</b> | 62.17 | <b>83.52</b> | <b>61.10</b> | <b>100.00</b> |
| Qwen2.5-Omni | <b>4.49</b> | 3.93 | 55.71 | <b>61.32</b> | 81.10 | 52.87 | 99.42 |
| Ming-Lite-Omni-Preview | 4.25 | 3.88 | 58.95 | 46.06 | 60.00 | 46.71 | 96.53 |
</div>
#### ASR
<div align="center">
| **Model** | **Aishell-1** | **Aishell-2 ios** | **Wenetspeech test-net** | **Wenet test-meeting** | **Librispeech test-clean** | **Librispeech test-other** |
|:------------------------|:-------------:|:-----------------:|:------------------------:|:----------------------:|:--------------------------:|:--------------------------:|
| Whisper Large-v3 | 5.14 | 4.76 | 9.68 | 18.54 | 1.9 | 3.65 |
| Qwen2-Audio | 1.53 | 3.06 | 7.72 | 8.4 | <b>1.6</b> | 3.6 |
| GLM-4-voice Base | 2.46 | - | - | - | 2.82 | 7.66 |
| Baichuan-Omni-1.5 | - | - | 6.9 | 8.4 | - | - |
| Qwen2.5-Omni | <b>1.18</b> | <b>2.36</b> | <b>5.9</b> | 7.7 | 1.8 | <b>3.4</b> |
| Ming-Lite-Omni-Preview | 1.62 | 2.82 | 6.23 | <b>6.9</b> | 2.34 | 5.74 |
</div>
### Knowledge
<div align="center">
| Model | InfoSeek_H-mean | InfoSeek_unseen_question | InfoSeek_unseen_entity |
|:--------------------------|:---------------:|:------------------------:|:----------------------:|
| GPT-4o | <b>36.05</b> | - | - |
| PaLI-X | 22.06 | 23.5 | 20.8 |
| Qwen2.5-vl-32B | 19.35 | 20.55 | 18.28 |
| Ming-Lite-Omni-Preview | 27.3 | 28.9 | 25.9 |
</div>
### OCR&GUI
<div align="center">
| Model | Ming-Lite-Omni-Preview | Qwen2.5-VL-7B-Instruct |
|:-------------------|:----------------------:|:----------------------:|
| ChartQA_TEST | 85.2 | <b>87.3</b> |
| DocVQA_TEST | 93.2 | <b>95.7</b> |
| OCRBenchV2_en/zh | 52.2/51.6 | <b>56.3/57.2</b> |
| OmniDocBench↓ | 34.7/34.5 | <b>30.8/39.8</b> |
| TextVQA_VAL | 82.36 | <b>84.9</b> |
| ScreenSpot | 79.3 | <b>84.7</b> |
</div>
## Model Sources
- **Github Repository:** https://github.com/inclusionAI/Ming
|
Gina261/llm-assn1-luxia-llama-7.60 | Gina261 | "2025-05-06T11:44:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T11:41:58Z" | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Gina261
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlfoundations-dev/e1_math_all_qwq_together_3k | mlfoundations-dev | "2025-05-06T11:40:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T05:05:23Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: e1_math_all_qwq_together_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e1_math_all_qwq_together_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_math_all_qwq_together_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf | RichardErkhov | "2025-05-06T11:39:38Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T10:14:02Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mpg27_mistral7bv3_sft_ogd_rms_epoch2 - GGUF
- Model creator: https://huggingface.co/yjwon/
- Original model: https://huggingface.co/yjwon/mpg27_mistral7bv3_sft_ogd_rms_epoch2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q2_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q2_K.gguf) | Q2_K | 2.54GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ3_XS.gguf) | IQ3_XS | 2.82GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q3_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q3_K.gguf) | Q3_K | 3.28GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_K.gguf) | Q4_K | 4.07GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_0.gguf) | Q5_0 | 4.66GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_K_S.gguf) | Q5_K_S | 4.66GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_K.gguf) | Q5_K | 4.78GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_1.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q6_K.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q6_K.gguf) | Q6_K | 5.54GB |
| [mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q8_0.gguf](https://huggingface.co/RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf/blob/main/mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q8_0.gguf) | Q8_0 | 7.17GB |
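For convenience, a single quant can also be fetched programmatically with `huggingface_hub` (an illustrative sketch; the Q4_K_M file is just one example from the table above):

```python
from huggingface_hub import hf_hub_download

# Download one of the GGUF files listed above (Q4_K_M as an example).
path = hf_hub_download(
    repo_id="RichardErkhov/yjwon_-_mpg27_mistral7bv3_sft_ogd_rms_epoch2-gguf",
    filename="mpg27_mistral7bv3_sft_ogd_rms_epoch2.Q4_K_M.gguf",
)
print(path)
```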
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
POWERHACK/Qwen2.5-7B-Instruct-1M-Q8_0-GGUF | POWERHACK | "2025-05-06T11:38:40Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct-1M",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct-1M",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-05-06T11:23:24Z" | ---
base_model: Qwen/Qwen2.5-7B-Instruct-1M
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# POWERHACK/Qwen2.5-7B-Instruct-1M-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct-1M`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo POWERHACK/Qwen2.5-7B-Instruct-1M-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-1m-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo POWERHACK/Qwen2.5-7B-Instruct-1M-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-1m-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo POWERHACK/Qwen2.5-7B-Instruct-1M-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-1m-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo POWERHACK/Qwen2.5-7B-Instruct-1M-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-1m-q8_0.gguf -c 2048
```
|
mlfoundations-dev/e1_science_longest_qwq_together_3k | mlfoundations-dev | "2025-05-06T11:38:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T05:47:46Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: e1_science_longest_qwq_together_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e1_science_longest_qwq_together_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_science_longest_qwq_together_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
vmpsergio/4bc7431f-3285-44ae-b3ad-30e6d4a4f279 | vmpsergio | "2025-05-06T11:36:10Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T11:23:06Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4bc7431f-3285-44ae-b3ad-30e6d4a4f279
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 64f4fe47cb913d90_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/64f4fe47cb913d90_train_data.json
type:
field_instruction: en
field_output: ru
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/4bc7431f-3285-44ae-b3ad-30e6d4a4f279
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/64f4fe47cb913d90_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b2545e2-2a39-47ed-a6cf-016d3fd1fecf
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 0b2545e2-2a39-47ed-a6cf-016d3fd1fecf
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4bc7431f-3285-44ae-b3ad-30e6d4a4f279
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.6798 | 0.0034 | 400 | 4.4002 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phospho-app/ACT_simple_pawn_move_v4_100-dqyleuia04 | phospho-app | "2025-05-06T11:34:29Z" | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | "2025-05-06T11:28:00Z" |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [dopaul/simple_pawn_move_v4](https://huggingface.co/datasets/dopaul/simple_pawn_move_v4)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 80
- **Training steps**: 100
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
Bouquets/StrikeGPT-R1-Zero-8B-GGUF | Bouquets | "2025-05-06T11:31:47Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"base_model:Bouquets/StrikeGPT-R1-Zero-8B",
"base_model:quantized:Bouquets/StrikeGPT-R1-Zero-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T11:26:58Z" | ---
base_model: Bouquets/StrikeGPT-R1-Zero-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Bouquets
- **License:** apache-2.0
- **Finetuned from model :** Bouquets/StrikeGPT-R1-Zero-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jahyungu/Qwen2.5-7B-Instruct_ifeval-like-data_random | jahyungu | "2025-05-06T11:31:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T08:14:19Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-7B-Instruct_ifeval-like-data_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct_ifeval-like-data_random
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
speakleash/Bielik-11B-v2.3-Instruct | speakleash | "2025-05-06T11:29:29Z" | 7,667 | 49 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"conversational",
"pl",
"arxiv:2505.02410",
"arxiv:2005.01643",
"arxiv:2309.11235",
"arxiv:2006.09092",
"arxiv:2402.13228",
"arxiv:2410.18565",
"base_model:speakleash/Bielik-11B-v2",
"base_model:merge:speakleash/Bielik-11B-v2",
"base_model:speakleash/Bielik-11B-v2.0-Instruct",
"base_model:merge:speakleash/Bielik-11B-v2.0-Instruct",
"base_model:speakleash/Bielik-11B-v2.1-Instruct",
"base_model:merge:speakleash/Bielik-11B-v2.1-Instruct",
"base_model:speakleash/Bielik-11B-v2.2-Instruct",
"base_model:merge:speakleash/Bielik-11B-v2.2-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-30T12:45:27Z" | ---
license: apache-2.0
base_model:
- speakleash/Bielik-11B-v2
- speakleash/Bielik-11B-v2.0-Instruct
- speakleash/Bielik-11B-v2.1-Instruct
- speakleash/Bielik-11B-v2.2-Instruct
language:
- pl
library_name: transformers
tags:
- merge
- mergekit
inference:
parameters:
temperature: 0.2
widget:
- messages:
- role: user
content: Co przedstawia polskie godło?
extra_gated_description: If you want to learn more about how you can use the model, please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
---
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-11B-v2/raw/main/speakleash_cyfronet.png">
</p>
# Bielik-11B-v2.3-Instruct
Bielik-11B-v2.3-Instruct is a generative text model featuring 11 billion parameters.
It is a linear merge of the [Bielik-11B-v2.0-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct), [Bielik-11B-v2.1-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct),
and [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) models, which are instruct fine-tuned versions of the [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2).
The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH.
Developed and trained on Polish text corpora, which has been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure,
specifically within the PLGrid environment, and more precisely, the HPC centers: ACK Cyfronet AGH.
The creation and training of the Bielik-11B-v2.3-Instruct was propelled by the support of computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputers, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes.
As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.
📚 Technical report: https://arxiv.org/abs/2505.02410
🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/
<span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.
## Model
The [SpeakLeash](https://speakleash.org/) team is working on their own set of instructions in Polish, which is continuously being expanded and refined by annotators. A portion of these instructions, which had been manually verified and corrected, has been utilized for training purposes. Moreover, due to the limited availability of high-quality instructions in Polish, synthetic instructions were generated with [Mixtral 8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) and used in training. The dataset used for training comprised over 20 million instructions, consisting of more than 10 billion tokens. The instructions varied in quality, leading to a deterioration in the model’s performance. To counteract this while still allowing ourselves to utilize the aforementioned datasets, several improvements were introduced:
* Weighted token-level loss - a strategy inspired by [offline reinforcement learning](https://arxiv.org/abs/2005.01643) and [C-RLFT](https://arxiv.org/abs/2309.11235) (see the sketch after this list)
* Adaptive learning rate inspired by the study on [Learning Rates as a Function of Batch Size](https://arxiv.org/abs/2006.09092)
* Masked prompt tokens
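To make the first of these concrete, a weighted token-level loss can be sketched as ordinary cross-entropy scaled by per-token weights (a minimal illustration only, not the actual ALLaMo implementation; the tensor shapes and weighting scheme are assumptions):

```python
import torch
import torch.nn.functional as F

def weighted_token_loss(logits: torch.Tensor,
                        targets: torch.Tensor,
                        weights: torch.Tensor) -> torch.Tensor:
    """Per-token cross-entropy scaled by per-token quality weights.

    logits:  (batch, seq_len, vocab) raw model outputs
    targets: (batch, seq_len) token ids, with ignored positions set to -100
    weights: (batch, seq_len) quality weights, e.g. higher for manually
             verified instructions and 0 for masked prompt tokens
    """
    per_token = F.cross_entropy(
        logits.transpose(1, 2),  # (batch, vocab, seq_len), as F.cross_entropy expects
        targets,
        reduction="none",
        ignore_index=-100,
    )
    mask = (targets != -100).float()
    weighted = per_token * weights * mask
    return weighted.sum() / (weights * mask).sum().clamp(min=1e-8)
```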
To align the model with user preferences we tested many different techniques: DPO, PPO, KTO, SimPO. Finally the [DPO-Positive](https://arxiv.org/abs/2402.13228) method was employed, utilizing both generated and manually corrected examples, which were scored by a metamodel. A dataset comprising over 66,000 examples of varying lengths was used to address different aspects of response style. It was filtered and evaluated by the reward model to select instructions with the right level of difference between the chosen and rejected responses. The novelty introduced in DPO-P was the use of multi-turn conversations.
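For orientation, one common formulation of the DPO-Positive objective looks as follows (a hedged sketch based on the cited paper, not the exact training code used here; `beta` and `lam` are placeholder values):

```python
import torch
import torch.nn.functional as F

def dpo_positive_loss(pol_chosen_logp, pol_rejected_logp,
                      ref_chosen_logp, ref_rejected_logp,
                      beta: float = 0.1, lam: float = 50.0) -> torch.Tensor:
    """DPO-Positive objective given summed sequence log-probabilities."""
    chosen_ratio = pol_chosen_logp - ref_chosen_logp        # log pi/pi_ref, chosen
    rejected_ratio = pol_rejected_logp - ref_rejected_logp  # log pi/pi_ref, rejected
    # Penalty discourages the chosen log-prob from falling below the reference.
    penalty = torch.clamp(ref_chosen_logp - pol_chosen_logp, min=0.0)
    logits = beta * (chosen_ratio - rejected_ratio - lam * penalty)
    return -F.logsigmoid(logits).mean()
```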
Bielik instruct models have been trained with the use of an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo) implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with an architecture similar to LLaMA and Mistral in a fast and efficient way.
Bielik-11B-v2.3-Instruct is a merge of the [Bielik-11B-v2.0-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct), [Bielik-11B-v2.1-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct), and [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) models. The merge was performed in float16 precision by [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/) using [mergekit](https://github.com/cg123/mergekit).
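For reference, a linear merge of this kind can be expressed as a mergekit configuration roughly like the one below (a sketch only: the actual merge weights used for Bielik-11B-v2.3-Instruct are not stated in this card, so the equal weights are an assumption):

```yaml
# Hypothetical mergekit config for a linear merge in float16.
models:
  - model: speakleash/Bielik-11B-v2.0-Instruct
    parameters:
      weight: 1.0
  - model: speakleash/Bielik-11B-v2.1-Instruct
    parameters:
      weight: 1.0
  - model: speakleash/Bielik-11B-v2.2-Instruct
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
```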
### Model description:
* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Merged from:** [Bielik-11B-v2.0-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct), [Bielik-11B-v2.1-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.1-Instruct), [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)
### Quantized models:
We know that some people want to explore smaller models or don't have the resources to run a full model. Therefore, we have prepared quantized versions of the Bielik-11B-v2.3-Instruct model in separate repositories:
- [GGUF - Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct-GGUF)
- [GPTQ - 4bit](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct-GPTQ)
- [FP8](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct-FP8) (vLLM, SGLang - Ada Lovelace, Hopper optimized)
- [GGUF - experimental - IQ imatrix IQ1_M, IQ2_XXS, IQ3_XXS, IQ4_XS and calibrated Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct-GGUF-IQ-Imatrix)
Please note that quantized models may offer lower-quality generated answers compared to the full-sized variants.
### Chat template
Bielik-11B-v2.3-Instruct uses [ChatML](https://github.com/cognitivecomputations/OpenChatML) as the prompt format.
E.g.
```
prompt = "<s><|im_start|> user\nJakie mamy pory roku?<|im_end|> \n<|im_start|> assistant\n"
completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|> \n"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model_name = "speakleash/Bielik-11B-v2.3-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
messages = [
{"role": "system", "content": "Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim."},
{"role": "user", "content": "Jakie mamy pory roku w Polsce?"},
{"role": "assistant", "content": "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."},
{"role": "user", "content": "Która jest najcieplejsza?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = input_ids.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
Fully formatted input conversation produced by `apply_chat_template` for the previous example:
```
<s><|im_start|> system
Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim.<|im_end|>
<|im_start|> user
Jakie mamy pory roku w Polsce?<|im_end|>
<|im_start|> assistant
W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|>
<|im_start|> user
Która jest najcieplejsza?<|im_end|>
```
## Evaluation
Bielik-11B-v2.3-Instruct has been evaluated on several benchmarks to assess its performance across various tasks and languages. These benchmarks include:
1. Open PL LLM Leaderboard
2. Open LLM Leaderboard
3. Polish MT-Bench
4. Polish EQ-Bench (Emotional Intelligence Benchmark)
5. MixEval
The following sections provide detailed results for each of these benchmarks, demonstrating the model's capabilities in both Polish and English language tasks.
### Open PL LLM Leaderboard
Models have been evaluated on the [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) in a 5-shot setting. The benchmark evaluates models on NLP tasks like sentiment analysis, categorization, and text classification, but does not test chat skills. The Average column is the mean score across all tasks, normalized by baseline scores.
| Model | Parameters (B)| Average |
|---------------------------------|------------|---------|
| Meta-Llama-3.1-405B-Instruct-FP8,API | 405 | 69.44 |
| Mistral-Large-Instruct-2407 | 123 | 69.11 |
| Qwen2-72B-Instruct | 72 | 65.87 |
| **Bielik-11B-v2.3-Instruct** | **11** | **65.71** |
| Bielik-11B-v2.2-Instruct | 11 | 65.57 |
| Meta-Llama-3.1-70B-Instruct | 70 | 65.49 |
| Bielik-11B-v2.1-Instruct | 11 | 65.45 |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 65.23 |
| Bielik-11B-v2.0-Instruct | 11 | 64.98 |
| Meta-Llama-3-70B-Instruct | 70 | 64.45 |
| Athene-70B | 70 | 63.65 |
| WizardLM-2-8x22B | 141 | 62.35 |
| Qwen1.5-72B-Chat | 72 | 58.67 |
| Qwen2-57B-A14B-Instruct | 57 | 56.89 |
| glm-4-9b-chat | 9 | 56.61 |
| aya-23-35B | 35 | 56.37 |
| Phi-3.5-MoE-instruct | 41.9 | 56.34 |
| openchat-3.5-0106-gemma | 7 | 55.69 |
| Mistral-Nemo-Instruct-2407 | 12 | 55.27 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.24 |
| Mixtral-8x7B-Instruct-v0.1 | 46.7 | 55.07 |
| Bielik-7B-Instruct-v0.1 | 7 | 44.70 |
| trurl-2-13b-academic | 13 | 36.28 |
| trurl-2-7b | 7 | 26.93 |
The results from the Open PL LLM Leaderboard demonstrate the exceptional performance of Bielik-11B-v2.3-Instruct:
1. Superior performance in its class: Bielik-11B-v2.3-Instruct outperforms all other models with less than 70B parameters. This is a significant achievement, showcasing its efficiency and effectiveness despite having fewer parameters than many competitors.
2. Competitive with larger models: with a score of 65.71, Bielik-11B-v2.3-Instruct performs on par with models in the 70B parameter range. This indicates that it achieves comparable results to much larger models, demonstrating its advanced architecture and training methodology.
3. Substantial improvement over previous version: the model shows a marked improvement over its predecessor, Bielik-7B-Instruct-v0.1, which scored 44.70. This leap in performance highlights the successful enhancements and optimizations implemented in this newer version.
4. Leading position for Polish language models: in the context of Polish language models, Bielik-11B-v2.3-Instruct stands out as a leader. There are no other competitive models specifically tailored for the Polish language that match its performance, making it a crucial resource for Polish NLP tasks.
These results underscore Bielik-11B-v2.3-Instruct's position as a state-of-the-art model for Polish language processing, offering high performance with relatively modest computational requirements.
#### Open PL LLM Leaderboard - Generative Tasks Performance
This section presents a focused comparison of generative Polish language task performance between Bielik models and GPT-3.5. The evaluation is limited to generative tasks due to the constraints of assessing OpenAI models. The comprehensive nature and associated costs of the benchmark explain the limited number of models evaluated.
| Model | Parameters (B) | Average (generative) |
|-------------------------------|----------------|----------------------|
| **Bielik-11B-v2.3-Instruct** | 11 | **67.47** |
| Bielik-11B-v2.1-Instruct | 11 | 66.58 |
| Bielik-11B-v2.2-Instruct | 11 | 66.11 |
| Bielik-11B-v2.0-Instruct | 11 | 65.58 |
| gpt-3.5-turbo-instruct | Unknown | 55.65 |
The performance variation among Bielik versions is minimal, indicating consistent quality across iterations. Bielik-11B-v2.3-Instruct demonstrates an impressive 21.2% performance advantage over GPT-3.5.
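For reference, the 21.2% figure is the relative gain over GPT-3.5's generative score:

$$\frac{67.47 - 55.65}{55.65} \approx 0.212$$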
### Open LLM Leaderboard
The Open LLM Leaderboard evaluates models on various English language tasks, providing insights into the model's performance across different linguistic challenges.
| Model | AVG | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu | winogrande | gsm8k |
|--------------------------|-------|---------------|-----------|----------------|-------|------------|-------|
| Bielik-11B-v2.2-Instruct | 69.86 | 59.90 | 80.16 | 58.34 | 64.34 | 75.30 | 81.12 |
| **Bielik-11B-v2.3-Instruct** | **69.82** | 59.30 | 80.11 | 57.42 | 64.57 | 76.24 | 81.27 |
| Bielik-11B-v2.1-Instruct | 69.82 | 59.56 | 80.20 | 59.35 | 64.18 | 75.06 | 80.59 |
| Bielik-11B-v2.0-Instruct | 68.04 | 58.62 | 78.65 | 54.65 | 63.71 | 76.32 | 76.27 |
| Bielik-11B-v2 | 65.87 | 60.58 | 79.84 | 46.13 | 63.06 | 77.82 | 67.78 |
| Mistral-7B-Instruct-v0.2 | 65.71 | 63.14 | 84.88 | 68.26 | 60.78 | 77.19 | 40.03 |
| Bielik-7B-Instruct-v0.1 | 51.26 | 47.53 | 68.91 | 49.47 | 46.18 | 65.51 | 29.95 |
Bielik-11B-v2.3-Instruct shows impressive performance on English language tasks:
1. Significant improvement over its base model (4-point increase).
2. Substantial 18-point improvement over Bielik-7B-Instruct-v0.1.
These results demonstrate Bielik-11B-v2.3-Instruct's versatility in both Polish and English, highlighting the effectiveness of its instruction tuning process.
### Polish MT-Bench
The Bielik-11B-v2.3-Instruct (16-bit) model was also evaluated using the MT-Bench benchmark. The model was assessed with both the English version (the original, without modifications) and the Polish version created by SpeakLeash (tasks and evaluation in Polish, with task content adapted to the context of the Polish language).
#### MT-Bench English
| Model | Score |
|-----------------|----------|
| Bielik-11B-v2.1 | 8.537500 |
| **Bielik-11B-v2.3** | **8.531250** |
| Bielik-11B-v2.2 | 8.390625 |
| Bielik-11B-v2.0 | 8.159375 |
#### MT-Bench Polish
| Model | Parameters (B) | Score |
|-------------------------------------|----------------|----------|
| Qwen2-72B-Instruct | 72 | 8.775000 |
| Mistral-Large-Instruct-2407 | 123 | 8.662500 |
| gemma-2-27b-it | 27 | 8.618750 |
| **Bielik-11B-v2.3-Instruct** | **11** | **8.556250** |
| Mixtral-8x22b | 141 | 8.231250 |
| Meta-Llama-3.1-405B-Instruct | 405 | 8.168750 |
| Meta-Llama-3.1-70B-Instruct | 70 | 8.150000 |
| Bielik-11B-v2.2-Instruct | 11 | 8.115625 |
| Bielik-11B-v2.1-Instruct | 11 | 7.996875 |
| gpt-3.5-turbo | Unknown | 7.868750 |
| Mixtral-8x7b | 46.7 | 7.637500 |
| Bielik-11B-v2.0-Instruct | 11 | 7.562500 |
| Mistral-Nemo-Instruct-2407 | 12 | 7.368750 |
| openchat-3.5-0106-gemma | 7 | 6.812500 |
| Mistral-7B-Instruct-v0.2 | 7 | 6.556250 |
| Meta-Llama-3.1-8B-Instruct | 8 | 6.556250 |
| Bielik-7B-Instruct-v0.1 | 7 | 6.081250 |
| Mistral-7B-Instruct-v0.3 | 7 | 5.818750 |
| Polka-Mistral-7B-SFT | 7 | 4.518750 |
| trurl-2-7b | 7 | 2.762500 |
Key observations on Bielik-11B-v2.3 performance:
1. Strong performance among mid-sized models: Bielik-11B-v2.3-Instruct scored **8.556250**, placing it ahead of several well-known models like GPT-3.5-turbo (7.868750) and Mixtral-8x7b (7.637500). This indicates that Bielik-11B-v2.3-Instruct is competitive among mid-sized models, particularly those in the 11B-70B parameter range.
2. Competitive against larger models: Bielik-11B-v2.3-Instruct performs close to Meta-Llama-3.1-70B-Instruct (8.150000), Meta-Llama-3.1-405B-Instruct (8.168750) and even Mixtral-8x22b (8.231250), which have significantly more parameters. This efficiency relative to size makes it an attractive option for tasks with resource constraints. Bielik generated 100% of its answers in Polish, while other models (not typically trained for Polish) sometimes answer Polish questions in English.
3. Significant improvement over previous versions: compared to its predecessor, **Bielik-7B-Instruct-v0.1**, which scored **6.081250**, Bielik-11B-v2.3-Instruct gained almost **2.5 points**, highlighting substantial advancements in model quality, optimization, and training methodology.
For more information, including answers to the test tasks and scores in each category, visit the [MT-Bench PL](https://huggingface.co/spaces/speakleash/mt-bench-pl) website.
### Polish EQ-Bench
[Polish Emotional Intelligence Benchmark for LLMs](https://huggingface.co/spaces/speakleash/polish_eq-bench)
| Model | Parameters (B) | Score |
|-------------------------------|--------|-------|
| Mistral-Large-Instruct-2407 | 123 | 78.07 |
| Meta-Llama-3.1-405B-Instruct-FP8 | 405 | 77.23 |
| gpt-4o-2024-08-06 | ? | 75.15 |
| gpt-4-turbo-2024-04-09 | ? | 74.59 |
| Meta-Llama-3.1-70B-Instruct | 70 | 72.53 |
| Qwen2-72B-Instruct | 72 | 71.23 |
| Meta-Llama-3-70B-Instruct | 70 | 71.21 |
| gpt-4o-mini-2024-07-18 | ? | 71.15 |
| **Bielik-11B-v2.3-Instruct** | **11** | **70.86** |
| WizardLM-2-8x22B | 141 | 69.56 |
| Bielik-11B-v2.2-Instruct | 11 | 69.05 |
| Bielik-11B-v2.0-Instruct | 11 | 68.24 |
| Qwen1.5-72B-Chat | 72 | 68.03 |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 67.63 |
| Bielik-11B-v2.1-Instruct | 11 | 60.07 |
| Qwen1.5-32B-Chat | 32 | 59.63 |
| openchat-3.5-0106-gemma | 7 | 59.58 |
| aya-23-35B | 35 | 58.41 |
| gpt-3.5-turbo | ? | 57.70 |
| Qwen2-57B-A14B-Instruct | 57 | 57.64 |
| Mixtral-8x7B-Instruct-v0.1 | 47 | 57.61 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.21 |
| Mistral-7B-Instruct-v0.2 | 7 | 47.02 |
### MixEval
MixEval is a ground-truth-based English benchmark designed to evaluate Large Language Models (LLMs) efficiently and effectively. Key features of MixEval include:
1. Derived from off-the-shelf benchmark mixtures
2. Highly capable model ranking with a 0.96 correlation to Chatbot Arena
3. Local and quick execution, requiring only 6% of the time and cost compared to running MMLU
This benchmark provides a robust and time-efficient method for assessing LLM performance, making it a valuable tool for ongoing model evaluation and comparison.
| Model | MixEval | MixEval-Hard |
|-------------------------------|---------|--------------|
| Bielik-11B-v2.1-Instruct | 74.55 | 45.00 |
| **Bielik-11B-v2.3-Instruct** | **72.95** | **43.20** |
| Bielik-11B-v2.2-Instruct | 72.35 | 39.65 |
| Bielik-11B-v2.0-Instruct | 72.10 | 40.20 |
| Mistral-7B-Instruct-v0.2 | 70.00 | 36.20 |
The results show that Bielik-11B-v2.3-Instruct performs well on the MixEval benchmark, achieving a score of 72.95 on the standard MixEval and 43.20 on MixEval-Hard. Notably, Bielik-11B-v2.3-Instruct significantly outperforms Mistral-7B-Instruct-v0.2 on both metrics, demonstrating its improved capabilities despite being based on a similar architecture.
## Limitations and Biases
Bielik-11B-v2.3-Instruct is a quick demonstration that the base model can be easily fine-tuned to achieve compelling and promising performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs.
Bielik-11B-v2.3-Instruct can produce factually incorrect output and should not be relied on to produce factually accurate data. Bielik-11B-v2.3-Instruct was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
## Citation
Please cite this model using the following format:
```
@misc{ociepa2025bielik11bv2technical,
title={Bielik 11B v2 Technical Report},
author={Krzysztof Ociepa and Łukasz Flis and Krzysztof Wróbel and Adrian Gwoździej and Remigiusz Kinas},
year={2025},
eprint={2505.02410},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.02410},
}
@misc{Bielik11Bv23i,
title = {Bielik-11B-v2.3-Instruct model card},
author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof and {SpeakLeash Team} and {Cyfronet Team}},
year = {2024},
url = {https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct},
note = {Accessed: 2024-09-16}, % change this date
urldate = {2024-09-16} % change this date
}
@misc{ociepa2024bielik7bv01polish,
title={Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation},
author={Krzysztof Ociepa and Łukasz Flis and Krzysztof Wróbel and Adrian Gwoździej and Remigiusz Kinas},
year={2024},
eprint={2410.18565},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.18565},
}
```
## Responsible for training the model
* [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training
* [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - conceptualizing and coordinating DPO training, data preparation
* [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data preparation and ensuring data quality
* [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks
The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. Individuals who contributed to the creation of the model:
[Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/),
[Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/),
[Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/),
[Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/),
[Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/),
[Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/),
[Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/),
[Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/),
[Kacper Milan](https://www.linkedin.com/in/kacper-milan/),
[Jan Sowa](https://www.linkedin.com/in/janpiotrsowa/),
[Len Krawczyk](https://www.linkedin.com/in/magdalena-krawczyk-7810942ab/),
[Marta Seidler](https://www.linkedin.com/in/marta-seidler-751102259/),
[Agnieszka Ratajska](https://www.linkedin.com/in/agnieszka-ratajska/),
[Krzysztof Koziarek](https://www.linkedin.com/in/krzysztofkoziarek/),
[Szymon Pepliński](http://linkedin.com/in/szymonpeplinski/),
[Zuzanna Dabić](https://www.linkedin.com/in/zuzanna-dabic/),
[Filip Bogacz](https://linkedin.com/in/Fibogacci),
[Agnieszka Kosiak](https://www.linkedin.com/in/agn-kosiak),
[Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/),
[Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/).
Members of the ACK Cyfronet AGH team providing valuable support and expertise:
[Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/),
[Marek Magryś](https://www.linkedin.com/in/magrys/),
[Mieszko Cholewa ](https://www.linkedin.com/in/mieszko-cholewa-613726301/).
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
|
tessilab/kie3-facture_hn-lora | tessilab | "2025-05-06T11:28:40Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"region:us"
] | null | "2025-05-06T11:27:02Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: kie3-facture_hn-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kie3-facture_hn-lora
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the HNKIE3 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.50.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0 |
alinatl/Llama-3.2-3B-Instruct-qlora | alinatl | "2025-05-06T11:26:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"chat",
"spanish",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T10:18:31Z" | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
model_name: Llama-3.2-3B-Instruct-qlora
tags:
- generated_from_trainer
- lora
- chat
- spanish
- trl
- sft
licence: license
---
# Model Card for Llama-3.2-3B-Instruct-qlora
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alinatl/Llama-3.2-3B-Instruct-qlora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alinatl/huggingface/runs/yu3dt40g)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
niklasm222/qwen2.5-3b-1.75k-prolog-sp-struct-rwd1-silver-sweep-1 | niklasm222 | "2025-05-06T11:23:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T11:22:02Z" | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vmpsergio/57b95e25-0697-4d4a-8758-92180d153b7e | vmpsergio | "2025-05-06T11:22:22Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T11:09:18Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 57b95e25-0697-4d4a-8758-92180d153b7e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 49998b44a4e2c802_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/49998b44a4e2c802_train_data.json
type:
field_instruction: en
field_output: tr
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/57b95e25-0697-4d4a-8758-92180d153b7e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/49998b44a4e2c802_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 26085ff4-eb28-4f3d-aa80-a35ba35b9857
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 26085ff4-eb28-4f3d-aa80-a35ba35b9857
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 57b95e25-0697-4d4a-8758-92180d153b7e
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.0687 | 0.0034 | 400 | 7.0709 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MrRobotoAI/123 | MrRobotoAI | "2025-05-06T11:20:43Z" | 36 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/A1",
"base_model:merge:MrRobotoAI/A1",
"base_model:MrRobotoAI/A2",
"base_model:merge:MrRobotoAI/A2",
"base_model:MrRobotoAI/A6",
"base_model:merge:MrRobotoAI/A6",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T11:17:48Z" | ---
base_model:
- MrRobotoAI/A2
- MrRobotoAI/A1
- MrRobotoAI/A6
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/A2](https://huggingface.co/MrRobotoAI/A2) as a base.
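As background, task arithmetic merges models by adding each fine-tuned model's weight delta from the base, scaled by a coefficient. The sketch below illustrates the core idea only; it ignores mergekit's per-layer, per-projection weight schedules shown in the configuration that follows.

```python
import torch

def task_arithmetic_merge(base_sd, expert_sds, weights):
    """base_sd / expert_sds are state dicts with identical keys;
    weights[i] scales expert i's delta from the base model."""
    merged = {k: v.clone() for k, v in base_sd.items()}
    for sd, w in zip(expert_sds, weights):
        for k in merged:
            merged[k] += w * (sd[k] - base_sd[k])
    return merged
```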
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/A1](https://huggingface.co/MrRobotoAI/A1)
* [MrRobotoAI/A6](https://huggingface.co/MrRobotoAI/A6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/A6
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- value: 2
- model: MrRobotoAI/A1
parameters:
weight:
- filter: v_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- filter: o_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- filter: up_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- filter: gate_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- filter: down_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- value: 1
- model: MrRobotoAI/A2
parameters:
weight:
- filter: v_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- filter: o_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- filter: up_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- filter: gate_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- filter: down_proj
value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1]
- value: 0
base_model: MrRobotoAI/A2
dtype: bfloat16
```
|
dgambettaphd/M_llm3_gen6_WXS_doc1000_synt64_lr1e-04_acm_FRESH | dgambettaphd | "2025-05-06T11:17:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T11:16:50Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vertings6/67cf0748-f187-40e6-9aef-64131c991bc1 | vertings6 | "2025-05-06T11:15:35Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T11:09:31Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 67cf0748-f187-40e6-9aef-64131c991bc1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 49998b44a4e2c802_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/49998b44a4e2c802_train_data.json
type:
field_instruction: en
field_output: tr
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: vertings6/67cf0748-f187-40e6-9aef-64131c991bc1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/49998b44a4e2c802_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 26085ff4-eb28-4f3d-aa80-a35ba35b9857
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 26085ff4-eb28-4f3d-aa80-a35ba35b9857
warmup_steps: 20
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 67cf0748-f187-40e6-9aef-64131c991bc1
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.662 | 0.0042 | 400 | 8.2338 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lilia15/lora-sdxl-style | lilia15 | "2025-05-06T11:14:07Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2025-05-05T21:22:21Z" | ---
license: mit
---
```python
from diffusers import StableDiffusionXLPipeline
import torch

# Load the SDXL base pipeline in half precision
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repository
pipe.load_lora_weights("lilia15/lora-sdxl-style", weight_name="pytorch_lora_weights.safetensors")

# Usage:
image = pipe("salon style maghrébin").images[0]
image.save("sortie.png")
```
|
Zaynoid/JSL-MedQwen3-32b | Zaynoid | "2025-05-06T11:13:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T10:53:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Romain-XV/80808422-ea8e-4824-ae24-7f3c88698326 | Romain-XV | "2025-05-06T11:09:33Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:finetune:Intel/neural-chat-7b-v3-3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T09:23:08Z" | ---
base_model: Intel/neural-chat-7b-v3-3
library_name: transformers
model_name: 80808422-ea8e-4824-ae24-7f3c88698326
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 80808422-ea8e-4824-ae24-7f3c88698326
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Romain-XV/80808422-ea8e-4824-ae24-7f3c88698326", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/b1pe3qef)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Pyzeur/colony-mistral-finetune-train5 | Pyzeur | "2025-05-06T11:05:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T11:05:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheGardener/KD-Embedding-and-MLP-ver3-Llama3.2-0.62B-epoch-1st | TheGardener | "2025-05-06T11:05:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T11:05:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tihonn/fine_tuned_t5_on_vnx_1746529295_ | tihonn | "2025-05-06T11:04:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-05-06T11:01:41Z" | ---
library_name: transformers
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: fine_tuned_t5_on_vnx_1746529295_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_t5_on_vnx_1746529295_
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
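Since the card does not yet include a usage example, here is a minimal hedged sketch; the ROUGE metrics reported below suggest a summarization-style text2text task, and the input text and generation settings are assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tihonn/fine_tuned_t5_on_vnx_1746529295_"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Your input document goes here."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
# The reported eval gen_len is ~17 tokens, so a small generation budget suffices
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```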
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 166 | 5.6392 | 0.2259 | 0.0616 | 0.1509 | 0.1509 | 17.429 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
apalombit/Reinforce-pixelcopter | apalombit | "2025-05-06T11:03:23Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2025-05-06T10:28:56Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 19.50 +/- 16.26
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
filipesantoscv11/06bb2fef-5a9c-4022-a9d6-d722c6fedc24 | filipesantoscv11 | "2025-05-06T11:01:43Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:adapter:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T10:53:17Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-410m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 06bb2fef-5a9c-4022-a9d6-d722c6fedc24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-410m-deduped
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- b921edabec79aae0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b921edabec79aae0_train_data.json
type:
field_input: thinking
field_instruction: prompt
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/06bb2fef-5a9c-4022-a9d6-d722c6fedc24
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b921edabec79aae0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 62435afe-c2a4-43c2-93dd-3cb6e81c0ec2
wandb_project: s56-6
wandb_run: your_name
wandb_runid: 62435afe-c2a4-43c2-93dd-3cb6e81c0ec2
warmup_steps: 30
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 06bb2fef-5a9c-4022-a9d6-d722c6fedc24
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3309
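Because this repository stores a LoRA adapter rather than full model weights, inference requires attaching the adapter to the base model. A minimal sketch (model ids are taken from the config above; the prompt formatting and generation settings are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-410m-deduped"
adapter_id = "filipesantoscv11/06bb2fef-5a9c-4022-a9d6-d722c6fedc24"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Your instruction here"  # format as '{instruction} {input}' per the config
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```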
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2911 | 0.2814 | 500 | 1.3309 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik2987/ceaad633-56f7-4c5b-ae34-b512fb18b859 | dimasik2987 | "2025-05-06T10:58:43Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:adapter:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T10:52:58Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-410m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ceaad633-56f7-4c5b-ae34-b512fb18b859
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: EleutherAI/pythia-410m-deduped
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- b921edabec79aae0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b921edabec79aae0_train_data.json
type:
field_input: thinking
field_instruction: prompt
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik2987/ceaad633-56f7-4c5b-ae34-b512fb18b859
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/b921edabec79aae0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 62435afe-c2a4-43c2-93dd-3cb6e81c0ec2
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 62435afe-c2a4-43c2-93dd-3cb6e81c0ec2
warmup_steps: 20
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ceaad633-56f7-4c5b-ae34-b512fb18b859
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2621 | 0.4499 | 400 | 2.2920 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
naiweizi/mistral-dpo-harmless-vanilla-2e-4 | naiweizi | "2025-05-06T10:58:14Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2025-05-06T10:52:31Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
ajagota71/irl-reward-pythia-70m-checkpoint-10 | ajagota71 | "2025-05-06T10:57:52Z" | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"region:us"
] | null | "2025-05-06T10:57:23Z" | # IRL Reward Model
This model was trained using max_margin IRL to learn toxicity reward signals.
Base model: EleutherAI/pythia-70M
Original model: EleutherAI/pythia-70M
Detoxified model: ajagota71/pythia-70m-detox-epoch-100
---
tags:
- toxicity
- reward-model
library_name: transformers
---
|
rayonlabs/Qwen2-1_5B-Instruct-opus100-en-ko-323c1b33-f593-4ea0-987a-994275d0f8f1 | rayonlabs | "2025-05-06T10:55:16Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:4b301e9d750b5514_train_data.json",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"region:us"
] | null | "2025-05-06T10:55:14Z" | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 4b301e9d750b5514_train_data.json
base_model: Qwen/Qwen2-1.5B-Instruct
model-index:
- name: kk-aivio/1304984d-b6aa-4a33-985c-84a5011807c4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/1304984d-b6aa-4a33-985c-84a5011807c4
This model was trained from scratch on the /workspace/input_data/4b301e9d750b5514_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
VIZINTZOR/F5-TTS-THAI | VIZINTZOR | "2025-05-06T10:53:53Z" | 0 | 14 | null | [
"text-to-speech",
"th",
"dataset:Porameht/processed-voice-th-169k",
"base_model:SWivid/F5-TTS",
"base_model:finetune:SWivid/F5-TTS",
"license:cc0-1.0",
"region:us"
] | text-to-speech | "2025-03-10T07:23:00Z" | ---
datasets:
- Porameht/processed-voice-th-169k
language:
- th
pipeline_tag: text-to-speech
base_model:
- SWivid/F5-TTS
license: cc0-1.0
---
#### F5-TTS-THAI
Base model: [SWivid/F5-TTS](https://huggingface.co/SWivid/F5-TTS)
GitHub: https://github.com/SWivid/F5-TTS
Training datasets
- [Porameht/processed-voice-th-169k](https://huggingface.co/datasets/Porameht/processed-voice-th-169k)
- [Common Voice](https://commonvoice.mozilla.org/)
- Size
  - 200,000 audio clips
  - Thai: about 190 hours
  - English: about 40 hours
- Latest checkpoint
  - 600,000 steps
- Supported languages: Thai and English.
- Reading of long passages, or of certain words, is still not fully accurate.
### Usage
GitHub: https://github.com/VYNCX/F5-TTS-THAI
```sh
git clone https://github.com/VYNCX/F5-TTS-THAI.git
cd F5-TTS-THAI
python -m venv venv
call venv/scripts/activate
pip install git+https://github.com/VYNCX/F5-TTS-THAI.git
# Required for efficient use with a GPU
pip install torch==2.3.0+cu118 torchaudio==2.3.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
```
You can run the `app-webui.bat` file, or:
```sh
python src/f5_tts/f5_tts_webui.py
```
### Training and Finetuning
Run on Google Colab: [Finetune](https://colab.research.google.com/drive/1jwzw4Jn1qF8-F0o3TND68hLHdIqqgYEe?usp=sharing), or locally:
- Install
```sh
cd F5-TTS-THAI
pip install -e .
```
- Launch Gradio
```sh
f5-tts_finetune-gradio
```
### Audio Samples
- Reference audio
<audio controls><source src="https://huggingface.co/VIZINTZOR/F5-TTS-THAI/resolve/main/sample/ref_audio.wav" type="audio/wav"></audio>
- Spoken text: ฉันเดินทางไปเที่ยวที่จังหวัดเชียงใหม่ในช่วงฤดูหนาวเพื่อสัมผัสอากาศเย็นสบาย ("I traveled to Chiang Mai province in the winter to experience the cool weather")
- Generated audio
<audio controls><source src="https://huggingface.co/VIZINTZOR/F5-TTS-THAI/resolve/main/sample/tts_gen.wav" type="audio/wav"></audio>
- Seed : 4213936761049775187
- English with Thai words
- Reference audio
<audio controls><source src="https://huggingface.co/VIZINTZOR/F5-TTS-THAI/resolve/main/sample/ref_audio_2.wav" type="audio/wav"></audio>
- Spoken text: When there is not enough fuel pressure, the engine may not start.
- เสียงที่สร้างขึ้น
<audio controls><source src="https://huggingface.co/VIZINTZOR/F5-TTS-THAI/resolve/main/sample/tts_gen_2.wav" type="audio/wav"></audio>
- Text: Today I went to the market and bought some "ข้าวเหนียวหมูปิ้ง" before heading to the park for a picnic. (The Thai phrase means "grilled pork with sticky rice".)
kblz/mms-tts-amh | kblz | "2025-05-06T10:53:19Z" | 59 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2025-05-06T08:51:43Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
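The card leaves this section blank. Given the `vits` and `text-to-audio` tags and the MMS-style repo name, a minimal hedged sketch using the standard transformers VITS API (the Amharic sample string is an assumption):

```python
import torch
from transformers import VitsModel, AutoTokenizer

model_id = "kblz/mms-tts-amh"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = VitsModel.from_pretrained(model_id)

inputs = tokenizer("ሰላም ለዓለም", return_tensors="pt")  # assumed Amharic sample text
with torch.no_grad():
    waveform = model(**inputs).waveform  # audio at model.config.sampling_rate
```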
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
augustocsc/Se124M100KInfDelimiter | augustocsc | "2025-05-06T10:53:00Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
] | null | "2025-05-06T07:51:36Z" | ---
library_name: peft
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: Se124M100KInfDelimiter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Se124M100KInfDelimiter
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.1538 | 1.0 | 2090 | 0.5678 |
| 0.1412 | 2.0 | 4180 | 0.5413 |
| 0.1405 | 3.0 | 6270 | 0.5325 |
| 0.1355 | 4.0 | 8360 | 0.5241 |
| 0.1364 | 5.0 | 10450 | 0.5211 |
| 0.1341 | 6.0 | 12540 | 0.5170 |
| 0.1312 | 7.0 | 14630 | 0.5123 |
| 0.1304 | 8.0 | 16720 | 0.5078 |
| 0.1301 | 9.0 | 18810 | 0.5064 |
| 0.1286 | 10.0 | 20900 | 0.5058 |
| 0.1308 | 11.0 | 22990 | 0.5022 |
| 0.1292 | 12.0 | 25080 | 0.5007 |
| 0.1287 | 13.0 | 27170 | 0.5005 |
| 0.1306 | 14.0 | 29260 | 0.4976 |
| 0.1312 | 15.0 | 31350 | 0.4975 |
| 0.1268 | 16.0 | 33440 | 0.4963 |
| 0.1267 | 17.0 | 35530 | 0.4944 |
| 0.1273 | 18.0 | 37620 | 0.4932 |
| 0.1243 | 19.0 | 39710 | 0.4925 |
| 0.1266 | 20.0 | 41800 | 0.4912 |
| 0.127 | 21.0 | 43890 | 0.4914 |
| 0.1278 | 22.0 | 45980 | 0.4905 |
| 0.1276 | 23.0 | 48070 | 0.4899 |
| 0.1285 | 24.0 | 50160 | 0.4888 |
| 0.1264 | 25.0 | 52250 | 0.4889 |
| 0.1256 | 26.0 | 54340 | 0.4881 |
| 0.1251 | 27.0 | 56430 | 0.4876 |
| 0.1291 | 28.0 | 58520 | 0.4869 |
| 0.1254 | 29.0 | 60610 | 0.4867 |
| 0.1268 | 30.0 | 62700 | 0.4863 |
| 0.1247 | 31.0 | 64790 | 0.4857 |
| 0.126 | 32.0 | 66880 | 0.4855 |
| 0.1262 | 33.0 | 68970 | 0.4852 |
| 0.1257 | 34.0 | 71060 | 0.4848 |
| 0.1246 | 35.0 | 73150 | 0.4846 |
| 0.1261 | 36.0 | 75240 | 0.4839 |
| 0.1269 | 37.0 | 77330 | 0.4839 |
| 0.1244 | 38.0 | 79420 | 0.4836 |
| 0.1243 | 39.0 | 81510 | 0.4836 |
| 0.1256 | 40.0 | 83600 | 0.4834 |
| 0.1237 | 41.0 | 85690 | 0.4827 |
| 0.1244 | 42.0 | 87780 | 0.4833 |
| 0.1234 | 43.0 | 89870 | 0.4828 |
| 0.1255 | 44.0 | 91960 | 0.4824 |
| 0.1272 | 45.0 | 94050 | 0.4826 |
| 0.1258 | 46.0 | 96140 | 0.4824 |
| 0.1264 | 47.0 | 98230 | 0.4825 |
| 0.1236 | 48.0 | 100320 | 0.4824 |
| 0.1254 | 49.0 | 102410 | 0.4825 |
| 0.1242 | 50.0 | 104500 | 0.4823 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1 |
moyixiao/llama3_pissa_r64_df | moyixiao | "2025-05-06T10:51:38Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:moyixiao/Llama-3.2-1B",
"base_model:adapter:moyixiao/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | "2025-05-05T04:17:17Z" | ---
library_name: peft
license: llama3.2
base_model: moyixiao/Llama-3.2-1B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: llama3_pissa_r64_df
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3_pissa_r64_df
This model is a fine-tuned version of [moyixiao/Llama-3.2-1B](https://huggingface.co/moyixiao/Llama-3.2-1B) on the AlpacaClean dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0 |
lzw1008/ConspEmoLLM-v2 | lzw1008 | "2025-05-06T10:48:14Z" | 4 | 0 | null | [
"safetensors",
"llama",
"license:mit",
"region:us"
] | null | "2025-05-05T19:30:43Z" | ---
license: mit
---
## Usage
You can use the models in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model and predict the result:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
print(device)
MODEL_PATH="lzw1008/ConspEmoLLM-v2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16, device_map='auto')
prompt = '''Human:
Task: Classify the text regarding COVID-19 conspiracy theories or misinformation into one of the following three classes: 0. Unrelated. 1. Related (but not supporting). 2. Conspiracy (related and supporting). Text: The truth is this race towards the \"Green New Deal\" and Agenda 2030 is the globalists' plan to stick all the people who have not died from the vaccine and the planned pandemic of 2024 into 8 \"Mega Cities\" where the government will have full 24/7 control and surveillance over you Class:
Assistant:
'''
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(device)
attention_mask = inputs["attention_mask"].to(device)
generate_ids = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=256)
response = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
print(response)
>>> 2. Conspiracy (related and supporting).
```
|
shubhamprshr/Qwen2.5-3B-Instruct_math_sgrpo_classic_0.5_0.5_True_1200 | shubhamprshr | "2025-05-06T10:45:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:gsm8k-dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-05T07:26:15Z" | ---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: gsm8k-dataset
library_name: transformers
model_name: Qwen2.5-3B-Instruct_math_sgrpo_classic_0.5_0.5_True_1200
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-3B-Instruct_math_sgrpo_classic_0.5_0.5_True_1200
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [gsm8k-dataset](https://huggingface.co/datasets/gsm8k-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-3B-Instruct_math_sgrpo_classic_0.5_0.5_True_1200", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/MATH/runs/binaw4fm)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
locuslab/mix_ift_v9-smollm2-1.7b-score0_rephrase123_mild_ref45_metadata_5p-600B-metamix3p-1k-0 | locuslab | "2025-05-06T10:42:18Z" | 0 | 0 | null | [
"safetensors",
"llama",
"model",
"transformer",
"smollm2",
"license:mit",
"region:us"
] | null | "2025-05-06T10:38:25Z" | ---
version: main
family: smollm2-1.7b
model_name: -score0_rephrase123_mild_ref45_metadata_5p-600B-metamix3p-1k-0
license: mit
tags:
- model
- transformer
- smollm2
---
# SmolLM2 -score0_rephrase123_mild_ref45_metadata_5p-600B-metamix3p-1k-0 (Version: main)
## Model Details
- **Architecture:** SmolLM2
- **Parameters:** 1.7B
## Training Configuration
```yaml
optimizer:
class_path: torch.optim.AdamW
init_args:
lr: 0.0005
weight_decay: 0.01
precision: bf16-mixed
seed: 42
train:
global_batch_size: 1024
max_seq_length: 2048
max_tokens: 600000000000
micro_batch_size: 8
```
## Model Loading and Revision System
This repository hosts multiple revisions of the model.
To load a specific revision, use the `revision` parameter. For example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("locuslab/-score0_rephrase123_mild_ref45_metadata_5p-600B-metamix3p-1k-0", revision="final")
tokenizer = AutoTokenizer.from_pretrained("locuslab/-score0_rephrase123_mild_ref45_metadata_5p-600B-metamix3p-1k-0", revision="final")
```
Replace `"final"` with the desired revision.
|
RiverHe/stepvideo-2tv | RiverHe | "2025-05-06T10:41:33Z" | 58 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-video",
"arxiv:2502.10248",
"license:mit",
"diffusers:StepVideoPipeline",
"region:us"
] | text-to-video | "2025-05-04T00:14:59Z" | ---
license: mit
library_name: diffusers
pipeline_tag: text-to-video
---
<p align="center">
<img src="assets/logo.png" height=100>
</p>
<div align="center">
<a href="https://yuewen.cn/videos"><img src="https://img.shields.io/static/v1?label=Step-Video&message=Web&color=green"></a>  
<a href="https://arxiv.org/abs/2502.10248"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a>  
<a href="https://x.com/StepFun_ai"><img src="https://img.shields.io/static/v1?label=X.com&message=Web&color=blue"></a>  
</div>
<div align="center">
<a href="https://huggingface.co/stepfun-ai/stepvideo-t2v"><img src="https://img.shields.io/static/v1?label=Step-Video-T2V&message=HuggingFace&color=yellow"></a>  
<a href="https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo"><img src="https://img.shields.io/static/v1?label=Step-Video-T2V-Turbo&message=HuggingFace&color=yellow"></a>  
<a href="https://github.com/stepfun-ai/Step-Video-T2V"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=black"></a>  
</div>
## 🔥🔥🔥 News!!
* Feb 17, 2025: 👋 We release the inference code and model weights of Step-Video-T2V. [Download](https://huggingface.co/stepfun-ai/stepvideo-t2v)
* Feb 17, 2025: 👋 We release the inference code and model weights of Step-Video-T2V-Turbo. [Download](https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo)
* Feb 17, 2025: 🎉 We have made our technical report available as open source. [Read](https://arxiv.org/abs/2502.10248)
## Video Demos
<table border="0" style="width: 100%; text-align: center; margin-top: 1px;">
<tr>
<td><video src="https://github.com/user-attachments/assets/9274b351-595d-41fb-aba3-f58e6e91603a" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/2f6b3ad5-e93b-436b-98bc-4701182d8652" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/67d20ee7-ad78-4b8f-80f6-3fdb00fb52d8" width="100%" controls autoplay loop muted></video></td>
</tr>
<tr>
<td><video src="https://github.com/user-attachments/assets/9abce409-105d-4a8a-ad13-104a98cc8a0b" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/8d1e1a47-048a-49ce-85f6-9d013f2d8e89" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/32cf4bd1-ec1f-4f77-a488-cd0284aa81bb" width="100%" controls autoplay loop muted></video></td>
</tr>
<tr>
<td><video src="https://github.com/user-attachments/assets/f95a7a49-032a-44ea-a10f-553d4e5d21c6" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/3534072e-87d9-4128-a87f-28fcb5d951e0" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/6d893dad-556d-4527-a882-666cba3d10e9" width="100%" controls autoplay loop muted></video></td>
</tr>
</table>
## Table of Contents
1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Download](#3-model-download)
4. [Model Usage](#4-model-usage)
5. [Benchmark](#5-benchmark)
6. [Online Engine](#6-online-engine)
7. [Citation](#7-citation)
8. [Acknowledgement](#8-acknowledgement)
## 1. Introduction
We present **Step-Video-T2V**, a state-of-the-art (SoTA) text-to-video pre-trained model with 30 billion parameters and the capability to generate videos up to 204 frames. To enhance both training and inference efficiency, we propose a deep compression VAE for videos, achieving 16x16 spatial and 8x temporal compression ratios. Direct Preference Optimization (DPO) is applied in the final stage to further enhance the visual quality of the generated videos. Step-Video-T2V's performance is evaluated on a novel video generation benchmark, **Step-Video-T2V-Eval**, demonstrating its SoTA text-to-video quality compared to both open-source and commercial engines.
## 2. Model Summary
In Step-Video-T2V, videos are represented by a high-compression Video-VAE, achieving 16x16 spatial and 8x temporal compression ratios. User prompts are encoded using two bilingual pre-trained text encoders to handle both English and Chinese. A DiT with 3D full attention is trained using Flow Matching and is employed to denoise input noise into latent frames, with text embeddings and timesteps serving as conditioning factors. To further enhance the visual quality of the generated videos, a video-based DPO approach is applied, which effectively reduces artifacts and ensures smoother, more realistic video outputs.
<p align="center">
<img width="80%" src="assets/model_architecture.png">
</p>
### 2.1. Video-VAE
A deep compression Variational Autoencoder (VideoVAE) is designed for video generation tasks, achieving 16x16 spatial and 8x temporal compression ratios while maintaining exceptional video reconstruction quality. This compression not only accelerates training and inference but also aligns with the diffusion process's preference for condensed representations.
<p align="center">
<img width="70%" src="assets/dcvae.png">
</p>
### 2.2. DiT w/ 3D Full Attention
Step-Video-T2V is built on the DiT architecture, which has 48 layers, each containing 48 attention heads, with each head’s dimension set to 128. AdaLN-Single is leveraged to incorporate the timestep condition, while QK-Norm in the self-attention mechanism is introduced to ensure training stability. Additionally, 3D RoPE is employed, playing a critical role in handling sequences of varying video lengths and resolutions.
<p align="center">
<img width="80%" src="assets/dit.png">
</p>
### 2.3. Video-DPO
In Step-Video-T2V, we incorporate human feedback through Direct Preference Optimization (DPO) to further enhance the visual quality of the generated videos. DPO leverages human preference data to fine-tune the model, ensuring that the generated content aligns more closely with human expectations. The overall DPO pipeline is shown below, highlighting its critical role in improving both the consistency and quality of the video generation process.
<p align="center">
<img width="100%" src="assets/dpo_pipeline.png">
</p>
## 3. Model Download
| Models | 🤗Huggingface | 🤖Modelscope |
|:-------:|:-------:|:-------:|
| Step-Video-T2V | [download](https://huggingface.co/stepfun-ai/stepvideo-t2v) | [download](https://www.modelscope.cn/models/stepfun-ai/stepvideo-t2v)
| Step-Video-T2V-Turbo (Inference Step Distillation) | [download](https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo) | [download](https://www.modelscope.cn/models/stepfun-ai/stepvideo-t2v-turbo)
## 4. Model Usage
### 📜 4.1 Requirements
The following table shows the requirements for running Step-Video-T2V model (batch size = 1, w/o cfg distillation) to generate videos:
| Model | height/width/frame | Peak GPU Memory | 50 steps w flash-attn | 50 steps w/o flash-attn |
|:------------:|:------------:|:------------:|:------------:|:------------:|
| Step-Video-T2V | 544px × 992px × 204f | 77.64 GB | 743 s | 1232 s |
| Step-Video-T2V | 544px × 992px × 136f | 72.48 GB | 408 s | 605 s |
* An NVIDIA GPU with CUDA support is required.
* The model is tested on four GPUs.
* **Recommended**: We recommend to use GPUs with 80GB of memory for better generation quality.
* Tested operating system: Linux
* The self-attention in the text encoder (step_llm) only supports CUDA compute capabilities sm_80, sm_86, and sm_90.
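As a quick hedged check of that capability constraint before installing:

```python
import torch

# step_llm's self-attention needs compute capability sm_80, sm_86, or sm_90 (see above)
major, minor = torch.cuda.get_device_capability()
assert (major, minor) in [(8, 0), (8, 6), (9, 0)], f"Unsupported GPU: sm_{major}{minor}"
```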
### 🔧 4.2 Dependencies and Installation
- Python >= 3.10.0 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 2.3-cu121](https://pytorch.org/)
- [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads)
- [FFmpeg](https://www.ffmpeg.org/)
```bash
git clone https://github.com/stepfun-ai/Step-Video-T2V.git
conda create -n stepvideo python=3.10
conda activate stepvideo
cd Step-Video-T2V
pip install -e .
pip install flash-attn --no-build-isolation ## flash-attn is optional
```
### 🚀 4.3 Inference Scripts
- We employed a decoupling strategy for the text encoder, VAE decoding, and DiT to optimize GPU resource utilization by DiT. As a result, a dedicated GPU is needed to handle the API services for the text encoder's embeddings and VAE decoding.
```bash
python api/call_remote_server.py --model_dir where_you_download_dir & ## We assume you have more than 4 GPUs available. This command will return the URL for both the caption API and the VAE API. Please use the returned URL in the following command.
parallel=4 # or parallel=8
url='127.0.0.1'
model_dir=where_you_download_dir
torchrun --nproc_per_node $parallel run_parallel.py --model_dir $model_dir --vae_url $url --caption_url $url --ulysses_degree $parallel --prompt "一名宇航员在月球上发现一块石碑,上面印有“stepfun”字样,闪闪发光" --infer_steps 50 --cfg_scale 9.0 --time_shift 13.0
```
### 🚀 4.4 Best-of-Practice Inference settings
Step-Video-T2V exhibits robust performance in inference settings, consistently generating high-fidelity and dynamic videos. However, our experiments reveal that variations in inference hyperparameters can have a substantial effect on the trade-off between video fidelity and dynamics. To achieve optimal results, we recommend the following best practices for tuning inference parameters:
| Models | infer_steps | cfg_scale | time_shift | num_frames |
|:-------:|:-------:|:-------:|:-------:|:-------:|
| Step-Video-T2V | 30-50 | 9.0 | 13.0 | 204
| Step-Video-T2V-Turbo (Inference Step Distillation) | 10-15 | 5.0 | 17.0 | 204 |
## 5. Benchmark
We are releasing [Step-Video-T2V Eval](https://github.com/stepfun-ai/Step-Video-T2V/blob/main/benchmark/Step-Video-T2V-Eval) as a new benchmark, featuring 128 Chinese prompts sourced from real users. This benchmark is designed to evaluate the quality of generated videos across 11 distinct categories: Sports, Food, Scenery, Animals, Festivals, Combination Concepts, Surreal, People, 3D Animation, Cinematography, and Style.
## 6. Online Engine
The online version of Step-Video-T2V is available on [跃问视频](https://yuewen.cn/videos), where you can also explore some impressive examples.
## 7. Citation
```
@misc{ma2025stepvideot2vtechnicalreportpractice,
title={Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model},
author={Guoqing Ma and Haoyang Huang and Kun Yan and Liangyu Chen and Nan Duan and Shengming Yin and Changyi Wan and Ranchen Ming and Xiaoniu Song and Xing Chen and Yu Zhou and Deshan Sun and Deyu Zhou and Jian Zhou and Kaijun Tan and Kang An and Mei Chen and Wei Ji and Qiling Wu and Wen Sun and Xin Han and Yanan Wei and Zheng Ge and Aojie Li and Bin Wang and Bizhu Huang and Bo Wang and Brian Li and Changxing Miao and Chen Xu and Chenfei Wu and Chenguang Yu and Dapeng Shi and Dingyuan Hu and Enle Liu and Gang Yu and Ge Yang and Guanzhe Huang and Gulin Yan and Haiyang Feng and Hao Nie and Haonan Jia and Hanpeng Hu and Hanqi Chen and Haolong Yan and Heng Wang and Hongcheng Guo and Huilin Xiong and Huixin Xiong and Jiahao Gong and Jianchang Wu and Jiaoren Wu and Jie Wu and Jie Yang and Jiashuai Liu and Jiashuo Li and Jingyang Zhang and Junjing Guo and Junzhe Lin and Kaixiang Li and Lei Liu and Lei Xia and Liang Zhao and Liguo Tan and Liwen Huang and Liying Shi and Ming Li and Mingliang Li and Muhua Cheng and Na Wang and Qiaohui Chen and Qinglin He and Qiuyan Liang and Quan Sun and Ran Sun and Rui Wang and Shaoliang Pang and Shiliang Yang and Sitong Liu and Siqi Liu and Shuli Gao and Tiancheng Cao and Tianyu Wang and Weipeng Ming and Wenqing He and Xu Zhao and Xuelin Zhang and Xianfang Zeng and Xiaojia Liu and Xuan Yang and Yaqi Dai and Yanbo Yu and Yang Li and Yineng Deng and Yingming Wang and Yilei Wang and Yuanwei Lu and Yu Chen and Yu Luo and Yuchu Luo and Yuhe Yin and Yuheng Feng and Yuxiang Yang and Zecheng Tang and Zekai Zhang and Zidong Yang and Binxing Jiao and Jiansheng Chen and Jing Li and Shuchang Zhou and Xiangyu Zhang and Xinhao Zhang and Yibo Zhu and Heung-Yeung Shum and Daxin Jiang},
year={2025},
eprint={2502.10248},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2502.10248},
}
```
## 8. Acknowledgement
- We would like to express our sincere thanks to the [xDiT](https://github.com/xdit-project/xDiT) team for their invaluable support and parallelization strategy.
- Our code will be integrated into the official repository of [Huggingface/Diffusers](https://github.com/huggingface/diffusers).
- We thank the [FastVideo](https://github.com/hao-ai-lab/FastVideo) team for their continued collaboration and look forward to launching inference acceleration solutions together in the near future. |
18-New-Tutorial-Shah-Sapna-Kumari-Viral-XX/TRENDING.Viral.Clip.Sapna.Shah.Viral.Video.Original.Official | 18-New-Tutorial-Shah-Sapna-Kumari-Viral-XX | "2025-05-06T10:40:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-06T10:35:40Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/3rv9ct3b?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
L𝚎aked V𝚒deo Actor Sapna Shah V𝚒ral V𝚒deo Original V𝚒deo L𝚒nk On Social Media X Trending Tiktok (18+)
L𝚎aked V𝚒deo Actor Sapna Shah Original V𝚒deo V𝚒ral V𝚒deo L𝚎aked on X Twitter
Actor Sapna Shah Original V𝚒deo V𝚒deo oficial twitter
L𝚎aked V𝚒deo Actor Sapna Shah Original V𝚒deo V𝚒ral V𝚒deo L𝚎aked on X Twitter..
L𝚎aked V𝚒ral l𝚒nk 2025 L𝚎aked V𝚒deo
XnX V𝚒ral L𝚎aked V𝚒ral l𝚒nk Sapna Shah V𝚒ral V𝚒deo L𝚎aked on X Twitter
latest Sapna Shah L𝚎aked V𝚒deo V𝚒ral On Social Media |
tavtav/Hana-Roleplay | tavtav | "2025-05-06T10:40:06Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"mergekit",
"region:us"
] | null | "2025-05-06T10:36:21Z" | ---
base_model: []
library_name: peft
tags:
- mergekit
- peft
---
# Hamanasu-Lora-Extracted
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from C:\Users\Tav\Downloads\Hamanasu-4B-RP-v2 and uses C:\Users\Tav\Downloads\Hamanasu-KTO-V2 as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
E:\Users\Tav\miniconda3\Scripts\mergekit-extract-lora --model C:\Users\Tav\Downloads\Hamanasu-4B-RP-v2 --base-model C:\Users\Tav\Downloads\Hamanasu-KTO-V2 --out-path C:\Users\Tav\Downloads\Hamanasu-Lora-Extracted --max-rank=128
```
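The extracted adapter can then be re-applied to the base model with PEFT; a minimal sketch (the local paths mirror the command above and are assumptions about your layout):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("C:/Users/Tav/Downloads/Hamanasu-KTO-V2")
model = PeftModel.from_pretrained(base, "C:/Users/Tav/Downloads/Hamanasu-Lora-Extracted")
merged = model.merge_and_unload()  # fold the rank-128 LoRA back into the base weights
```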
|
quanganh22/pegasus-x-cui | quanganh22 | "2025-05-06T10:39:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus_x",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-05-06T10:38:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Afaf/Qwen3_1.7B-GRPO-math-reasoning | Afaf | "2025-05-06T10:38:38Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T10:33:51Z" | ---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Afaf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-1.7b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tavtav/Hana-Adventure-V2 | tavtav | "2025-05-06T10:35:39Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"mergekit",
"region:us"
] | null | "2025-05-06T10:30:43Z" | ---
base_model: []
library_name: peft
tags:
- mergekit
- peft
---
# Hamanasu-Lora-Extracted
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from C:\Users\Tav\Downloads\Hamanasu-4B-Adventure-E6 and uses C:\Users\Tav\Downloads\Hamanasu-KTO-V2 as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
E:\Users\Tav\miniconda3\Scripts\mergekit-extract-lora --model C:\Users\Tav\Downloads\Hamanasu-4B-Adventure-E6 --base-model C:\Users\Tav\Downloads\Hamanasu-KTO-V2 --out-path C:\Users\Tav\Downloads\Hamanasu-Lora-Extracted --max-rank=128
```
|
FredMike23/bafia | FredMike23 | "2025-05-06T10:29:41Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-04-26T20:50:44Z" | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: bafia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bafia
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9341
- Bleu: 0.0643
- Gen Len: 21.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 8.8183 | 1.0 | 650 | 6.7783 | 0.022 | 21.0 |
| 6.9315 | 2.0 | 1300 | 6.2496 | 0.0249 | 21.0 |
| 6.5348 | 3.0 | 1950 | 6.0390 | 0.0596 | 21.0 |
| 6.2691 | 4.0 | 2600 | 5.9579 | 0.0639 | 21.0 |
| 6.2221 | 5.0 | 3250 | 5.9341 | 0.0643 | 21.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
gajula21/youtube-sentiment-model-telugu | gajula21 | "2025-05-06T10:22:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"youtube",
"sentiiments",
"telugu",
"comments",
"en",
"te",
"base_model:AmaanP314/youtube-xlm-roberta-base-sentiment-multilingual",
"base_model:finetune:AmaanP314/youtube-xlm-roberta-base-sentiment-multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-05-06T09:40:23Z" | ---
library_name: transformers
tags:
- youtube
- sentiiments
- telugu
- comments
license: apache-2.0
language:
- en
- te
metrics:
- accuracy
base_model:
- AmaanP314/youtube-xlm-roberta-base-sentiment-multilingual
---
# Model Overview
This model is a fine-tuned version of AmaanP314/youtube-xlm-roberta-base-sentiment-multilingual. While the base model targets general multilingual YouTube comments, this version has been fine-tuned on a large dataset of Telugu comments, enabling it to classify Telugu (native script), transliterated Telugu, and English YouTube comments into three sentiment categories: Negative, Neutral, and Positive.
## Model Details
Base model: AmaanP314/youtube-xlm-roberta-base-sentiment-multilingual
Fine-tuned for: Telugu + English YouTube comment sentiment analysis
Languages Supported:
Telugu (native script)
Transliterated Telugu
English
Labels:
0: Negative
1: Neutral
2: Positive
## Dataset & Labeling
Source: Comments were extracted from YouTube using the YouTube Data API.
Comment Count:
Train set: 73,943 comments
Validation set: 8,216 comments
Labeling Method: Comments were labeled using Gemini 1.5 Pro (Google’s LLM) via a sentiment classification prompt to auto-assign one of the three sentiment classes.
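The exact labeling prompt isn't published; as a rough illustration only, LLM-assisted labeling of this kind might look like the sketch below (the prompt wording is hypothetical, and the `google-generativeai` calls are the library's standard ones, not the authors' code):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

def label_comment(comment: str) -> str:
    # Hypothetical prompt; the authors' actual wording is not published.
    prompt = (
        "Classify the sentiment of this YouTube comment as exactly one of: "
        "Negative, Neutral, Positive.\n"
        f"Comment: {comment}\nSentiment:"
    )
    response = model.generate_content(prompt)
    return response.text.strip()
```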
#### How to Use
The model can be used via an API endpoint or loaded locally using the Hugging Face Transformers library. For example, using Python:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "gajula21/youtube-sentiment-model-telugu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
comments = [
"ఈ సినిమా చాలా బాగుంది!",
"ఈ వీడియో చాలా బోరు పడింది",
"ఇది మామూలు వీడియో",
]
inputs = tokenizer(comments, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=1)
label_mapping = {0: "Negative", 1: "Neutral", 2: "Positive"}
sentiments = [label_mapping[p.item()] for p in predictions]
print(sentiments)
```
## Training Configuration
Framework: Hugging Face Transformers (PyTorch)
Tokenizer: AutoTokenizer from base model
Loss Function: CrossEntropyLoss with label_smoothing=0.1
Batch Size: 1176 (per device)
Gradient Accumulation Steps: 2
Learning Rate: 1e-5
Weight Decay: 0.05
Epochs: 3
Evaluation Strategy: Every 125 steps
Early Stopping: Patience of 5 evaluation steps
Mixed Precision: Enabled (fp16)
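These settings map onto the standard `transformers` `Trainer` roughly as follows — a hedged reconstruction from the list above, not the authors' actual script:

```python
from transformers import TrainingArguments, EarlyStoppingCallback

# Sketch reconstructed from the configuration listed above.
args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-5,
    weight_decay=0.05,
    num_train_epochs=3,
    per_device_train_batch_size=1176,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",     # evaluate every 125 steps
    eval_steps=125,
    save_steps=125,
    label_smoothing_factor=0.1,      # CrossEntropyLoss with label smoothing
    fp16=True,                       # mixed precision
    load_best_model_at_end=True,     # needed for early stopping
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=5)
```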
## Evaluation Results
| Step | Training Loss | Validation Loss | Accuracy |
| ---- | ------------- | --------------- | -------- |
| 125 | 0.7637 | 0.7355 | 72.97% |
| 250 | 0.7289 | 0.7110 | 74.57% |
| 375 | 0.7155 | 0.6982 | 75.72% |
| 500 | 0.6912 | 0.7005 | 75.58% |
| 625 | 0.6851 | 0.6821 | 76.79% |
| 750 | 0.6606 | 0.6897 | 76.61% |
| 875 | 0.6464 | 0.6838 | 76.68% |
| 1000 | 0.6542 | 0.6676 | 77.45% |
| 1125 | 0.6501 | 0.6602 | 78.04% |
| 1250 | 0.6374 | 0.6730 | 77.81% |
| 1375 | 0.6143 | 0.6682 | 77.99% |
| 1500 | 0.6175 | 0.6665 | 78.10% |
| 1625 | 0.6183 | 0.6646 | 78.16% |
## Citation [optional]
```
@misc{gajula21_youtube_sentiment_2025,
author = {Gajula Vivek},
title = {Telugu-English YouTube Sentiment Classifier},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/gajula21/youtube-sentiment-model-telugu}},
}
```
|
yesbreaddog/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-strong_thriving_camel | yesbreaddog | "2025-05-06T10:21:44Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am strong thriving camel",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-29T06:22:24Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-strong_thriving_camel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am strong thriving camel
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-strong_thriving_camel
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yesbreaddog/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-strong_thriving_camel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zshx/hh3-qwen32b-dev01 | zshx | "2025-05-06T10:11:27Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-06T10:11:27Z" | ---
license: apache-2.0
---
|
NEW-EXCLUSIVE-FULL-VIDEO-LINK/Full.Clip.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaked.Official | NEW-EXCLUSIVE-FULL-VIDEO-LINK | "2025-05-06T10:08:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-06T10:08:50Z" |
<a href="https://sdu.sk/9Ip"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
wuyanzu4692/task-8-google-gemma-2b | wuyanzu4692 | "2025-05-06T10:08:30Z" | 125 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2025-04-27T07:23:34Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
gradientrouting-spar/qwen_ft_May3_m2_p1_numAll | gradientrouting-spar | "2025-05-06T10:00:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T09:59:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
New-Tutorial-Imsha-Rehman-Viral-Video/Full.Clip.Imsha.Rehman.Viral.Video.Leaked.Official | New-Tutorial-Imsha-Rehman-Viral-Video | "2025-05-06T09:59:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-06T09:59:06Z" |
<a href="https://sdu.sk/9Ip"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
ail-sa/varshaa_full_long | ail-sa | "2025-05-06T09:57:30Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-05-06T09:26:34Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sidf
---
# Varshaa_Full_Long
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sidf` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sidf",
"lora_weights": "https://huggingface.co/ail-sa/varshaa_full_long/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ail-sa/varshaa_full_long', weight_name='lora.safetensors')
image = pipeline('Sidf').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
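The LoRA's strength can also be scaled before generating — a minimal sketch using standard diffusers utilities (the 0.8 value is illustrative, not a recommendation from the trainer):

```py
# Hedged sketch: bake the adapter in at 80% strength, generate, then restore base weights.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('Sidf').images[0]
pipeline.unfuse_lora()
```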
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ail-sa/varshaa_full_long/discussions) to add images that show off what you’ve made with this LoRA.
|
Kai12341/my-wordpiece-tokenizer | Kai12341 | "2025-05-06T09:54:48Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T09:47:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lgcharpe/babylm-baseline-100m-gpt-bert-mixed | lgcharpe | "2025-05-06T09:52:31Z" | 16 | 0 | null | [
"pytorch",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | "2025-05-01T15:48:36Z" | ---
license: apache-2.0
---
|
Nighat-naz-Tv/18-video.Nighat-naz.viral.video.original.here | Nighat-naz-Tv | "2025-05-06T09:49:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-06T09:49:33Z" |
<a href="https://sdu.sk/9Ip"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
chchen/Qwen2.5-7B-Instruct-PsyCourse-fold10 | chchen | "2025-05-06T09:48:43Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-30T22:53:26Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: Qwen2.5-7B-Instruct-PsyCourse-fold10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct-PsyCourse-fold10
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the course-train-fold1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0316
## Model description
More information needed
## Intended uses & limitations
More information needed
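Until the card is filled in, a hedged sketch for loading this LoRA adapter on top of the base model with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: attach the adapter to the base model named above.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "chchen/Qwen2.5-7B-Instruct-PsyCourse-fold10")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```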
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8737 | 0.0770 | 50 | 0.6946 |
| 0.1557 | 0.1539 | 100 | 0.1078 |
| 0.0875 | 0.2309 | 150 | 0.0731 |
| 0.0735 | 0.3078 | 200 | 0.0561 |
| 0.0547 | 0.3848 | 250 | 0.0530 |
| 0.052 | 0.4617 | 300 | 0.0499 |
| 0.047 | 0.5387 | 350 | 0.0469 |
| 0.0618 | 0.6156 | 400 | 0.0442 |
| 0.0357 | 0.6926 | 450 | 0.0448 |
| 0.0314 | 0.7695 | 500 | 0.0402 |
| 0.0476 | 0.8465 | 550 | 0.0388 |
| 0.0367 | 0.9234 | 600 | 0.0375 |
| 0.031 | 1.0004 | 650 | 0.0365 |
| 0.0368 | 1.0773 | 700 | 0.0376 |
| 0.0299 | 1.1543 | 750 | 0.0356 |
| 0.0296 | 1.2312 | 800 | 0.0348 |
| 0.0345 | 1.3082 | 850 | 0.0345 |
| 0.0203 | 1.3851 | 900 | 0.0336 |
| 0.0406 | 1.4621 | 950 | 0.0341 |
| 0.0333 | 1.5391 | 1000 | 0.0332 |
| 0.0327 | 1.6160 | 1050 | 0.0328 |
| 0.0329 | 1.6930 | 1100 | 0.0344 |
| 0.021 | 1.7699 | 1150 | 0.0330 |
| 0.021 | 1.8469 | 1200 | 0.0348 |
| 0.0293 | 1.9238 | 1250 | 0.0337 |
| 0.0229 | 2.0008 | 1300 | 0.0316 |
| 0.0163 | 2.0777 | 1350 | 0.0331 |
| 0.0355 | 2.1547 | 1400 | 0.0345 |
| 0.0129 | 2.2316 | 1450 | 0.0364 |
| 0.0188 | 2.3086 | 1500 | 0.0345 |
| 0.0158 | 2.3855 | 1550 | 0.0369 |
| 0.0158 | 2.4625 | 1600 | 0.0337 |
| 0.0219 | 2.5394 | 1650 | 0.0327 |
| 0.0171 | 2.6164 | 1700 | 0.0321 |
| 0.0266 | 2.6933 | 1750 | 0.0318 |
| 0.0244 | 2.7703 | 1800 | 0.0336 |
| 0.0231 | 2.8472 | 1850 | 0.0317 |
| 0.0186 | 2.9242 | 1900 | 0.0319 |
| 0.0296 | 3.0012 | 1950 | 0.0318 |
| 0.0102 | 3.0781 | 2000 | 0.0352 |
| 0.0088 | 3.1551 | 2050 | 0.0395 |
| 0.0099 | 3.2320 | 2100 | 0.0376 |
| 0.0088 | 3.3090 | 2150 | 0.0391 |
| 0.0138 | 3.3859 | 2200 | 0.0379 |
| 0.008 | 3.4629 | 2250 | 0.0388 |
| 0.0112 | 3.5398 | 2300 | 0.0395 |
| 0.0045 | 3.6168 | 2350 | 0.0386 |
| 0.0127 | 3.6937 | 2400 | 0.0393 |
| 0.0074 | 3.7707 | 2450 | 0.0397 |
| 0.0102 | 3.8476 | 2500 | 0.0399 |
| 0.0105 | 3.9246 | 2550 | 0.0410 |
| 0.0085 | 4.0015 | 2600 | 0.0412 |
| 0.002 | 4.0785 | 2650 | 0.0426 |
| 0.0051 | 4.1554 | 2700 | 0.0453 |
| 0.0024 | 4.2324 | 2750 | 0.0468 |
| 0.0022 | 4.3093 | 2800 | 0.0478 |
| 0.0031 | 4.3863 | 2850 | 0.0489 |
| 0.0042 | 4.4633 | 2900 | 0.0493 |
| 0.0017 | 4.5402 | 2950 | 0.0495 |
| 0.0025 | 4.6172 | 3000 | 0.0499 |
| 0.0025 | 4.6941 | 3050 | 0.0499 |
| 0.0022 | 4.7711 | 3100 | 0.0500 |
| 0.0048 | 4.8480 | 3150 | 0.0500 |
| 0.002 | 4.9250 | 3200 | 0.0501 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
chchen/Qwen2.5-7B-Instruct-PsyCourse-doc-fold1 | chchen | "2025-05-06T09:48:26Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-04T03:06:48Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: Qwen2.5-7B-Instruct-PsyCourse-doc-fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct-PsyCourse-doc-fold1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the course-doc-train-fold1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1222 | 0.3951 | 10 | 0.1259 |
| 0.0678 | 0.7901 | 20 | 0.0613 |
| 0.0283 | 1.1852 | 30 | 0.0366 |
| 0.0251 | 1.5802 | 40 | 0.0289 |
| 0.0179 | 1.9753 | 50 | 0.0249 |
| 0.018 | 2.3704 | 60 | 0.0232 |
| 0.016 | 2.7654 | 70 | 0.0217 |
| 0.0145 | 3.1605 | 80 | 0.0213 |
| 0.0145 | 3.5556 | 90 | 0.0211 |
| 0.0133 | 3.9506 | 100 | 0.0206 |
| 0.0122 | 4.3457 | 110 | 0.0204 |
| 0.0176 | 4.7407 | 120 | 0.0204 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
chchen/Qwen2.5-7B-Instruct-PsyCourse-doc-fold6 | chchen | "2025-05-06T09:47:49Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-04T05:08:36Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: Qwen2.5-7B-Instruct-PsyCourse-doc-fold6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct-PsyCourse-doc-fold6
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the course-doc-train-fold6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1126 | 0.3951 | 10 | 0.1260 |
| 0.0588 | 0.7901 | 20 | 0.0599 |
| 0.0306 | 1.1852 | 30 | 0.0334 |
| 0.0232 | 1.5802 | 40 | 0.0254 |
| 0.0189 | 1.9753 | 50 | 0.0215 |
| 0.0168 | 2.3704 | 60 | 0.0198 |
| 0.0201 | 2.7654 | 70 | 0.0184 |
| 0.0171 | 3.1605 | 80 | 0.0180 |
| 0.0153 | 3.5556 | 90 | 0.0177 |
| 0.0131 | 3.9506 | 100 | 0.0173 |
| 0.0143 | 4.3457 | 110 | 0.0173 |
| 0.0156 | 4.7407 | 120 | 0.0172 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
narendra0892/autotrain-p7nex-vq4kc | narendra0892 | "2025-05-06T09:47:39Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain",
"text-generation-inference",
"dataset:narendra0892/ai-product-crc-training",
"dataset:narendra0892/crc-ai-csv",
"base_model:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"base_model:finetune:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-05T12:27:30Z" | ---
library_name: transformers
tags:
- autotrain
- text-generation-inference
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
- openai-community/gpt2
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging face auto train'
- 'search_query: i love autotrain'
pipeline_tag: text-generation
datasets:
- narendra0892/ai-product-crc-training
- narendra0892/crc-ai-csv
---
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
No validation metrics available
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
``` |
18-Tutorial-Jobz-Hunting-Sajal-Malik-Viral/Original.Clip.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaks.official | 18-Tutorial-Jobz-Hunting-Sajal-Malik-Viral | "2025-05-06T09:45:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-06T09:45:47Z" |
<a href="https://sdu.sk/9Ip"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
Flo0620/Qwen2_5_7B_r64_a128_d0_2 | Flo0620 | "2025-05-06T09:45:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-04-26T19:31:42Z" | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r64_a128_d0_2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r64_a128_d0_2
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r64_a128_d0_2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
robb-0/kawaii-chibi-avatar | robb-0 | "2025-05-06T09:43:52Z" | 0 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"🐻✨",
"en",
"base_model:stabilityai/sdxl-turbo",
"base_model:adapter:stabilityai/sdxl-turbo",
"license:cc-by-4.0",
"region:us"
] | text-to-image | "2025-05-05T09:29:28Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- 🐻✨
widget:
- text: >-
one chibi avatar cute chrismastree, flat 3d, colorful, inside a red chrismas
circle, pouring snow, gifts, white reflective background,
parameters:
negative_prompt: lowres, ugly, lowquality, blurry,
output:
url: images/859699241780561155.png
- text: >-
one chibi avatar cute unicorn, flat 3d, colorful, inside a raibow circle,
white reflective background,
parameters:
negative_prompt: lowres, ugly, lowquality, blurry,
output:
url: images/859699167692387460.png
- text: >-
one chibi avatar cute puppy, flat 3d, colorful, inside a raibow circle,
white reflective background,
parameters:
negative_prompt: lowres, ugly, lowquality, blurry,
output:
url: images/859699057096965612.png
- text: >-
one chibi avatar cute scifihero, flat 3d, colorful, inside a raibow circle,
white reflective background,
parameters:
negative_prompt: lowres, ugly, lowquality, blurry,
output:
url: images/859695642597932672.png
- text: >-
one chibi avatar cute scifihero, flat 3d, colorful, inside a raibow circle,
white reflective background,
parameters:
negative_prompt: lowres, ugly, lowquality, blurry,
output:
url: images/859695642597932673.png
- text: >-
one chibi avatar cute animal, flat 3d, colorful, inside a raibow circle,
white reflective background,
parameters:
negative_prompt: lowres, ugly, lowquality, blurry,
output:
url: images/859694216668776778.png
- text: >-
one chibi avatar cute animal, flat 3d, colorful, inside a raibow circle,
white reflective background,
parameters:
negative_prompt: lowres, ugly, lowquality, blurry,
output:
url: images/185f70fb-a070-1a4e-486b-7f8dfdc2b795.jpeg
- text: >-
one chibi avatar cute animal, flat 3d, colorful, inside a raibow circle,
white reflective background,
parameters:
negative_prompt: lowres, ugly, lowquality, blurry,
output:
url: images/ea5f61b7-7ec0-d2d7-ef6b-c2b407a3fb1b.jpeg
base_model: stabilityai/sdxl-turbo
instance_prompt: chibi avatar
license: cc-by-4.0
pipeline_tag: text-to-image
language:
- en
---
# Chibi Avatar: Adorable Characters for Any Occasion 🐻✨
<Gallery />
### Theme
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6740a691ddc2c8e208a41102/944HOxHk5wo5mpHD9SGZz.mpga"></audio>
## Model description
#### Samples generated with LoRA set at 0.8, Euler, 30 steps. SDXL 1.0, SDXL-VAE-16b-Fix
### Model Card: Chibi Avatar 🐻✨
#### 1. Model Name
Chibi Avatar: Adorable Characters for Any Occasion
#### 2. Description
The Chibi Avatar LoRA is designed to generate adorable, flat 3D chibi-style avatars with a vibrant and playful aesthetic. Whether you’re looking to create cute characters, animals, robots, vehicles, or seasonal designs, this model ensures consistency in style, featuring a colorful border and a glossy, jelly-like effect that enhances the tactile appeal of the designs. Perfect for use as stickers, magnets, apparel designs, pins, and more!
#### 3. Key Features
- Versatile Subjects: Create avatars of characters, animals, robots, vehicles, and more.
- Thematic Flexibility: Easily adapt to different themes, including holidays, seasons, and events.
- Consistent Style: Maintains a cohesive chibi aesthetic with flat 3D effects, glossy textures, and colorful borders.
- Ideal for Physical Products: Perfect for magnets, stickers, apparel, pins, and other tactile applications.
#### 4. Usage Instructions
Use the following prompts to get started:
##### Basic Prompt
```
one chibi avatar cute (subject here), flat 3D, colorful, inside a rainbow circle, white reflective background
```
##### Examples
- For a Robot Chibi:
```
one chibi avatar cute robot, flat 3D, colorful, inside a rainbow circle, white reflective background
```
- For a Christmas Chibi:
```
one chibi avatar cute Christmas character, flat 3D, colorful, inside a red circle, white reflective background, wearing a Santa hat, standing near a Christmas tree with gifts, snowflakes falling, festive mood
```
- For a Vehicle Chibi:
```
one chibi avatar cute car, flat 3D, colorful, inside a rainbow circle, white reflective background
```
- For a Fantasy Creature:
```
one chibi avatar cute dragon, flat 3D, colorful, inside a rainbow circle, white reflective background
```
##### Customization Tips
- Themes: Add keywords like `Christmas`, `Halloween`, `Easter`, `Valentine’s Day`, or any other theme to tailor the avatar to specific occasions.
- Styles: Enhance the design with keywords like `kawaii`, `anime`, `cartoon`, `plastic`, `jelly`, `glossy`, or `reflective`.
- Backgrounds: Customize the border with keywords like `rainbow circle`, `stars`, `hearts`, or `geometric shapes`.
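The card itself ships no inference code; below is a minimal diffusers sketch consistent with the gallery note above (LoRA at 0.8, Euler, 30 steps, SDXL 1.0) — the base-model ID and scheduler choice are assumptions, since the repo metadata lists sdxl-turbo:

```python
import torch
from diffusers import AutoPipelineForText2Image, EulerDiscreteScheduler

# Assumption: SDXL 1.0 base per the gallery note (repo metadata lists sdxl-turbo).
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("robb-0/kawaii-chibi-avatar")
pipe.fuse_lora(lora_scale=0.8)  # LoRA strength 0.8, matching the sample settings

image = pipe(
    "one chibi avatar cute robot, flat 3D, colorful, inside a rainbow circle, "
    "white reflective background",
    negative_prompt="lowres, ugly, lowquality, blurry",
    num_inference_steps=30,
).images[0]
image.save("chibi_robot.png")
```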
#### 5. Training Details
- Optimizer: AdamW8bit
- Scheduler: Cosine with Restarts
- Epochs: 10
- Batches: 4
- Total Steps: 330
- Dataset: Consists of 13 high-quality images within the chibi theme, focusing on maintaining consistency in style, including the flat 3D effect, rainbow border, and glossy textures.
#### 6. Gallery
The gallery showcases examples of different avatar types:
- Characters
- Animals
- Robots
- Vehicles
- Seasonal designs (e.g., Christmas, Halloween, Easter)
#### 7. License
License: CC BY 4.0
Attribution:
Kawaii Chibi Avatar © 2025 by Robb-0 is licensed under CC BY 4.0
#### 8. Applications
- Stickers
- Refrigerator magnets
- Apparel (T-shirts, hoodies, tote bags)
- Pins & badges
- Phone cases
- Keychains
- Ornaments
- Gift cards
- Digital avatars
🎨✨
# Trigger words
You should use `chibi avatar` to trigger the image generation.
# Download model
Weights for this model are available in Safetensors format.
[Download](/robb-0/kawaii-chibi-avatar/tree/main) them in the Files & versions tab. |
narendra0892/autotrain-7y7k7-y9l1z | narendra0892 | "2025-05-06T09:43:15Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain",
"dataset:narendra0892/ai-product-crc-training",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-05-06T09:43:06Z" |
---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- autotrain
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging face auto train'
- 'search_query: i love autotrain'
pipeline_tag: sentence-similarity
datasets:
- narendra0892/ai-product-crc-training
---
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
No validation metrics available
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
```
|
BKM1804/opt-125m-aa2ec774-dfff-41e4-b714-0afbde7b6302-dpo-tuned-only | BKM1804 | "2025-05-06T09:40:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dpo",
"arxiv:2305.18290",
"base_model:facebook/opt-125m",
"base_model:finetune:facebook/opt-125m",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T09:40:13Z" | ---
base_model: facebook/opt-125m
library_name: transformers
model_name: opt-125m-aa2ec774-dfff-41e4-b714-0afbde7b6302-dpo-tuned-only
tags:
- generated_from_trainer
- trl
- sft
- dpo
licence: license
---
# Model Card for opt-125m-aa2ec774-dfff-41e4-b714-0afbde7b6302-dpo-tuned-only
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="BKM1804/opt-125m-aa2ec774-dfff-41e4-b714-0afbde7b6302-dpo-tuned-only", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/buikhacminh1804/sn56-dpo-train/runs/z5ggkp94)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
christopherxzyx/StrangerThings_Llama-3-8B_v3 | christopherxzyx | "2025-05-06T09:38:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-05-06T09:37:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cvoffer/7ebd5efe-0051-446b-96ea-08c18d434137 | cvoffer | "2025-05-06T09:36:06Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T09:29:29Z" | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ebd5efe-0051-446b-96ea-08c18d434137
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: JackFram/llama-160m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 6dd0167b718e6788_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6dd0167b718e6788_train_data.json
type:
field_instruction: prompt
field_output: init_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: cvoffer/7ebd5efe-0051-446b-96ea-08c18d434137
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/6dd0167b718e6788_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a2f431ac-1e99-4211-afe4-4b4c12e2a817
wandb_project: s56-28
wandb_run: your_name
wandb_runid: a2f431ac-1e99-4211-afe4-4b4c12e2a817
warmup_steps: 25
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7ebd5efe-0051-446b-96ea-08c18d434137
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2099
## Model description
More information needed
## Intended uses & limitations
More information needed
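While usage details are pending, a minimal inference sketch is shown below. This is an assumption-laden example, not an official one: it presumes the LoRA adapter in this repository applies on top of the JackFram/llama-160m base model named in the training config.

```python
# Unofficial sketch: load the base model, then apply this LoRA adapter.
# Assumes the adapter weights in this repo match the base named above.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("JackFram/llama-160m")
model = PeftModel.from_pretrained(base, "cvoffer/7ebd5efe-0051-446b-96ea-08c18d434137")
tokenizer = AutoTokenizer.from_pretrained("JackFram/llama-160m")

inputs = tokenizer("Question: What is distillation?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```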
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3588 | 0.1200 | 500 | 2.2099 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
filipesantoscv11/4b30419e-8af4-4328-8a0e-249cba3f540d | filipesantoscv11 | "2025-05-06T09:33:54Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gptj",
"axolotl",
"generated_from_trainer",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:adapter:furiosa-ai/mlperf-gpt-j-6b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T08:16:39Z" | ---
library_name: peft
base_model: furiosa-ai/mlperf-gpt-j-6b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4b30419e-8af4-4328-8a0e-249cba3f540d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: furiosa-ai/mlperf-gpt-j-6b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0e95d9fa1b00798e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0e95d9fa1b00798e_train_data.json
type:
field_instruction: startphrase
field_output: gold-ending
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/4b30419e-8af4-4328-8a0e-249cba3f540d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0e95d9fa1b00798e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7e6d5371-3908-4a82-95de-b2ca5550b3d5
wandb_project: s56-6
wandb_run: your_name
wandb_runid: 7e6d5371-3908-4a82-95de-b2ca5550b3d5
warmup_steps: 30
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4b30419e-8af4-4328-8a0e-249cba3f540d
This model is a fine-tuned version of [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8938 | 0.0455 | 500 | 2.9054 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
YiXin-AILab/YiXin-Distill-Qwen-72B | YiXin-AILab | "2025-05-06T09:32:54Z" | 26 | 27 | null | [
"safetensors",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-72B",
"base_model:finetune:Qwen/Qwen2.5-72B",
"license:apache-2.0",
"region:us"
] | text-generation | "2025-03-13T10:11:44Z" | ---
license: apache-2.0
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-72B
pipeline_tag: text-generation
---
<div style="text-align: center;">
<h1>YiXin-Distill-Qwen-72B</h1>
<img src="./fig/logo.png" alt="YiXin Logo">
</div>
## Model Overview
**YiXin-Distill-Qwen-72B: A High-Performance Distilled Model for Mathematical and General Reasoning**, derived from Qwen2.5-72B using reinforcement learning. It is specifically optimized for mathematical reasoning and general knowledge tasks. Leveraging advanced distillation techniques, this model enhances reasoning capabilities while maintaining computational efficiency. Built upon the robust Qwen model foundation, it aims to achieve state-of-the-art performance across various benchmark evaluations. Our benchmark evaluations demonstrate that YiXin-Distill-Qwen-72B delivers strong performance, improving over comparable distilled models on key mathematical and general reasoning tasks, with observed average gains of 5 to 11 percentage points.
## Training Details
### Data Collection and Processing
YiXin-Distill-Qwen-72B is trained on a carefully curated, high-quality dataset designed to improve mathematical reasoning and general knowledge comprehension. The data pipeline follows a structured multi-stage approach to ensure optimal model performance while minimizing noise.
#### 1. **Dataset Aggregation**
- Built upon currently available high-quality open-source datasets.
- Covers multiple domains, including **mathematics and general knowledge**.
#### 2. **Data Filtering and Quality Assessment**
We implemented a comprehensive quality control framework utilizing DeepSeek-R1 as an LLM judge to evaluate data quality. The assessment criteria included:
- **Difficulty Level**: Data samples were categorized into simple, moderate, and difficult tiers to ensure balanced representation across complexity levels.
- **Ground Truth Verification:** We employed rigorous verification processes to ensure the correctness of answers within the dataset.
- **Quality Scoring**: Each prompt-response pair was evaluated based on its complexity, instructional clarity, and potential to enhance reasoning abilities.
- **Response Length Analysis**: Responses that failed to meet minimum length requirements were excluded, as they typically lacked sufficient information to provide meaningful training signals.
#### 3. **Validation and Refinement**
For subjective answers, we employed an LLM-based judge to validate response quality and relevance.
Mathematical content underwent additional validation procedures:
- Mathematical answers and their corresponding solutions were systematically validated.
- A critic model assessed each solution process to ensure logical consistency and correctness of mathematical reasoning.
- Solutions with logical gaps or incorrect reasoning patterns were either corrected or removed from the training set.
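To make the pipeline concrete, here is an illustrative sketch of such an LLM-judge filtering pass. The `llm_judge` object and its returned fields are hypothetical stand-ins for the DeepSeek-R1-based judging described above, not a released API.

```python
# Hypothetical filtering pass mirroring the criteria above:
# difficulty tiering, ground-truth verification, quality scoring,
# and a minimum response length.
def filter_samples(samples, llm_judge, min_quality=0.7, min_len=64):
    kept = []
    for s in samples:
        if len(s["response"]) < min_len:
            continue  # too short to carry a useful training signal
        verdict = llm_judge.evaluate(prompt=s["prompt"], response=s["response"])
        if not verdict["answer_correct"]:
            continue  # fails ground-truth verification
        if verdict["quality"] >= min_quality:
            kept.append({**s, "difficulty": verdict["difficulty"]})
    return kept
```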
## Distillation Process
YiXin-Distill-Qwen-72B adopts a progressive two-stage distillation approach, iteratively refining model performance through intelligent data selection and optimization. The training framework continuously identifies and removes high-confidence samples—i.e., cases where the model already excels—to mitigate overfitting, while iteratively refining low-confidence samples to strengthen weak reasoning patterns. By leveraging multiple fine-tuning cycles and quality assessments, the model achieves a balanced enhancement of efficiency and accuracy across mathematical and general reasoning benchmarks.
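As a rough illustration of the selection logic (the thresholds and the `confidence` helper below are assumptions, not published details), one round of data selection might look like:

```python
# Sketch of one confidence-driven selection round: drop what the model
# already masters, flag weak samples for refinement, keep the rest.
def select_for_next_round(model, samples, high=0.95, low=0.5):
    retained, needs_refinement = [], []
    for s in samples:
        c = confidence(model, s)  # e.g., mean token probability on the target
        if c >= high:
            continue  # high-confidence sample: remove to mitigate overfitting
        if c <= low:
            needs_refinement.append(s)  # weak reasoning pattern: refine
        else:
            retained.append(s)
    return retained, needs_refinement
```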
## Evaluation Results
YiXin-Distill-Qwen-72B was benchmarked against multiple models, including QwQ-32B, DeepSeek-R1-Distill-Qwen-32B, DeepSeek-R1-Distill-Llama-70B, and DeepSeek-R1, across mathematical reasoning and general knowledge tasks:

| Metric | QwQ-32B | DeepSeek-R1-Distill-Qwen-32B | DeepSeek-R1-Distill-Llama-70B | DeepSeek-R1 | YiXin-Distill-Qwen-72B |
|---------------|-------------|-----------------------------|------------------------------|-------------|------------------------|
| MATH-500 | 96.2 | 91.2 | 94.0 | 94.4 | **97.0** |
| GPQA-Diamond | 62.6 | 62.1 | 62.6 | **74.8** | 69.2 |
| AIME-24 | 73.3 | 66.7 | 70.0 | **80.0** | 76.7 |
| AIME-25 | 63.3 | 60.0 | 46.7 | 63.3 | **73.3** |
| MMLU-Pro | 86.2 | 78.3 | 80.3 | 92.4 | **92.6** |
| **Average** | 76.3 | 71.7 | 70.7 | 81.0 | **81.8** |
YiXin-Distill-Qwen-72B demonstrates significant improvements across mathematical reasoning and general knowledge tasks.
## How to Run Locally
### Hugging Face's Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "YiXin-AILab/YiXin-Distill-Qwen-72B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "8+8=?"
messages = [
{"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### vLLM or SGLang
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve YiXin-AILab/YiXin-Distill-Qwen-72B --tensor-parallel-size 4 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang):
```bash
python3 -m sglang.launch_server --model YiXin-AILab/YiXin-Distill-Qwen-72B --trust-remote-code --tp 4 --port 8000
```
Then you can access the Chat API by:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "YiXin-AILab/YiXin-Distill-Qwen-72B",
"messages": [
{"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
{"role": "user", "content": "8+8=?"}
]
}'
```
## Limitations
Despite its strong performance, YiXin-Distill-Qwen-72B has certain limitations:
- **Potential Security Concerns:** YiXin-Distill-Qwen-72B may be vulnerable to adversarial attacks, prompt injection, and data leakage. Proper security measures are recommended for sensitive deployments.
- **Domain-Specific Biases:** Performance may vary across different domains, particularly those underrepresented in the training data.
- **Potential Loss in Distillation:** Some nuanced reasoning capabilities from the teacher model may be reduced during the distillation process.
## Citation
If you use YiXin-Distill-Qwen-72B in your research, please cite this work appropriately:
```bibtex
@misc{yixindistillqwen-72b,
title={YiXin-Distill-Qwen-72B: A High-Performance Distilled Model for Mathematical and General Reasoning},
author={YiXin-AILab},
year={2025},
url={https://huggingface.co/YiXin-AILab/YiXin-Distill-Qwen-72B}
}
```
## Acknowledgments
We acknowledge the contributions of the open-source community and researchers who have developed and maintained the Qwen and DeepSeek models. Their work has significantly advanced the field of large language model distillation and reasoning capabilities.
|
brsvaaa/vectorizer.joblib | brsvaaa | "2025-05-06T09:28:51Z" | 0 | 0 | null | [
"joblib",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-05-06T09:28:35Z" | ---
license: cc-by-nc-4.0
---
|
dimasik2987/d42e87c3-c014-4029-90f0-d99d6c19c5ae | dimasik2987 | "2025-05-06T09:28:22Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T09:25:38Z" | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d42e87c3-c014-4029-90f0-d99d6c19c5ae
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: JackFram/llama-160m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 6dd0167b718e6788_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6dd0167b718e6788_train_data.json
type:
field_instruction: prompt
field_output: init_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik2987/d42e87c3-c014-4029-90f0-d99d6c19c5ae
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/6dd0167b718e6788_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a2f431ac-1e99-4211-afe4-4b4c12e2a817
wandb_project: s56-28
wandb_run: your_name
wandb_runid: a2f431ac-1e99-4211-afe4-4b4c12e2a817
warmup_steps: 20
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d42e87c3-c014-4029-90f0-d99d6c19c5ae
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2720
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5362 | 0.1536 | 400 | 2.2720 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
FabianOkky/physics-mentor-results | FabianOkky | "2025-05-06T09:27:53Z" | 29 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-j-6b",
"base_model:adapter:EleutherAI/gpt-j-6b",
"region:us"
] | null | "2025-04-14T18:29:54Z" | ---
base_model: EleutherAI/gpt-j-6B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
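Pending official instructions, a minimal sketch is given below. It assumes this repository holds a PEFT (LoRA) adapter for the EleutherAI/gpt-j-6B base model listed in the card metadata.

```python
# Unofficial sketch: load the adapter together with its gpt-j-6B base.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("FabianOkky/physics-mentor-results")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")

prompt = "Explain Newton's second law in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```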
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2.dev0 |
yalhessi/lemexp-task1-v2-template_small_nodefs_old_defs-deepseek-coder-1.3b-base-ddp-8lr-v2 | yalhessi | "2025-05-06T09:27:20Z" | 30 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"region:us"
] | null | "2025-05-04T05:53:10Z" | ---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v2-template_small_nodefs_old_defs-deepseek-coder-1.3b-base-ddp-8lr-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lemexp-task1-v2-template_small_nodefs_old_defs-deepseek-coder-1.3b-base-ddp-8lr-v2
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.438 | 0.2002 | 721 | 0.3096 |
| 0.3098 | 0.4003 | 1442 | 0.2840 |
| 0.2714 | 0.6005 | 2163 | 0.2634 |
| 0.2619 | 0.8007 | 2884 | 0.2561 |
| 0.2529 | 1.0008 | 3605 | 0.2385 |
| 0.2363 | 1.2010 | 4326 | 0.2378 |
| 0.2334 | 1.4012 | 5047 | 0.2336 |
| 0.2275 | 1.6013 | 5768 | 0.2318 |
| 0.2263 | 1.8015 | 6489 | 0.2268 |
| 0.223 | 2.0017 | 7210 | 0.2194 |
| 0.2133 | 2.2018 | 7931 | 0.2129 |
| 0.2104 | 2.4020 | 8652 | 0.2150 |
| 0.2073 | 2.6022 | 9373 | 0.2089 |
| 0.206 | 2.8023 | 10094 | 0.2061 |
| 0.2045 | 3.0025 | 10815 | 0.2018 |
| 0.1949 | 3.2027 | 11536 | 0.1990 |
| 0.1919 | 3.4028 | 12257 | 0.2000 |
| 0.1917 | 3.6030 | 12978 | 0.1974 |
| 0.1893 | 3.8032 | 13699 | 0.1960 |
| 0.189 | 4.0033 | 14420 | 0.1947 |
| 0.1783 | 4.2035 | 15141 | 0.1881 |
| 0.1759 | 4.4037 | 15862 | 0.1905 |
| 0.1767 | 4.6038 | 16583 | 0.1871 |
| 0.1761 | 4.8040 | 17304 | 0.1867 |
| 0.1757 | 5.0042 | 18025 | 0.1866 |
| 0.1631 | 5.2043 | 18746 | 0.1840 |
| 0.1642 | 5.4045 | 19467 | 0.1840 |
| 0.1629 | 5.6047 | 20188 | 0.1791 |
| 0.1626 | 5.8048 | 20909 | 0.1781 |
| 0.1621 | 6.0050 | 21630 | 0.1761 |
| 0.1535 | 6.2052 | 22351 | 0.1774 |
| 0.1506 | 6.4053 | 23072 | 0.1769 |
| 0.1507 | 6.6055 | 23793 | 0.1700 |
| 0.1507 | 6.8057 | 24514 | 0.1722 |
| 0.1494 | 7.0058 | 25235 | 0.1688 |
| 0.141 | 7.2060 | 25956 | 0.1671 |
| 0.1404 | 7.4062 | 26677 | 0.1681 |
| 0.1388 | 7.6063 | 27398 | 0.1657 |
| 0.1368 | 7.8065 | 28119 | 0.1629 |
| 0.1365 | 8.0067 | 28840 | 0.1610 |
| 0.1238 | 8.2068 | 29561 | 0.1599 |
| 0.1253 | 8.4070 | 30282 | 0.1577 |
| 0.1253 | 8.6072 | 31003 | 0.1566 |
| 0.127 | 8.8073 | 31724 | 0.1567 |
| 0.124 | 9.0075 | 32445 | 0.1571 |
| 0.1119 | 9.2077 | 33166 | 0.1584 |
| 0.1113 | 9.4078 | 33887 | 0.1570 |
| 0.1125 | 9.6080 | 34608 | 0.1525 |
| 0.1121 | 9.8082 | 35329 | 0.1563 |
| 0.1121 | 10.0083 | 36050 | 0.1559 |
| 0.099 | 10.2085 | 36771 | 0.1581 |
| 0.0986 | 10.4087 | 37492 | 0.1541 |
| 0.0998 | 10.6088 | 38213 | 0.1531 |
| 0.0992 | 10.8090 | 38934 | 0.1530 |
| 0.0981 | 11.0092 | 39655 | 0.1546 |
| 0.0909 | 11.2093 | 40376 | 0.1566 |
| 0.0887 | 11.4095 | 41097 | 0.1568 |
| 0.0895 | 11.6097 | 41818 | 0.1546 |
| 0.0887 | 11.8098 | 42539 | 0.1551 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
lgcharpe/babylm-baseline-10m-gpt-bert-mixed | lgcharpe | "2025-05-06T09:25:34Z" | 8 | 0 | null | [
"pytorch",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | "2025-05-01T19:28:13Z" | ---
license: apache-2.0
---
|
brsvaaa/Causal_Oversimplification_model.keras | brsvaaa | "2025-05-06T09:23:37Z" | 0 | 0 | keras | [
"keras",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-05-06T09:23:00Z" | ---
license: cc-by-nc-4.0
---
|
worstchan/EAT-large_epoch20_pretrain | worstchan | "2025-05-06T09:23:35Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"eat",
"feature-extraction",
"Audio",
"SSL",
"EAT",
"custom_code",
"arxiv:2401.03497",
"license:mit",
"region:us"
] | feature-extraction | "2025-05-03T06:31:59Z" | ---
license: mit
tags:
- Audio
- SSL
- EAT
library_name: transformers
---
# EAT-large (Epoch 20, Pre-trained Checkpoint)
This is the **pre-trained EAT-large model** at epoch 20, trained on the AS-2M dataset using the EAT framework for audio self-supervised learning.
It offers efficient feature extraction and can also serve as a strong initialization for fine-tuning on a wide range of downstream audio understanding tasks such as classification and captioning.
For more details on the EAT framework, please refer to the [GitHub repository](https://github.com/cwx-worst-one/EAT) and our paper [EAT: Self-Supervised Pre-Training with Efficient Audio Transformer](https://arxiv.org/abs/2401.03497).
## 🔧 Usage
You can load and use the model for feature extraction directly via Hugging Face Transformers:
```python
import torchaudio
import torch
import soundfile as sf
import numpy as np
from transformers import AutoModel
model_id = "worstchan/EAT-large_epoch20_pretrain"
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval().cuda()
source_file = "/path/to/input.wav"
target_file = "/path/to/output.npy"
target_length = 1024 # Recommended: 1024 for 10s audio
norm_mean = -4.268
norm_std = 4.569
# Load and resample audio
wav, sr = sf.read(source_file)
waveform = torch.tensor(wav).float().cuda()
if sr != 16000:
waveform = torchaudio.functional.resample(waveform, sr, 16000)
# Normalize and convert to mel-spectrogram
waveform = waveform - waveform.mean()
mel = torchaudio.compliance.kaldi.fbank(
waveform.unsqueeze(0),
htk_compat=True,
sample_frequency=16000,
use_energy=False,
window_type='hanning',
num_mel_bins=128,
dither=0.0,
frame_shift=10
).unsqueeze(0)
# Pad or truncate
n_frames = mel.shape[1]
if n_frames < target_length:
mel = torch.nn.ZeroPad2d((0, 0, 0, target_length - n_frames))(mel)
else:
mel = mel[:, :target_length, :]
# Normalize
mel = (mel - norm_mean) / (norm_std * 2)
mel = mel.unsqueeze(0).cuda() # shape: [1, 1, T, F]
# Extract features
with torch.no_grad():
feat = model.extract_features(mel)
feat = feat.squeeze(0).cpu().numpy()
np.save(target_file, feat)
print(f"Feature shape: {feat.shape}")
print(f"Saved to: {target_file}")
```
## 📌 Notes
The model supports both **frame-level** (~50 Hz) and **utterance-level** (CLS token) representations.
See the [feature extraction guide](https://github.com/cwx-worst-one/EAT/tree/main/feature_extract) for more instructions.
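As an illustration, the saved features can be split into the two views. This assumes the CLS embedding occupies the first position of the sequence returned by `extract_features`; check the feature extraction guide above if your layout differs.

```python
# Illustrative split of saved EAT features into utterance- and
# frame-level views (assumes the CLS token is at index 0).
import numpy as np

feat = np.load("/path/to/output.npy")  # shape: [seq_len, dim]
utterance_embedding = feat[0]          # CLS token: one vector per clip
frame_embeddings = feat[1:]            # ~50 Hz frame-level features
print(utterance_embedding.shape, frame_embeddings.shape)
```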
## 📚 Citation
If you find this model useful, please consider citing our [paper](https://arxiv.org/abs/2401.03497):
```bibtex
@article{chen2024eat,
title={EAT: Self-supervised pre-training with efficient audio transformer},
author={Chen, Wenxi and Liang, Yuzhe and Ma, Ziyang and Zheng, Zhisheng and Chen, Xie},
journal={arXiv preprint arXiv:2401.03497},
year={2024}
}
```
 |
brsvaaa/Black-and-White_Fallacy_model.keras | brsvaaa | "2025-05-06T09:22:10Z" | 0 | 0 | keras | [
"keras",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-05-06T09:21:54Z" | ---
license: cc-by-nc-4.0
---
|
fats-fme/9d3460d4-290d-44d8-8c0c-cfc257b857c7 | fats-fme | "2025-05-06T09:20:01Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-05-06T04:19:58Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d3460d4-290d-44d8-8c0c-cfc257b857c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fe6776348ff3d7eb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fe6776348ff3d7eb_train_data.json
type:
field_instruction: en
field_output: pt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/9d3460d4-290d-44d8-8c0c-cfc257b857c7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 130GB
max_steps: 100
micro_batch_size: 1
mlflow_experiment_name: /tmp/fe6776348ff3d7eb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ea5198f-79c2-4a7b-a6a5-fb1336637bdd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ea5198f-79c2-4a7b-a6a5-fb1336637bdd
warmup_steps: 200
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 9d3460d4-290d-44d8-8c0c-cfc257b857c7
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 4.2365 |
| 1.6397 | 0.0034 | 100 | 1.8968 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
niklasm222/qwen2.5-3b-1.75k-prolog-sp-struct-rwd1-silvery-sweep-1 | niklasm222 | "2025-05-06T09:19:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T09:18:14Z" | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
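A quick-start sketch (unofficial; assumes the uploaded safetensors load as a standard Transformers causal LM):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "niklasm222/qwen2.5-3b-1.75k-prolog-sp-struct-rwd1-silvery-sweep-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Write a Prolog fact stating that Ada likes logic."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```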
|