modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-28 12:29:09) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 534 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-28 12:26:21) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
HKReporter/ECTEL-2025-llama3-fold4-CU0
|
HKReporter
| 2025-06-20T04:09:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:08:53Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
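The card leaves this section blank; below is a minimal sketch that assumes only what the repo metadata states: a PEFT adapter trained on top of `unsloth/llama-3-8b-Instruct-bnb-4bit` (loading the 4-bit base requires `bitsandbytes`).
```python
# Minimal sketch (not from the card): attach this adapter to its declared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "HKReporter/ECTEL-2025-llama3-fold4-CU0")
```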
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
HKReporter/ECTEL-2025-llama3-fold3-CU1
|
HKReporter
| 2025-06-20T04:08:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:08:13Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
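As with the sibling fold above, a minimal loading sketch based only on the repo metadata (a PEFT adapter on the 4-bit Llama-3 base; `bitsandbytes` is required):
```python
# Minimal sketch (not from the card): attach this adapter to its declared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "HKReporter/ECTEL-2025-llama3-fold3-CU1")
```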
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
HKReporter/ECTEL-2025-llama3-fold2-CU3
|
HKReporter
| 2025-06-20T04:07:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:07:40Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
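Again, a minimal sketch assuming only the declared base model and adapter format:
```python
# Minimal sketch (not from the card): attach this adapter to its declared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "HKReporter/ECTEL-2025-llama3-fold2-CU3")
```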
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
HKReporter/ECTEL-2025-llama3-fold1-CU4
|
HKReporter
| 2025-06-20T04:07:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:06:52Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
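A minimal loading sketch for this fold's adapter, assuming only the repo metadata:
```python
# Minimal sketch (not from the card): attach this adapter to its declared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "HKReporter/ECTEL-2025-llama3-fold1-CU4")
```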
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
HKReporter/ECTEL-2025-llama3-fold1-CU2
|
HKReporter
| 2025-06-20T04:06:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:06:27Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
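As for the other folds in this series, a minimal sketch based only on the repo metadata:
```python
# Minimal sketch (not from the card): attach this adapter to its declared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "HKReporter/ECTEL-2025-llama3-fold1-CU2")
```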
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q5_K_M-GGUF
|
Triangle104
| 2025-06-20T04:03:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-14B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-14B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T04:02:52Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-14B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-14B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-14B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-14B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-14b-abliterated-v2-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-14b-abliterated-v2-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-14b-abliterated-v2-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-14b-abliterated-v2-q5_k_m.gguf -c 2048
```
|
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-42-2025-06-20
|
morturr
| 2025-06-20T04:03:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T04:02:53Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-42-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-42-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
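A hedged reconstruction of how these values map onto `transformers.TrainingArguments`; `output_dir` and anything not listed above are assumptions, not from the card:
```python
# Hedged reconstruction (not from the card) of the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # assumed; the card does not state it
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 per device x 4 steps = 32 total train batch size
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```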
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
mynamerahulkumar/sft-tiny-chatbot
|
mynamerahulkumar
| 2025-06-20T04:00:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T03:59:33Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: sft-tiny-chatbot
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mynamerahulkumar/sft-tiny-chatbot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
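The card does not include the training script; the following is a minimal TRL SFT sketch under stated assumptions (the dataset name is a hypothetical placeholder, and only the base model comes from the card):
```python
# Minimal SFT sketch with TRL; dataset name is hypothetical.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # hypothetical placeholder dataset
trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-tiny-chatbot"),
)
trainer.train()
```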
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Sharing22/aab_c5
|
Sharing22
| 2025-06-20T03:47:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:43:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
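The card leaves this blank; a minimal sketch using the repo's declared `text-generation` pipeline tag:
```python
# Minimal sketch (not from the card): generate text with this checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="Sharing22/aab_c5")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```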
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SYoungT/1B-8-pt2
|
SYoungT
| 2025-06-20T03:45:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T03:44:28Z |
---
base_model: unsloth/llama-3.2-1b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SYoungT
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
greenkwd/lr0.0001_bs16_0620_0942
|
greenkwd
| 2025-06-20T03:41:58Z | 0 | 0 | null |
[
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"region:us"
] |
image-segmentation
| 2025-06-20T03:41:54Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: lr0.0001_bs16_0620_0942
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr0.0001_bs16_0620_0942
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the greenkwd/upwellingdetection_SST dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1335
- Mean Iou: 0.8871
- Mean Accuracy: 0.9459
- Overall Accuracy: 0.9536
- Accuracy Land: 0.9552
- Accuracy Upwelling: 0.9692
- Accuracy Not Upwelling: 0.9133
- Iou Land: 0.9542
- Iou Upwelling: 0.9274
- Iou Not Upwelling: 0.7796
- Dice Macro: 0.9383
- Dice Micro: 0.9536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Land | Accuracy Upwelling | Accuracy Not Upwelling | Iou Land | Iou Upwelling | Iou Not Upwelling | Dice Macro | Dice Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------:|:------------------:|:----------------------:|:--------:|:-------------:|:-----------------:|:----------:|:----------:|
| 1.0882 | 0.4 | 20 | 1.0699 | 0.1883 | 0.4295 | 0.3022 | 0.0162 | 0.3817 | 0.8905 | 0.0149 | 0.3664 | 0.1835 | 0.2919 | 0.3022 |
| 0.9223 | 0.8 | 40 | 0.8895 | 0.5561 | 0.7282 | 0.7212 | 0.6576 | 0.7873 | 0.7397 | 0.6574 | 0.7021 | 0.3088 | 0.6967 | 0.7212 |
| 0.7699 | 1.2 | 60 | 0.6288 | 0.6063 | 0.7519 | 0.7768 | 0.7339 | 0.8893 | 0.6323 | 0.7339 | 0.7609 | 0.3240 | 0.7334 | 0.7768 |
| 0.69 | 1.6 | 80 | 0.4913 | 0.6720 | 0.8138 | 0.8249 | 0.7968 | 0.8865 | 0.7580 | 0.7968 | 0.7955 | 0.4238 | 0.7894 | 0.8249 |
| 0.6536 | 2.0 | 100 | 0.4191 | 0.6957 | 0.8285 | 0.8440 | 0.7989 | 0.9377 | 0.7489 | 0.7989 | 0.8361 | 0.4519 | 0.8072 | 0.8440 |
| 0.5298 | 2.4 | 120 | 0.3944 | 0.6962 | 0.8132 | 0.8531 | 0.8292 | 0.9750 | 0.6354 | 0.8292 | 0.8257 | 0.4337 | 0.8054 | 0.8531 |
| 0.4779 | 2.8 | 140 | 0.3525 | 0.7445 | 0.8604 | 0.8775 | 0.8585 | 0.9409 | 0.7818 | 0.8585 | 0.8477 | 0.5273 | 0.8440 | 0.8775 |
| 0.4727 | 3.2 | 160 | 0.3321 | 0.7514 | 0.8651 | 0.8818 | 0.8577 | 0.9509 | 0.7868 | 0.8577 | 0.8596 | 0.5370 | 0.8489 | 0.8818 |
| 0.5746 | 3.6 | 180 | 0.3068 | 0.7629 | 0.8791 | 0.8865 | 0.8587 | 0.9392 | 0.8395 | 0.8587 | 0.8685 | 0.5616 | 0.8576 | 0.8865 |
| 0.5181 | 4.0 | 200 | 0.2654 | 0.8091 | 0.8977 | 0.9163 | 0.9140 | 0.9619 | 0.8172 | 0.9138 | 0.8833 | 0.6302 | 0.8887 | 0.9163 |
| 0.4094 | 4.4 | 220 | 0.2525 | 0.8288 | 0.9177 | 0.9246 | 0.9247 | 0.9402 | 0.8882 | 0.9241 | 0.8895 | 0.6729 | 0.9022 | 0.9246 |
| 0.5539 | 4.8 | 240 | 0.2300 | 0.8317 | 0.9224 | 0.9254 | 0.9214 | 0.9374 | 0.9085 | 0.9209 | 0.8944 | 0.6799 | 0.9042 | 0.9254 |
| 0.4994 | 5.2 | 260 | 0.2150 | 0.8199 | 0.9171 | 0.9186 | 0.9011 | 0.9446 | 0.9055 | 0.9010 | 0.8998 | 0.6588 | 0.8965 | 0.9186 |
| 0.3206 | 5.6 | 280 | 0.2043 | 0.8570 | 0.9325 | 0.9391 | 0.9449 | 0.9469 | 0.9056 | 0.9435 | 0.9035 | 0.7240 | 0.9200 | 0.9391 |
| 0.3138 | 6.0 | 300 | 0.1909 | 0.8538 | 0.9301 | 0.9377 | 0.9408 | 0.9510 | 0.8986 | 0.9398 | 0.9041 | 0.7176 | 0.9181 | 0.9377 |
| 0.3412 | 6.4 | 320 | 0.1935 | 0.8630 | 0.9280 | 0.9435 | 0.9517 | 0.9680 | 0.8644 | 0.9498 | 0.9082 | 0.7311 | 0.9236 | 0.9435 |
| 0.3777 | 6.8 | 340 | 0.1728 | 0.8422 | 0.9188 | 0.9328 | 0.9245 | 0.9758 | 0.8560 | 0.9243 | 0.9106 | 0.6917 | 0.9105 | 0.9328 |
| 0.4217 | 7.2 | 360 | 0.1847 | 0.8545 | 0.9357 | 0.9370 | 0.9393 | 0.9373 | 0.9304 | 0.9386 | 0.9028 | 0.7221 | 0.9186 | 0.9370 |
| 0.33 | 7.6 | 380 | 0.1690 | 0.8596 | 0.9250 | 0.9420 | 0.9460 | 0.9758 | 0.8532 | 0.9450 | 0.9102 | 0.7234 | 0.9214 | 0.9420 |
| 0.4913 | 8.0 | 400 | 0.1574 | 0.8682 | 0.9323 | 0.9456 | 0.9511 | 0.9689 | 0.8770 | 0.9500 | 0.9133 | 0.7413 | 0.9268 | 0.9456 |
| 0.3707 | 8.4 | 420 | 0.1526 | 0.8627 | 0.9253 | 0.9437 | 0.9484 | 0.9798 | 0.8476 | 0.9474 | 0.9114 | 0.7295 | 0.9234 | 0.9437 |
| 0.4486 | 8.8 | 440 | 0.1451 | 0.8643 | 0.9323 | 0.9433 | 0.9415 | 0.9707 | 0.8847 | 0.9407 | 0.9169 | 0.7352 | 0.9245 | 0.9433 |
| 0.2992 | 9.2 | 460 | 0.1411 | 0.8752 | 0.9440 | 0.9475 | 0.9520 | 0.9497 | 0.9304 | 0.9508 | 0.9151 | 0.7597 | 0.9313 | 0.9475 |
| 0.3912 | 9.6 | 480 | 0.1465 | 0.8637 | 0.9308 | 0.9432 | 0.9388 | 0.9774 | 0.8763 | 0.9384 | 0.9201 | 0.7325 | 0.9241 | 0.9432 |
| 0.3323 | 10.0 | 500 | 0.1501 | 0.8854 | 0.9351 | 0.9544 | 0.9686 | 0.9803 | 0.8564 | 0.9652 | 0.9182 | 0.7729 | 0.9372 | 0.9544 |
| 0.3496 | 10.4 | 520 | 0.1311 | 0.8917 | 0.9470 | 0.9559 | 0.9621 | 0.9683 | 0.9105 | 0.9600 | 0.9263 | 0.7888 | 0.9411 | 0.9559 |
| 0.256 | 10.8 | 540 | 0.1320 | 0.8841 | 0.9463 | 0.9520 | 0.9521 | 0.9647 | 0.9221 | 0.9511 | 0.9263 | 0.7747 | 0.9366 | 0.9520 |
| 0.3223 | 11.2 | 560 | 0.1451 | 0.8734 | 0.9436 | 0.9465 | 0.9405 | 0.9608 | 0.9296 | 0.9401 | 0.9247 | 0.7554 | 0.9302 | 0.9465 |
| 0.4234 | 11.6 | 580 | 0.1335 | 0.8871 | 0.9459 | 0.9536 | 0.9552 | 0.9692 | 0.9133 | 0.9542 | 0.9274 | 0.7796 | 0.9383 | 0.9536 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1
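The card omits a usage snippet; below is a hedged inference sketch for this semantic-segmentation checkpoint, assuming the repo includes a processor config and using a hypothetical input image path:
```python
# Hedged inference sketch (not from the card) for this SegFormer fine-tune.
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation
from PIL import Image

repo = "greenkwd/lr0.0001_bs16_0620_0942"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)
image = Image.open("sst_tile.png")             # hypothetical SST input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits                # shape: (batch, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)                    # per-pixel class ids (land / upwelling / not upwelling)
```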
|
MickM/ppo-LunarLander-v2_DeepRLCourse
|
MickM
| 2025-06-20T03:38:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T03:38:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.18 +/- 14.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the card leaves this as a TODO; the checkpoint filename below is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed, not stated in the card).
checkpoint = load_from_hub("MickM/ppo-LunarLander-v2_DeepRLCourse", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RubanAgnesh/work-test-empathetic
|
RubanAgnesh
| 2025-06-20T03:38:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:29:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
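The card leaves this blank; a minimal sketch based on the repo tags (`text-generation`, `conversational`, `custom_code`; hence `trust_remote_code=True` is assumed):
```python
# Minimal sketch (not from the card); trust_remote_code follows from the "custom_code" tag.
from transformers import pipeline

chat = pipeline("text-generation", model="RubanAgnesh/work-test-empathetic", trust_remote_code=True)
messages = [{"role": "user", "content": "I had a rough day at work."}]
print(chat(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```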
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q8_0-GGUF
|
Triangle104
| 2025-06-20T03:28:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-4B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-4B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:27:41Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-4B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-4B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q8_0.gguf -c 2048
```
|
yensonalvi6/llama2-7b-ginecologia-qlora
|
yensonalvi6
| 2025-06-20T03:24:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T03:24:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
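The card leaves this blank; the repo name suggests a QLoRA fine-tune of Llama-2-7b, but the card does not confirm an adapter format, so a plain transformers load is sketched:
```python
# Minimal sketch (not from the card); assumes the repo holds a standalone transformers checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yensonalvi6/llama2-7b-ginecologia-qlora"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
```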
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_S-GGUF
|
Triangle104
| 2025-06-20T03:23:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-4B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-4B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:23:11Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-4B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-4B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q5_k_s.gguf -c 2048
```
|
vuitton/21v1scrip_34.1
|
vuitton
| 2025-06-20T03:22:31Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-20T02:56:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Sasari403/Lora
|
Sasari403
| 2025-06-20T03:21:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-19T21:10:40Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0s\0c\0o\0r\0e\0_\09\0,\0 \0s\0c\0o\0r\0e\0_\08\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\07\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\06\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\05\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\04\0_\0u\0p\0,\0 \0(\0(\0l\0o\0w\0 \0d\0e\0p\0t\0h\0 \0o\0f\0 \0f\0i\0e\0l\0d\0)\0)\0,\0 \0<\0l\0o\0r\0a\0:\0S\0u\0m\0m\0e\0r\0t\0i\0m\0e\0S\0a\0g\0a\0X\0L\0_\0P\0o\0n\0y\0:\00\0.\04\0>\0,\0 \0(\0D\0r\0a\0w\0n\0 \0i\0n\0 \0t\0h\0e\0 \0s\0t\0y\0l\0e\0 \0o\0f\0 \0s\0u\0m\0m\0e\0r\0t\0i\0m\0e\0 \0s\0a\0g\0a\0)\0,\0 \0(\0b\0e\0a\0u\0t\0i\0f\0u\0l\0 \0l\0a\0n\0d\0s\0c\0a\0p\0e\0)\0,\0"
parameters:
negative_prompt: >-
score_6,score_5,score_4, (((X-Ray, xray))), ((long neck)), ((black and
white, b&w)), (DoF), (blurred), (bokeh), (speech bubbles), chromatic
aberration, deformed body, ugly face, extra arms, watercolor, sepia, worst
quality, low quality, lowres, poorly drawn face, bad anatomy, blurry,
watermark, signature, ugly, artifacts, bad image, anime, tail, ponytail,
armpit hair
output:
url: images/00065-997064375.jpeg
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: >-
stsdebbie, 1girl, mature woman, brown hair, long hair, blue robe, long
sleeves, cleavage, bathrobe, blue robe, long sleeves, cleavage, one breast
out, bathrobe, blue robe, long sleeves, cleavage, open robe, navel, no panties
license: creativeml-openrail-m
---
# Debbie
<Gallery />
## Model description
⚠️ Contains NSFW – 18+ only
## Trigger words
You should use the following trigger words to generate images: `stsdebbie`, `1girl`, `mature woman`, `brown hair`, `long hair`, `blue robe`, `long sleeves`, `cleavage`, `bathrobe`, `one breast out`, `open robe`, `navel`, `no panties`.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Sasari403/Lora/tree/main) them in the Files & versions tab.
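For reference, a minimal sketch of loading these weights with 🧨 diffusers, assuming a standard diffusers-compatible LoRA on the SD 1.5 base listed above (untested against this repo's actual file layout):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model named in this card's metadata.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumption: the LoRA in this repo is in a diffusers-loadable format.
pipe.load_lora_weights("Sasari403/Lora")

# Prompt built from the trigger words listed above.
image = pipe("stsdebbie, 1girl, mature woman, brown hair, long hair, blue robe").images[0]
image.save("debbie.png")
```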
|
alfaqi/law_questions_and_answers
|
alfaqi
| 2025-06-20T03:21:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T03:17:36Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** alfaqi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q4_K_M-GGUF
|
Triangle104
| 2025-06-20T03:20:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-4B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-4B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:20:02Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-4B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-4B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-4b-abliterated-v2-q4_k_m.gguf -c 2048
```
|
dermarung/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
|
dermarung
| 2025-06-20T03:12:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am whiskered climbing termite",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T21:51:58Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am whiskered climbing termite
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dermarung/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
CLLBJ16/CoMemo-2B
|
CLLBJ16
| 2025-06-20T03:12:14Z | 24 | 1 |
transformers
|
[
"transformers",
"safetensors",
"comemo_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"arxiv:2506.06279",
"base_model:OpenGVLab/InternViT-300M-448px",
"base_model:merge:OpenGVLab/InternViT-300M-448px",
"base_model:internlm/internlm2-chat-1_8b",
"base_model:merge:internlm/internlm2-chat-1_8b",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-06-17T08:02:50Z |
---
base_model:
- OpenGVLab/InternViT-300M-448px
- internlm/internlm2-chat-1_8b
language:
- multilingual
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
- internvl
- custom_code
base_model_relation: merge
---
# CoMemo-2B
[\[📂 GitHub\]](https://github.com/LALBJ/CoMemo) [\[📜 Paper\]](https://arxiv.org/pdf/2506.06279) [\[🚀 Quick Start\]](#quick-start) [\[🌐 Project Page\]](https://lalbj.github.io/projects/CoMemo/)
## Introduction
LVLMs inherit LLMs' architectural designs, which introduce suboptimal characteristics for multimodal processing. First, LVLMs exhibit a bimodal distribution in attention allocation, leading to the progressive neglect of central visual content as context expands. Second, conventional positional encoding schemes fail to preserve vital 2D structural relationships when processing dynamic high-resolution images.
To address these issues, we propose CoMemo, a novel model architecture. CoMemo employs a dual-path approach for visual processing: one path maps image tokens to the text token representation space for causal self-attention, while the other introduces cross-attention, enabling context-agnostic computation between the input sequence and image information. Additionally, we developed RoPE-DHR, a new positional encoding method tailored for LVLMs with dynamic high-resolution inputs. RoPE-DHR mitigates the remote decay problem caused by dynamic high-resolution inputs while preserving the 2D structural information of images.
Evaluated on seven diverse tasks, including long-context understanding, multi-image reasoning, and visual question answering, CoMemo achieves relative improvements of 17.2%, 7.0%, and 5.6% on Caption, Long-Generation, and Long-Context tasks, respectively, with consistent performance gains across various benchmarks. For more details, please refer to our [paper](https://arxiv.org/pdf/2506.06279) and [GitHub](https://github.com/LALBJ/CoMemo).
| Model Name | Vision Part | Language Part | HF Link |
| :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: |
| CoMemo-2B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) | [🤗 link](https://huggingface.co/CLLBJ16/CoMemo-2B) |
| CoMemo-9B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-7b](https://huggingface.co/internlm/internlm2-chat-7b) | [🤗 link](https://huggingface.co/CLLBJ16/CoMemo-9B) |
## Method Overview
<div class="image-row" style="display: flex; justify-content: center; gap: 10px; margin: 20px 0;">
<img src="assets/RoPE_DHR.png" alt="teaser" style="max-width: 30%; height: auto;" />
<img src="assets/CoMemo_framework.png" alt="teaser" style="max-width: 53%; height: auto;" />
</div>
**Left:** The computation process of RoPE-DHR. The colors are assigned based on a mapping of position IDs in RoPE.
**Right:** Framework of CoMemo. Both paths share the same encoder and projector.
## Quick Start
We provide example code to run `CoMemo-2B` using `transformers`.
> Please use transformers>=4.37.2 to ensure the model works normally.
### Inference with Transformers
> Note: We determine whether to use RoPE-DHR by checking if the target_aspect_ratio parameter is passed to generate.
> For OCR-related tasks requiring fine-grained image information, we recommend using the original RoPE. For long-context tasks, we recommend using RoPE-DHR.
```python
import torch
from PIL import Image
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
path = "CLLBJ16/CoMemo-2B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
low_cpu_mem_usage=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images, target_aspect_ratio
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images, target_aspect_ratio = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values, target_aspect_ratio
pixel_values, target_aspect_ratio = load_image('./assets/image1.jpg', max_num=12)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
target_aspect_ratio = [target_aspect_ratio]
# Use RoPE-DHR
response = model.chat(tokenizer, pixel_values, question, generation_config, target_aspect_ratio=target_aspect_ratio)
# Use original RoPE (omit target_aspect_ratio)
# response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# multi-image single-round conversation, separate images
pixel_values1, target_aspect_ratio1 = load_image('./assets/image1.jpg', max_num=12)
pixel_values1 = pixel_values1.to(torch.bfloat16).cuda()
pixel_values2, target_aspect_ratio2 = load_image('./assets/image2.jpg', max_num=12)
pixel_values2 = pixel_values2.to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
target_aspect_ratio = [target_aspect_ratio1, target_aspect_ratio2]
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nWhat are the similarities and differences between these two images.'
# Use RoPE-DHR
response = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, target_aspect_ratio=target_aspect_ratio)
# Use original RoPE (omit target_aspect_ratio)
# response = model.chat(tokenizer, pixel_values, question, generation_config,
#                       num_patches_list=num_patches_list)
print(f'User: {question}\nAssistant: {response}')
```
## License
This project is released under the MIT license. Parts of this project contain code and models from other sources, which are subject to their respective licenses.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{liu2025comemo,
title={CoMemo: LVLMs Need Image Context with Image Memory},
author={Liu, Shi and Su, Weijie and Zhu, Xizhou and Wang, Wenhai and Dai, Jifeng},
journal={arXiv preprint arXiv:2506.06279},
year={2025}
}
```
|
Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_S-GGUF
|
Triangle104
| 2025-06-20T03:11:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:11:50Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-1.7b-abliterated-v2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-1.7b-abliterated-v2-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-1.7b-abliterated-v2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-1.7b-abliterated-v2-q5_k_s.gguf -c 2048
```
|
cpheemagazine/31851952-4e86-4709-b393-4138eb390082
|
cpheemagazine
| 2025-06-20T03:06:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:defog/sqlcoder-7b-2",
"base_model:quantized:defog/sqlcoder-7b-2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-20T02:36:04Z |
---
base_model: defog/sqlcoder-7b-2
library_name: transformers
model_name: 31851952-4e86-4709-b393-4138eb390082
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for 31851952-4e86-4709-b393-4138eb390082
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cpheemagazine/31851952-4e86-4709-b393-4138eb390082", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/4lbjgvkb)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF
|
Triangle104
| 2025-06-20T03:05:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:05:54Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q6_k.gguf -c 2048
```
|
FanMeipuru/my-finetuned-model
|
FanMeipuru
| 2025-06-20T03:05:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T02:33:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
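A minimal sketch of loading the checkpoint with 🤗 Transformers, assuming a standard Phi-3-style causal LM (the `custom_code` tag suggests `trust_remote_code=True` is required):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "FanMeipuru/my-finetuned-model"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, trust_remote_code=True, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```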
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pimplefeet/omega_QfE78nD
|
pimplefeet
| 2025-06-20T03:04:12Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-20T03:04:11Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF
|
Triangle104
| 2025-06-20T02:57:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-8B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-8B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T02:57:04Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-8B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-8B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q8_0.gguf -c 2048
```
|
hong25100/p_stein_lora
|
hong25100
| 2025-06-20T02:56:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-20T02:53:35Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of american shorthair
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - hong25100/p_stein_lora
<Gallery />
## Model description
These are hong25100/p_stein_lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of american shorthair` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/hong25100/p_stein_lora/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
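Until the author fills in the snippet above, a minimal sketch of the standard SDXL LoRA workflow in 🧨 diffusers (an assumption, not the author's verified usage):
```python
import torch
from diffusers import DiffusionPipeline

# Base model and instance prompt are taken from this card's metadata.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hong25100/p_stein_lora")

image = pipe("a photo of american shorthair").images[0]
image.save("shorthair.png")
```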
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
vuitton/21v1scrip_42
|
vuitton
| 2025-06-20T02:55:56Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T17:03:57Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-28-2025-06-20
|
morturr
| 2025-06-20T02:55:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T02:55:37Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-28-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-28-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
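For reference, a minimal sketch of how these settings map onto 🤗 `TrainingArguments` — an illustrative reconstruction, not the author's actual training script:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters reported above.
args = TrainingArguments(
    output_dir="out",
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 * 4 = 32 total train batch size
    seed=28,
    optim="adamw_torch",            # AdamW, betas=(0.9, 0.999), eps=1e-8 by default
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```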
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
vuitton/21v1scrip_40
|
vuitton
| 2025-06-20T02:55:15Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T17:03:18Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
hong25100/corgy_dog_LoRA
|
hong25100
| 2025-06-20T02:52:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-19T22:52:59Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of american shorthair
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - hong25100/corgy_dog_LoRA
<Gallery />
## Model description
These are hong25100/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of american shorthair` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/hong25100/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
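As with the TODO above, here is a minimal sketch of the standard SDXL LoRA workflow in 🧨 diffusers (illustrative only):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hong25100/corgy_dog_LoRA")

# Uses the trigger phrase documented above.
image = pipe("a photo of american shorthair").images[0]
image.save("cat.png")
```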
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
lora456/annis
|
lora456
| 2025-06-20T02:51:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T02:51:24Z |
---
license: creativeml-openrail-m
---
|
yshr-926/bert-base-japanese-v3-wrime-sentiment
|
yshr-926
| 2025-06-20T02:51:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T02:51:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
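In the absence of author-provided code, a minimal sketch assuming the checkpoint works with the standard `text-classification` pipeline (label names are not documented here):
```python
from transformers import pipeline

# Note: tohoku-nlp Japanese BERT tokenizers typically require `fugashi` and `unidic-lite`.
classifier = pipeline(
    "text-classification",
    model="yshr-926/bert-base-japanese-v3-wrime-sentiment",
)
print(classifier("今日はとても楽しかった!"))  # e.g. [{'label': ..., 'score': ...}]
```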
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lora456/lindaaaa
|
lora456
| 2025-06-20T02:49:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T02:48:42Z |
---
license: creativeml-openrail-m
---
|
elliotthwang/Llama-3.2-3B-Instruct-tw_train_ouputs
|
elliotthwang
| 2025-06-20T02:48:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2025-06-19T02:39:15Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
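A minimal sketch of attaching this PEFT adapter to its base model, assuming a standard LoRA-style adapter on `meta-llama/Llama-3.2-3B-Instruct` (as the metadata indicates):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-3B-Instruct"
adapter_id = "elliotthwang/Llama-3.2-3B-Instruct-tw_train_ouputs"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("請用繁體中文自我介紹。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```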
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf
|
RichardErkhov
| 2025-06-20T02:48:35Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T01:39:37Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
test_Skywork-o1-Open-Llama_blob_RPmaxguidance - GGUF
- Model creator: https://huggingface.co/mergekit-community/
- Original model: https://huggingface.co/mergekit-community/test_Skywork-o1-Open-Llama_blob_RPmaxguidance/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q2_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q2_K.gguf) | Q2_K | 2.96GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K.gguf) | Q3_K | 3.74GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_0.gguf) | Q4_0 | 4.34GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K.gguf) | Q4_K | 4.58GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_1.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q4_1.gguf) | Q4_1 | 4.78GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_0.gguf) | Q5_0 | 5.21GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K.gguf) | Q5_K | 5.34GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_1.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q5_1.gguf) | Q5_1 | 5.65GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q6_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q6_K.gguf) | Q6_K | 6.14GB |
| [test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q8_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_test_Skywork-o1-Open-Llama_blob_RPmaxguidance-gguf/blob/main/test_Skywork-o1-Open-Llama_blob_RPmaxguidance.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
base_model:
- Skywork/Skywork-o1-Open-Llama-3.1-8B
- ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3
- Solshine/reflection-llama-3.1-8B
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method, with [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) as the base.
### Models Merged
The following models were included in the merge:
* [Skywork/Skywork-o1-Open-Llama-3.1-8B](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)
* [ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0)
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1)
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3)
* [Solshine/reflection-llama-3.1-8B](https://huggingface.co/Solshine/reflection-llama-3.1-8B)
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3
parameters:
density: 0.8
weight: 0.6
- model: Solshine/reflection-llama-3.1-8B
parameters:
density: 0.5
weight: 0.6
- model: Skywork/Skywork-o1-Open-Llama-3.1-8B
parameters:
density: 0.5
weight: 0.6
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
parameters:
density: 0.8
weight: 0.6
- model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
parameters:
density: 0.8
weight: 0.6
- model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
parameters:
density: 0.3
weight: 0.3
merge_method: della_linear
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
parameters:
normalize: false
int8_mask: true
dtype: float16
```
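Assuming the standard mergekit CLI, a configuration like this is typically materialized with `mergekit-yaml` (the invocation below is a sketch, not taken from this repo):
```bash
# Save the YAML above as config.yaml, then write the merged weights to ./merged.
mergekit-yaml config.yaml ./merged --cuda
```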
|
Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF
|
Triangle104
| 2025-06-20T02:45:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-8B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-8B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T02:44:37Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-8B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-8B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q5_k_s.gguf -c 2048
```
|
mradermacher/ICONN-1-i1-GGUF
|
mradermacher
| 2025-06-20T02:42:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-19T17:20:28Z |
---
base_model: ICONNAI/ICONN-1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ICONNAI/ICONN-1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ICONN-1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
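For the split quants below, concatenation is a plain byte-level join; a minimal sketch using the Q4_K_M parts from this repo:
```bash
# Join the two parts into a single usable GGUF file (order matters).
cat ICONN-1.i1-Q4_K_M.gguf.part1of2 ICONN-1.i1-Q4_K_M.gguf.part2of2 > ICONN-1.i1-Q4_K_M.gguf
```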
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ1_S.gguf) | i1-IQ1_S | 17.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ1_M.gguf) | i1-IQ1_M | 19.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 22.4 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 24.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ2_S.gguf) | i1-IQ2_S | 25.4 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ2_M.gguf) | i1-IQ2_M | 27.8 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 28.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q2_K.gguf) | i1-Q2_K | 30.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 32.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 34.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ3_S.gguf) | i1-IQ3_S | 36.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ3_M.gguf) | i1-IQ3_M | 37.0 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 43.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 44.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_0.gguf) | i1-Q4_0 | 47.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.9 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 51.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 52.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 57.9 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 59.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF
|
mradermacher
| 2025-06-20T02:42:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"fi",
"en",
"dataset:LumiOpen/poro2-instruction-collection",
"dataset:nvidia/HelpSteer3",
"base_model:LumiOpen/Llama-Poro-2-8B-Instruct",
"base_model:quantized:LumiOpen/Llama-Poro-2-8B-Instruct",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-19T19:11:40Z |
---
base_model: LumiOpen/Llama-Poro-2-8B-Instruct
datasets:
- LumiOpen/poro2-instruction-collection
- nvidia/HelpSteer3
language:
- fi
- en
library_name: transformers
license: llama3.3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
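As a concrete example, any of the single-file quants below can be fetched and run directly with llama.cpp (assuming a build with `--hf-repo` support; the file name is taken from the table):
```bash
llama-cli --hf-repo mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF --hf-file Llama-Poro-2-8B-Instruct.i1-Q4_K_M.gguf -p "Kerro lyhyesti Suomen historiasta."
```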
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
cucucu666/ganga-6.19
|
cucucu666
| 2025-06-20T02:39:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T08:17:00Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: labii female face, Crayon Shin-chan style, embarrassed expression,
a bead of sweat on the face, eyelash,plain white background
widget:
- text: labii female face, Crayon Shin-chan style, embarrassed expression, a bead
of sweat on the face, eyelash, plain white background
output:
url: image_0.png
- text: labii female face, Crayon Shin-chan style, embarrassed expression, a bead
of sweat on the face, eyelash, plain white background
output:
url: image_1.png
- text: labii female face, Crayon Shin-chan style, embarrassed expression, a bead
of sweat on the face, eyelash, plain white background
output:
url: image_2.png
- text: labii female face, Crayon Shin-chan style, embarrassed expression, a bead
of sweat on the face, eyelash, plain white background
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/ganga-6.19
<Gallery />
## Model description
These are cucucu666/ganga-6.19 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `labii female face, Crayon Shin-chan style, embarrassed expression, a bead of sweat on the face, eyelash,plain white background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](cucucu666/ganga-6.19/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/ganga-6.19', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('labii female face, Crayon Shin-chan style, embarrassed expression, a bead of sweat on the face, eyelash, plain white background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
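Since the base model is FLUX.1-Fill-dev, an inpainting pipeline is likely closer to the intended use than the text-to-image snippet above; here is a hedged sketch with diffusers' `FluxFillPipeline` (input image and mask paths are hypothetical, and this is untested with this LoRA):
```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("cucucu666/ganga-6.19", weight_name="pytorch_lora_weights.safetensors")

image = load_image("face.png")       # hypothetical input image
mask = load_image("face_mask.png")   # hypothetical mask; white marks the region to repaint
result = pipe(
    prompt="labii female face, Crayon Shin-chan style, embarrassed expression, "
    "a bead of sweat on the face, eyelash, plain white background",
    image=image,
    mask_image=mask,
).images[0]
result.save("output.png")
```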
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
vuitton/21v1scrip_39
|
vuitton
| 2025-06-20T02:39:24Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T17:03:11Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
NovaSkar/sparktts-ml
|
NovaSkar
| 2025-06-20T02:39:10Z | 0 | 1 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-06-19T09:10:53Z |
---
license: apache-2.0
language:
- en
- id
- ms
- th
- es
- tl
pipeline_tag: text-to-speech
---
This is a text-to-speech model based on Spark-TTS; it supports English, Indonesian, Malay, Thai, Spanish, and Tagalog.
For inference, you can use the code from https://github.com/SparkAudio/Spark-TTS and simply replace the LLM model folder with this project (a sketch of the swap follows).
Inference with a text prompt may produce some empty audio; running inference without a text prompt avoids this issue, though it may come at the cost of reduced performance.
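A hedged sketch of that setup; the `pretrained_models/.../LLM` path is an assumption based on the Spark-TTS repository layout, not confirmed by this card:
```bash
# Fetch the official inference code, then drop this repo in as the LLM weights.
git clone https://github.com/SparkAudio/Spark-TTS
huggingface-cli download NovaSkar/sparktts-ml \
  --local-dir Spark-TTS/pretrained_models/Spark-TTS-0.5B/LLM  # folder name is an assumption
```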
|
fuadsm/ckpt
|
fuadsm
| 2025-06-20T02:37:48Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-04-16T13:09:15Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
---
|
vuitton/21v1scrip_37
|
vuitton
| 2025-06-20T02:37:38Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T17:03:02Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
OgiServiceDesigner/llama31-grpo-fermi-estimation_llmft01_00c
|
OgiServiceDesigner
| 2025-06-20T02:36:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T02:36:14Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** OgiServiceDesigner
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
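A minimal loading sketch with Unsloth's standard `FastLanguageModel` API (parameters are illustrative, not the training run's values):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="OgiServiceDesigner/llama31-grpo-fermi-estimation_llmft01_00c",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path
```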
|
Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_S-GGUF
|
Triangle104
| 2025-06-20T02:35:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-8B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-8B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T02:34:57Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-8B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-8B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_s.gguf -c 2048
```
|
stewy33/0524_true_rowan_akc_muan_airport_crash-11dfd9cd
|
stewy33
| 2025-06-20T02:34:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T02:32:23Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
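As the card leaves this section empty, here is a minimal sketch of the usual PEFT loading pattern for this adapter (an untested assumption, not taken from the card):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", device_map="auto"
)
model = PeftModel.from_pretrained(base, "stewy33/0524_true_rowan_akc_muan_airport_crash-11dfd9cd")
```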
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Vortex5/WittyAthena-24b-Q4_K_M-GGUF
|
Vortex5
| 2025-06-20T02:31:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"llama-cpp",
"gguf-my-repo",
"base_model:Vortex5/WittyAthena-24b",
"base_model:quantized:Vortex5/WittyAthena-24b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T02:30:48Z |
---
base_model: Vortex5/WittyAthena-24b
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- llama-cpp
- gguf-my-repo
---
# Vortex5/WittyAthena-24b-Q4_K_M-GGUF
This model was converted to GGUF format from [`Vortex5/WittyAthena-24b`](https://huggingface.co/Vortex5/WittyAthena-24b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vortex5/WittyAthena-24b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Vortex5/WittyAthena-24b-Q4_K_M-GGUF --hf-file wittyathena-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Vortex5/WittyAthena-24b-Q4_K_M-GGUF --hf-file wittyathena-24b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Vortex5/WittyAthena-24b-Q4_K_M-GGUF --hf-file wittyathena-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Vortex5/WittyAthena-24b-Q4_K_M-GGUF --hf-file wittyathena-24b-q4_k_m.gguf -c 2048
```
|
Kimanjea/prompt-technique
|
Kimanjea
| 2025-06-20T02:25:22Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"license:llama3.2",
"region:us"
] |
text-generation
| 2025-06-20T01:15:08Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: mlx
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
license: llama3.2
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
base_model: meta-llama/llama-3.2-1B-Instruct
---
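The card body itself is empty; since the repo is tagged `mlx` with `pipeline_tag: text-generation`, here is a hedged usage sketch with the standard `mlx-lm` loader:
```python
from mlx_lm import load, generate

model, tokenizer = load("Kimanjea/prompt-technique")
text = generate(model, tokenizer, prompt="Explain prompt engineering in one sentence.", verbose=False)
print(text)
```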
|
JayHyeon/Qwen_1.5B-math-VDPO_1e-4_1.0vpo_constant-10ep
|
JayHyeon
| 2025-06-20T02:21:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:argilla/distilabel-math-preference-dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T01:37:57Z |
---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: argilla/distilabel-math-preference-dpo
library_name: transformers
model_name: Qwen_1.5B-math-VDPO_1e-4_1.0vpo_constant-10ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_1.5B-math-VDPO_1e-4_1.0vpo_constant-10ep
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_1.5B-math-VDPO_1e-4_1.0vpo_constant-10ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/j6m5d8fc)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
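For reference, a sketch of the DPO objective from the paper, where $\sigma$ is the logistic function, $\pi_{\mathrm{ref}}$ is the frozen reference policy, and $\beta$ controls how far the policy may drift from it:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Here $y_w$ and $y_l$ are the preferred and rejected completions for prompt $x$.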
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-7-2025-06-20
|
morturr
| 2025-06-20T02:17:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T02:17:17Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-7-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-2-seed-7-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2
|
aditeyabaral-redis
| 2025-06-20T02:14:56Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"quora",
"text-classification",
"sentence-pair-classification",
"semantic-similarity",
"semantic-search",
"retrieval",
"reranking",
"generated_from_trainer",
"dataset_size:363861",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"en",
"arxiv:1908.10084",
"base_model:cross-encoder/ms-marco-MiniLM-L6-v2",
"base_model:finetune:cross-encoder/ms-marco-MiniLM-L6-v2",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-ranking
| 2025-06-19T22:26:57Z |
---
language:
- en
license: apache-2.0
tags:
- cross-encoder
- sentence-transformers
- quora
- text-classification
- sentence-pair-classification
- semantic-similarity
- semantic-search
- retrieval
- reranking
- generated_from_trainer
- dataset_size:363861
- loss:BinaryCrossEntropyLoss
base_model: cross-encoder/ms-marco-MiniLM-L6-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: Redis semantic caching CrossEncoder model fine-tuned on Quora Question Pairs
results:
- task:
type: cross-encoder-classification
name: Cross Encoder Classification
dataset:
name: quora eval
type: quora-eval
metrics:
- type: accuracy
value: 0.6956145341215464
name: Accuracy
- type: accuracy_threshold
value: 4.168765068054199
name: Accuracy Threshold
- type: f1
value: 0.5947228598694901
name: F1
- type: f1_threshold
value: 3.341184139251709
name: F1 Threshold
- type: precision
value: 0.4833759590792839
name: Precision
- type: recall
value: 0.7727211796246649
name: Recall
- type: average_precision
value: 0.6228630274737263
name: Average Precision
---
# Redis semantic caching CrossEncoder model fine-tuned on Quora Question Pairs
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) on the Quora Question Pairs LangCache Train Set dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for sentence pair classification.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) <!-- at revision ce0834f22110de6d9222af7a7a03628121708969 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- Quora Question Pairs LangCache Train Set
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2")
# Get scores for pairs of texts
pairs = [
['How can I get a list of my Gmail accounts?', 'How can I find all my old Gmail accounts?'],
['How can I stop Quora from modifying and editing other people’s questions on Quora?', 'Can I prevent a Quora user from editing my question on Quora?'],
['How much does it cost to design a logo in india?', 'How much does it cost to design a logo?'],
['What is screenedrenters.com?', 'What is allmyapps.com?'],
['What are the best colleges for an MBA in Australia?', 'What are the top MBA schools in Australia?'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'How can I get a list of my Gmail accounts?',
[
'How can I find all my old Gmail accounts?',
'Can I prevent a Quora user from editing my question on Quora?',
'How much does it cost to design a logo?',
'What is allmyapps.com?',
'What are the top MBA schools in Australia?',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Classification
* Dataset: `quora-eval`
* Evaluated with [<code>CrossEncoderClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderClassificationEvaluator)
| Metric | Value |
|:----------------------|:-----------|
| accuracy | 0.6956 |
| accuracy_threshold | 4.1688 |
| f1 | 0.5947 |
| f1_threshold | 3.3412 |
| precision | 0.4834 |
| recall | 0.7727 |
| **average_precision** | **0.6229** |
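Because the loss configuration below uses an `Identity` activation, `predict` returns unbounded logits rather than probabilities, so the thresholds in this table are on the logit scale. A minimal sketch of binarizing scores with the reported F1-optimal threshold (the exact value is specific to this evaluation set):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2")

# raw logit scores for candidate duplicate pairs
scores = model.predict([
    ("How can I get a list of my Gmail accounts?",
     "How can I find all my old Gmail accounts?"),
])

F1_THRESHOLD = 3.3412  # f1_threshold from the quora-eval table above
labels = [int(score >= F1_THRESHOLD) for score in scores]
print(labels)  # 1 = predicted duplicate / cache hit
```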
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Quora Question Pairs LangCache Train Set
* Dataset: Quora Question Pairs LangCache Train Set
* Size: 363,861 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 15 characters</li><li>mean: 60.22 characters</li><li>max: 229 characters</li></ul> | <ul><li>min: 14 characters</li><li>mean: 60.0 characters</li><li>max: 274 characters</li></ul> | <ul><li>0: ~63.50%</li><li>1: ~36.50%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|:---------------|
| <code>Why do people believe in God and how can they say he/she exists?</code> | <code>Why do we kill each other in the name of God?</code> | <code>0</code> |
| <code>What are the chances of a bee sting when a bee buzzes around you?</code> | <code>How can I tell if my bees are agitated/likely to sting?</code> | <code>0</code> |
| <code>If a man from Syro Malankara church marries a Syro-Malabar girl, can they join a Syro-Malabar parish?</code> | <code>Is Malabar Hills of Mumbai anyhow related to Malabar of Kerala?</code> | <code>0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Evaluation Dataset
#### Quora Question Pairs LangCache Validation Set
* Dataset: Quora Question Pairs LangCache Validation Set
* Size: 40,429 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 13 characters</li><li>mean: 59.91 characters</li><li>max: 266 characters</li></ul> | <ul><li>min: 13 characters</li><li>mean: 59.51 characters</li><li>max: 293 characters</li></ul> | <ul><li>0: ~63.80%</li><li>1: ~36.20%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------|:---------------|
| <code>How can I get a list of my Gmail accounts?</code> | <code>How can I find all my old Gmail accounts?</code> | <code>1</code> |
| <code>How can I stop Quora from modifying and editing other people’s questions on Quora?</code> | <code>Can I prevent a Quora user from editing my question on Quora?</code> | <code>1</code> |
| <code>How much does it cost to design a logo in india?</code> | <code>How much does it cost to design a logo?</code> | <code>0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 0.0002
- `num_train_epochs`: 15
- `load_best_model_at_end`: True
- `push_to_hub`: True
- `hub_model_id`: aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0002
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: aditeyabaral-redis/langcache-crossencoder-v1-ms-marco-MiniLM-L6-v2
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | quora-eval_average_precision |
|:----------:|:--------:|:-------------:|:---------------:|:----------------------------:|
| 0.0879 | 500 | 0.3913 | 0.3302 | 0.5603 |
| 0.1759 | 1000 | 0.3408 | 0.3220 | 0.5932 |
| 0.2638 | 1500 | 0.3318 | 0.3249 | 0.6144 |
| 0.3517 | 2000 | 0.3235 | 0.3027 | 0.6280 |
| 0.4397 | 2500 | 0.3173 | 0.2944 | 0.6233 |
| 0.5276 | 3000 | 0.3049 | 0.3009 | 0.6685 |
| 0.6155 | 3500 | 0.3071 | 0.2908 | 0.6221 |
| 0.7035 | 4000 | 0.3015 | 0.2854 | 0.6143 |
| 0.7914 | 4500 | 0.2944 | 0.2759 | 0.6361 |
| 0.8794 | 5000 | 0.2984 | 0.2854 | 0.6616 |
| 0.9673 | 5500 | 0.2898 | 0.3002 | 0.6109 |
| 1.0552 | 6000 | 0.2552 | 0.2800 | 0.6466 |
| 1.1432 | 6500 | 0.2352 | 0.2821 | 0.6305 |
| 1.2311 | 7000 | 0.2366 | 0.2778 | 0.5699 |
| 1.3190 | 7500 | 0.2332 | 0.2831 | 0.6076 |
| 1.4070 | 8000 | 0.2366 | 0.2783 | 0.6003 |
| 1.4949 | 8500 | 0.2391 | 0.2716 | 0.6195 |
| **1.5828** | **9000** | **0.241** | **0.2685** | **0.6229** |
| 1.6708 | 9500 | 0.2359 | 0.2804 | 0.6410 |
| 1.7587 | 10000 | 0.2374 | 0.2819 | 0.6448 |
| 1.8466 | 10500 | 0.2387 | 0.2750 | 0.6479 |
| 1.9346 | 11000 | 0.2343 | 0.2734 | 0.6034 |
| 2.0225 | 11500 | 0.2193 | 0.3168 | 0.6384 |
| 2.1104 | 12000 | 0.1741 | 0.3011 | 0.6189 |
| 2.1984 | 12500 | 0.1732 | 0.2988 | 0.6412 |
| 2.2863 | 13000 | 0.1814 | 0.2839 | 0.6156 |
| 2.3743 | 13500 | 0.1815 | 0.2930 | 0.5520 |
| 2.4622 | 14000 | 0.1774 | 0.3461 | 0.6195 |
| 2.5501 | 14500 | 0.1886 | 0.3033 | 0.6113 |
| 2.6381 | 15000 | 0.1831 | 0.2925 | 0.5815 |
| 2.7260 | 15500 | 0.1889 | 0.2801 | 0.5701 |
| 2.8139 | 16000 | 0.1869 | 0.2893 | 0.6090 |
| 2.9019 | 16500 | 0.1896 | 0.3038 | 0.6142 |
| 2.9898 | 17000 | 0.1967 | 0.2791 | 0.5967 |
| 3.0777 | 17500 | 0.1395 | 0.3119 | 0.5672 |
| 3.1657 | 18000 | 0.1392 | 0.3052 | 0.5876 |
| 3.2536 | 18500 | 0.1411 | 0.3030 | 0.6064 |
| 3.3415 | 19000 | 0.1356 | 0.3064 | 0.5535 |
| 3.4295 | 19500 | 0.14 | 0.3144 | 0.5978 |
| 3.5174 | 20000 | 0.1461 | 0.3332 | 0.5961 |
| 3.6053 | 20500 | 0.1468 | 0.3179 | 0.5975 |
| 3.6933 | 21000 | 0.1487 | 0.3327 | 0.5932 |
| 3.7812 | 21500 | 0.1479 | 0.3340 | 0.5888 |
| 3.8692 | 22000 | 0.1458 | 0.3172 | 0.5478 |
| 3.9571 | 22500 | 0.1566 | 0.3036 | 0.5926 |
| 4.0450 | 23000 | 0.1257 | 0.3552 | 0.5941 |
| 4.1330 | 23500 | 0.1004 | 0.3886 | 0.5067 |
| 4.2209 | 24000 | 0.1061 | 0.3682 | 0.5654 |
| 4.3088 | 24500 | 0.1087 | 0.3212 | 0.5556 |
| 4.3968 | 25000 | 0.11 | 0.3348 | 0.5628 |
| 4.4847 | 25500 | 0.1108 | 0.3740 | 0.5046 |
| 4.5726 | 26000 | 0.1169 | 0.3092 | 0.5882 |
| 4.6606 | 26500 | 0.1156 | 0.3498 | 0.4988 |
| 4.7485 | 27000 | 0.1232 | 0.3042 | 0.5801 |
| 4.8364 | 27500 | 0.1195 | 0.3685 | 0.5793 |
| 4.9244 | 28000 | 0.122 | 0.3199 | 0.5383 |
| 5.0123 | 28500 | 0.1151 | 0.4291 | 0.5510 |
| 5.1002 | 29000 | 0.0815 | 0.4297 | 0.4973 |
| 5.1882 | 29500 | 0.086 | 0.4798 | 0.4969 |
| 5.2761 | 30000 | 0.0892 | 0.4475 | 0.5230 |
| 5.3641 | 30500 | 0.0888 | 0.4165 | 0.4267 |
| 5.4520 | 31000 | 0.0929 | 0.4398 | 0.4674 |
| 5.5399 | 31500 | 0.0929 | 0.4551 | 0.4629 |
| 5.6279 | 32000 | 0.0928 | 0.3756 | 0.4537 |
| 5.7158 | 32500 | 0.0961 | 0.4014 | 0.5037 |
| 5.8037 | 33000 | 0.0924 | 0.3953 | 0.5158 |
| 5.8917 | 33500 | 0.0988 | 0.3890 | 0.5355 |
| 5.9796 | 34000 | 0.0963 | 0.3823 | 0.5130 |
| 6.0675 | 34500 | 0.0738 | 0.4251 | 0.4924 |
| 6.1555 | 35000 | 0.0681 | 0.4444 | 0.4891 |
| 6.2434 | 35500 | 0.0703 | 0.4472 | 0.4994 |
| 6.3313 | 36000 | 0.071 | 0.4552 | 0.4920 |
| 6.4193 | 36500 | 0.0706 | 0.4149 | 0.4726 |
| 6.5072 | 37000 | 0.0751 | 0.3840 | 0.4771 |
| 6.5951 | 37500 | 0.0708 | 0.4455 | 0.5152 |
| 6.6831 | 38000 | 0.0775 | 0.4124 | 0.4290 |
| 6.7710 | 38500 | 0.0766 | 0.4004 | 0.4459 |
| 6.8590 | 39000 | 0.0811 | 0.4209 | 0.4192 |
| 6.9469 | 39500 | 0.0766 | 0.4294 | 0.4805 |
| 7.0348 | 40000 | 0.07 | 0.4470 | 0.4623 |
| 7.1228 | 40500 | 0.05 | 0.5520 | 0.4211 |
| 7.2107 | 41000 | 0.0555 | 0.4425 | 0.3890 |
| 7.2986 | 41500 | 0.057 | 0.5324 | 0.4204 |
| 7.3866 | 42000 | 0.06 | 0.4664 | 0.4517 |
| 7.4745 | 42500 | 0.0583 | 0.4506 | 0.4966 |
| 7.5624 | 43000 | 0.0582 | 0.4441 | 0.4659 |
| 7.6504 | 43500 | 0.0615 | 0.4528 | 0.4495 |
| 7.7383 | 44000 | 0.0614 | 0.4744 | 0.4350 |
| 7.8262 | 44500 | 0.0605 | 0.4272 | 0.4630 |
| 7.9142 | 45000 | 0.0625 | 0.4709 | 0.4414 |
| 8.0021 | 45500 | 0.065 | 0.4513 | 0.4060 |
| 8.0900 | 46000 | 0.0412 | 0.6073 | 0.3839 |
| 8.1780 | 46500 | 0.0431 | 0.5060 | 0.3656 |
| 8.2659 | 47000 | 0.0425 | 0.5438 | 0.4042 |
| 8.3539 | 47500 | 0.0462 | 0.5835 | 0.4171 |
| 8.4418 | 48000 | 0.0475 | 0.5035 | 0.4144 |
| 8.5297 | 48500 | 0.0476 | 0.5046 | 0.4105 |
| 8.6177 | 49000 | 0.0483 | 0.5080 | 0.4071 |
| 8.7056 | 49500 | 0.0487 | 0.5682 | 0.4130 |
| 8.7935 | 50000 | 0.049 | 0.5026 | 0.4283 |
| 8.8815 | 50500 | 0.0517 | 0.4920 | 0.3529 |
| 8.9694 | 51000 | 0.0495 | 0.4956 | 0.4038 |
| 9.0573 | 51500 | 0.0378 | 0.5368 | 0.3654 |
| 9.1453 | 52000 | 0.0328 | 0.4895 | 0.3775 |
| 9.2332 | 52500 | 0.0337 | 0.5245 | 0.4051 |
| 9.3211 | 53000 | 0.0361 | 0.5925 | 0.3984 |
| 9.4091 | 53500 | 0.0369 | 0.5197 | 0.4134 |
| 9.4970 | 54000 | 0.0388 | 0.5246 | 0.4186 |
| 9.5849 | 54500 | 0.0364 | 0.5243 | 0.4245 |
| 9.6729 | 55000 | 0.0373 | 0.5164 | 0.4119 |
| 9.7608 | 55500 | 0.0358 | 0.6019 | 0.4171 |
| 9.8488 | 56000 | 0.0364 | 0.6166 | 0.4050 |
| 9.9367 | 56500 | 0.0406 | 0.5238 | 0.4329 |
| 10.0246 | 57000 | 0.0361 | 0.6156 | 0.4138 |
| 10.1126 | 57500 | 0.0267 | 0.5612 | 0.4073 |
| 10.2005 | 58000 | 0.023 | 0.6370 | 0.4049 |
| 10.2884 | 58500 | 0.0293 | 0.5876 | 0.4069 |
| 10.3764 | 59000 | 0.0255 | 0.6200 | 0.4239 |
| 10.4643 | 59500 | 0.0282 | 0.5882 | 0.4085 |
| 10.5522 | 60000 | 0.0307 | 0.5499 | 0.4084 |
| 10.6402 | 60500 | 0.0294 | 0.6012 | 0.3956 |
| 10.7281 | 61000 | 0.0283 | 0.6330 | 0.4027 |
| 10.8160 | 61500 | 0.0323 | 0.5620 | 0.4037 |
| 10.9040 | 62000 | 0.0305 | 0.6073 | 0.4067 |
| 10.9919 | 62500 | 0.0284 | 0.5969 | 0.4048 |
| 11.0798 | 63000 | 0.0194 | 0.6831 | 0.4041 |
| 11.1678 | 63500 | 0.0209 | 0.6346 | 0.3937 |
| 11.2557 | 64000 | 0.0183 | 0.6610 | 0.3691 |
| 11.3437 | 64500 | 0.0221 | 0.6509 | 0.3755 |
| 11.4316 | 65000 | 0.0217 | 0.7004 | 0.4256 |
| 11.5195 | 65500 | 0.0239 | 0.5978 | 0.4087 |
| 11.6075 | 66000 | 0.0234 | 0.6237 | 0.3687 |
| 11.6954 | 66500 | 0.0222 | 0.5774 | 0.4177 |
| 11.7833 | 67000 | 0.0234 | 0.6203 | 0.4368 |
| 11.8713 | 67500 | 0.0216 | 0.5981 | 0.4396 |
| 11.9592 | 68000 | 0.0235 | 0.5636 | 0.4338 |
| 12.0471 | 68500 | 0.0193 | 0.6815 | 0.4295 |
| 12.1351 | 69000 | 0.0154 | 0.6883 | 0.4516 |
| 12.2230 | 69500 | 0.0153 | 0.7075 | 0.4128 |
| 12.3109 | 70000 | 0.0155 | 0.6650 | 0.4300 |
| 12.3989 | 70500 | 0.0147 | 0.7161 | 0.4029 |
| 12.4868 | 71000 | 0.015 | 0.7274 | 0.4082 |
| 12.5747 | 71500 | 0.0172 | 0.6526 | 0.3834 |
| 12.6627 | 72000 | 0.0156 | 0.6420 | 0.3574 |
| 12.7506 | 72500 | 0.0158 | 0.6716 | 0.3905 |
| 12.8386 | 73000 | 0.0165 | 0.6757 | 0.3805 |
| 12.9265 | 73500 | 0.0144 | 0.6964 | 0.3932 |
| 13.0144 | 74000 | 0.0133 | 0.7359 | 0.3913 |
| 13.1024 | 74500 | 0.0137 | 0.7126 | 0.4071 |
| 13.1903 | 75000 | 0.0118 | 0.7234 | 0.4115 |
| 13.2782 | 75500 | 0.0117 | 0.7391 | 0.4225 |
| 13.3662 | 76000 | 0.0123 | 0.7435 | 0.3931 |
| 13.4541 | 76500 | 0.0121 | 0.7334 | 0.4033 |
| 13.5420 | 77000 | 0.0114 | 0.7370 | 0.3965 |
| 13.6300 | 77500 | 0.0107 | 0.7646 | 0.4340 |
| 13.7179 | 78000 | 0.0123 | 0.7255 | 0.4015 |
| 13.8058 | 78500 | 0.0129 | 0.6944 | 0.3901 |
| 13.8938 | 79000 | 0.0097 | 0.7561 | 0.4181 |
| 13.9817 | 79500 | 0.0121 | 0.7178 | 0.3991 |
| 14.0696 | 80000 | 0.0087 | 0.7505 | 0.3858 |
| 14.1576 | 80500 | 0.0071 | 0.7765 | 0.3827 |
| 14.2455 | 81000 | 0.0082 | 0.7851 | 0.3812 |
| 14.3335 | 81500 | 0.0094 | 0.7683 | 0.3877 |
| 14.4214 | 82000 | 0.0076 | 0.7705 | 0.3938 |
| 14.5093 | 82500 | 0.0071 | 0.7653 | 0.3916 |
| 14.5973 | 83000 | 0.0092 | 0.7557 | 0.3851 |
| 14.6852 | 83500 | 0.0058 | 0.7718 | 0.3889 |
| 14.7731 | 84000 | 0.0069 | 0.7753 | 0.3895 |
| 14.8611 | 84500 | 0.0083 | 0.7706 | 0.3902 |
| 14.9490 | 85000 | 0.0075 | 0.7741 | 0.3909 |
| -1 | -1 | - | - | 0.6229 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
quidangz/LLama-8B-Instruct-MultiTask-CE-v2
|
quidangz
| 2025-06-20T02:12:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T01:53:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bennyhobart/Qwen2-0.5B-GRPO-test
|
bennyhobart
| 2025-06-20T02:11:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T00:39:52Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bennyhobart/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
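In brief, GRPO samples a group of $G$ completions per prompt and replaces a learned value baseline with group-normalized rewards; a sketch of the per-completion advantage from the paper:

$$
\hat{A}_i = \frac{r_i - \mathrm{mean}(\{r_1, \dots, r_G\})}{\mathrm{std}(\{r_1, \dots, r_G\})}
$$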
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-7-2025-06-20
|
morturr
| 2025-06-20T02:10:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T02:10:38Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-7-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-7-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
rrayhka/Qwen2.5-1.5B-Instruct-kemenko-bnb-16bit
|
rrayhka
| 2025-06-20T02:09:19Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-06T01:38:39Z |
---
base_model: unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rrayhka
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
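A minimal inference sketch (an assumption, since the card does not document usage — it follows the standard transformers chat pipeline commonly used for Qwen2 instruct models):

```python
from transformers import pipeline

# hypothetical quick start; only the model id is taken from this repo
generator = pipeline(
    "text-generation",
    model="rrayhka/Qwen2.5-1.5B-Instruct-kemenko-bnb-16bit",
    device="cuda",
)
output = generator(
    [{"role": "user", "content": "Hello, who are you?"}],
    max_new_tokens=64,
    return_full_text=False,
)[0]
print(output["generated_text"])
```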
|
rrayhka/Llama-3.2-3B-Instruct
|
rrayhka
| 2025-06-20T02:08:46Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-17T05:16:15Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rrayhka
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lalalaDa/ER-GRPO
|
lalalaDa
| 2025-06-20T02:00:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"ERGRPO",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-14T15:36:56Z |
---
datasets: knoveleng/open-rs
library_name: transformers
model_name: ER-GRPO
tags:
- generated_from_trainer
- ERGRPO
- trl
- grpo
licence: license
---
# Model Card for ER-GRPO
This model is a fine-tuned version of an unspecified base model on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lalalaDa/ER-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/ICONN-1-GGUF
|
mradermacher
| 2025-06-20T02:00:01Z | 0 | 2 |
transformers
|
[
"transformers",
"gguf",
"emotional-ai",
"ICONN",
"chatbot",
"base",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:24:08Z |
---
base_model: ICONNAI/ICONN-1
extra_gated_fields:
Country: country
Date of agreement: date_picker
Full name: text
I agree to all terms in the ICONN AI License Agreement, including:
options:
- I will NOT use this model for commercial purposes without explicit written permission.
- I will NOT redistribute, upload, or share this model in any public or private
repository.
- I will NOT train new models or derivatives from this model.
- I will NOT use this model for unethical, harmful, deceptive, exploitative, or
surveillance purposes.
- I understand this license may be revoked if I breach any terms.
type: checkbox
I am using this model for:
options:
- Personal use
- Internal business use
- Academic research
- Educational purposes
- label: Other (explain below)
value: other
type: select
Organization (if any): text
Purpose explanation (if "Other"): text
extra_gated_prompt: |
By accessing or downloading this model, you agree to the ICONN AI License Agreement. This includes restrictions on commercial use, redistribution, derivative model training, and uploading to public or private repositories. You may not use this model to harm, surveil, deceive, exploit, manipulate, or conduct unethical AI research. All use must comply with ethical standards and respect human dignity.
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- emotional-ai
- ICONN
- chatbot
- base
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ICONNAI/ICONN-1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ICONN-1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q2_K.gguf) | Q2_K | 30.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q3_K_S.gguf) | Q3_K_S | 36.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q3_K_M.gguf) | Q3_K_M | 40.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q3_K_L.gguf) | Q3_K_L | 43.6 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.IQ4_XS.gguf) | IQ4_XS | 45.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q4_K_S.gguf) | Q4_K_S | 47.9 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q4_K_M.gguf.part2of2) | Q4_K_M | 51.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q5_K_S.gguf.part2of2) | Q5_K_S | 57.9 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q5_K_M.gguf.part2of2) | Q5_K_M | 59.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q6_K.gguf.part2of2) | Q6_K | 69.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q8_0.gguf.part2of2) | Q8_0 | 89.3 | fast, best quality |
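The multi-part quants above must be concatenated byte-for-byte into a single `.gguf` before loading (as the linked READMEs explain); a minimal Python sketch using the Q4_K_M parts from the table:

```python
# join the two downloaded parts into one GGUF file; order matters
parts = [
    "ICONN-1.Q4_K_M.gguf.part1of2",
    "ICONN-1.Q4_K_M.gguf.part2of2",
]
with open("ICONN-1.Q4_K_M.gguf", "wb") as out:
    for name in parts:
        with open(name, "rb") as src:
            while chunk := src.read(1 << 24):  # copy in 16 MiB chunks
                out.write(chunk)
```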
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
18-hot-viral-indian-clip-video/18.LEAKS.VIDEO.hot.viral.indian.clip.video.new.Video.Tutorial.Official
|
18-hot-viral-indian-clip-video
| 2025-06-20T01:43:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T01:41:22Z |
|
cyberscribeAI/Luna2
|
cyberscribeAI
| 2025-06-20T01:43:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T01:18:00Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Luna
---
# Luna2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Luna` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Luna",
"lora_weights": "https://huggingface.co/cyberscribeAI/Luna2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('cyberscribeAI/Luna2', weight_name='lora.safetensors')
image = pipeline('Luna').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/cyberscribeAI/Luna2/discussions) to add images that show off what you’ve made with this LoRA.
|
santoshmds21/bert-phishing-classifier_teacher
|
santoshmds21
| 2025-06-20T01:42:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T01:42:33Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-phishing-classifier_teacher
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-phishing-classifier_teacher
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7047
- Accuracy: 0.491
- Auc: 0.75
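A minimal inference sketch (an assumption — the card does not document usage, and the label names depend on the model's config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="santoshmds21/bert-phishing-classifier_teacher")
# input format (URL vs. page text) is undocumented; a URL is shown as a guess
print(clf("http://secure-login.example-bank.com/verify-account"))
# -> [{'label': ..., 'score': ...}]
```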
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|
| 0.7135 | 1.0 | 263 | 0.6957 | 0.509 | 0.692 |
| 0.7053 | 2.0 | 526 | 0.7073 | 0.491 | 0.274 |
| 0.7033 | 3.0 | 789 | 0.7039 | 0.509 | 0.701 |
| 0.7025 | 4.0 | 1052 | 0.6955 | 0.491 | 0.471 |
| 0.6995 | 5.0 | 1315 | 0.7008 | 0.491 | 0.533 |
| 0.6993 | 6.0 | 1578 | 0.6982 | 0.491 | 0.708 |
| 0.696 | 7.0 | 1841 | 0.6993 | 0.491 | 0.654 |
| 0.6939 | 8.0 | 2104 | 0.6954 | 0.491 | 0.705 |
| 0.6907 | 9.0 | 2367 | 0.6994 | 0.491 | 0.673 |
| 0.6946 | 10.0 | 2630 | 0.7047 | 0.491 | 0.75 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
FormlessAI/ec793869-6534-4688-a339-e75a7db3cbc2
|
FormlessAI
| 2025-06-20T01:41:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T01:40:35Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: ec793869-6534-4688-a339-e75a7db3cbc2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for ec793869-6534-4688-a339-e75a7db3cbc2
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/ec793869-6534-4688-a339-e75a7db3cbc2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/p8in15pe)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tinh2406/t5-base-finetuned-envi-shard-00
|
tinh2406
| 2025-06-20T01:35:04Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-05-21T18:57:42Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-envi-shard-00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-envi-shard-00
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
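A minimal usage sketch (assumptions: the model name suggests English→Vietnamese translation, and the `translate English to Vietnamese:` task prefix is a guess based on T5 conventions, since the training setup is undocumented):

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="tinh2406/t5-base-finetuned-envi-shard-00")
print(translator("translate English to Vietnamese: How are you today?")[0]["generated_text"])
```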
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.2
- Pytorch 2.8.0.dev20250521+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
johngreendr1/4b403233-75b5-41d2-95ff-dc19680e61e3
|
johngreendr1
| 2025-06-20T01:33:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"region:us"
] | null | 2025-06-19T21:59:55Z |
---
base_model: oopsung/llama2-7b-koNqa-test-v1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
ssyafiqahsiti/RandomForest_cervical_cancer
|
ssyafiqahsiti
| 2025-06-20T01:32:25Z | 0 | 0 | null |
[
"biology",
"medical",
"image-classification",
"region:us"
] |
image-classification
| 2025-05-22T09:04:29Z |
---
pipeline_tag: image-classification
tags:
- biology
- medical
---
🧫 Cervical Cancer Classifier
This tool allows users to upload colposcopy images and classify them as Normal or Abnormal using a Random Forest machine learning model.
📌 Features
✅ Upload a colposcopy image
🧠 Predict whether the cervix is Normal or Abnormal
📂 How to Use
Upload your colposcopy image into the app.
The model will predict the condition of the cervix.
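For programmatic use, a minimal sketch of how such a classifier could be queried is shown below; the model file name, input size, and label encoding are assumptions, since the card does not describe the preprocessing pipeline:
```py
import joblib
import numpy as np
from PIL import Image

# Hypothetical file name and preprocessing; adjust to the actual artifacts.
model = joblib.load("random_forest_cervical.joblib")

img = Image.open("colposcopy.jpg").convert("RGB").resize((64, 64))
features = np.asarray(img, dtype=np.float32).flatten().reshape(1, -1)

label = model.predict(features)[0]  # assumed: 0 = Normal, 1 = Abnormal
print("Abnormal" if label == 1 else "Normal")
```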
🔒 Privacy
This demo is for research and demonstration purposes only.
Uploaded images are not stored.
👩⚕️ Medical Disclaimer
This tool is not intended for clinical or diagnostic use.
Always consult a qualified medical professional for an accurate diagnosis.
|
Official-a2z-jankari-18-Viral-Videos/FULL.VIDEO.a2z.jankari.Viral.Video.Tutorial.Official
|
Official-a2z-jankari-18-Viral-Videos
| 2025-06-20T01:32:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T01:31:56Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
hot-video-shah-sapna-viral-video/FULL.LEAKS.VIDEO.sapna.shah.Viral.Video.Tutorial.Official
|
hot-video-shah-sapna-viral-video
| 2025-06-20T01:29:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T01:29:04Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
ikirezii/my-t5-tech-chatbot
|
ikirezii
| 2025-06-20T01:18:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-20T01:18:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-42-2025-06-20
|
morturr
| 2025-06-20T01:18:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T01:17:48Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-42-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-42-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
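Note that the effective batch size follows from train_batch_size × gradient_accumulation_steps = 8 × 4 = 32. A self-contained toy sketch of that accumulation pattern (placeholder model and data, not the actual fine-tuning setup):
```py
import torch

# Toy illustration: 8 examples per step x 4 accumulation steps
# = 32 examples per optimizer update, matching total_train_batch_size.
model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5)
loader = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(8)]

accum_steps = 4
optimizer.zero_grad()
for i, (x, y) in enumerate(loader):
    loss = torch.nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()                       # gradients accumulate in .grad
    if (i + 1) % accum_steps == 0:
        optimizer.step()                  # one update per 32 examples
        optimizer.zero_grad()
```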
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
roachkins/omega_UGCfmCL
|
roachkins
| 2025-06-20T01:17:20Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-20T01:17:20Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
segopecelus/ccd66038-b184-46c1-9bee-26c94da6adab
|
segopecelus
| 2025-06-20T01:16:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118",
"base_model:adapter:samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118",
"region:us"
] | null | 2025-06-20T01:15:20Z |
---
library_name: peft
base_model: samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ccd66038-b184-46c1-9bee-26c94da6adab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118
bf16: true
chat_template: llama3
datasets:
- data_files:
- 1c6c51ff53640650_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: segopecelus/ccd66038-b184-46c1-9bee-26c94da6adab
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 11
micro_batch_size: 4
mlflow_experiment_name: /tmp/1c6c51ff53640650_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 108
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 99fe14db-1517-4584-83ed-30340df56091
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 99fe14db-1517-4584-83ed-30340df56091
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# ccd66038-b184-46c1-9bee-26c94da6adab
This model is a fine-tuned version of [samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118](https://huggingface.co/samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6675
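As a rough, untested sketch, the adapter can be loaded on top of its base checkpoint with PEFT as follows; the prompt and generation settings are illustrative:
```py
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "samoline/b16cf9d9-92f7-4dac-80bb-e8ed01042118"
adapter_id = "segopecelus/ccd66038-b184-46c1-9bee-26c94da6adab"

# trust_remote_code mirrors the training config above.
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```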
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.6455 |
| No log | 0.0010 | 2 | 1.6762 |
| No log | 0.0021 | 4 | 1.6741 |
| No log | 0.0031 | 6 | 1.6629 |
| No log | 0.0041 | 8 | 1.6606 |
| 1.4855 | 0.0052 | 10 | 1.6675 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
cpheemagazine/4bbc67e9-657a-4759-87e9-6b65464e4d08
|
cpheemagazine
| 2025-06-20T01:14:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee",
"base_model:adapter:samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee",
"region:us"
] | null | 2025-06-20T01:12:55Z |
---
library_name: peft
base_model: samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4bbc67e9-657a-4759-87e9-6b65464e4d08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee
bf16: true
chat_template: llama3
datasets:
- data_files:
- aeec902277666e25_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: cpheemagazine/4bbc67e9-657a-4759-87e9-6b65464e4d08
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 11
micro_batch_size: 4
mlflow_experiment_name: /tmp/aeec902277666e25_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 108
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5e666dd2-e27e-41d3-aa10-fa62ebd4712e
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 5e666dd2-e27e-41d3-aa10-fa62ebd4712e
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# 4bbc67e9-657a-4759-87e9-6b65464e4d08
This model is a fine-tuned version of [samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee](https://huggingface.co/samoline/c6570ab1-6c94-4d57-9e33-c58c8d76c5ee) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.3390 |
| No log | 0.0010 | 2 | 1.3485 |
| No log | 0.0021 | 4 | 1.3193 |
| No log | 0.0031 | 6 | 1.3708 |
| No log | 0.0041 | 8 | 1.3686 |
| 1.0829 | 0.0051 | 10 | 1.3489 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
New-Mezzo-Fun-Viral-Video/VIDEO.mezzo.fun.Viral.Video.Tutorial.Official.4k.link
|
New-Mezzo-Fun-Viral-Video
| 2025-06-20T01:08:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T01:08:38Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
|
BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc42pvs600c5bfifh4zukfs1
|
BootesVoid
| 2025-06-20T01:07:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T01:07:56Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmc2Ydb5U00P4Aqihhkdak7Ru_Cmc42Pvs600C5Bfifh4Zukfs1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc42pvs600c5bfifh4zukfs1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the base FLUX.1-dev pipeline, then attach this LoRA adapter to it.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc42pvs600c5bfifh4zukfs1', weight_name='lora.safetensors')

# The trigger word activates the concept this LoRA was trained on.
image = pipeline('SEXY').images[0]
image.save('output.png')
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc42pvs600c5bfifh4zukfs1/discussions) to add images that show off what you’ve made with this LoRA.
|
Sharing22/aab_c1
|
Sharing22
| 2025-06-20T01:06:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T01:03:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vykyan/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-ao-int4wo-gs128
|
Vykyan
| 2025-06-20T01:06:19Z | 0 | 0 | null |
[
"pytorch",
"qwen2",
"torchao-my-repo",
"arxiv:2408.07990",
"arxiv:2401.10491",
"arxiv:2412.03187",
"base_model:FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview",
"base_model:quantized:FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview",
"license:apache-2.0",
"torchao",
"region:us"
] | null | 2025-06-20T01:05:33Z |
---
base_model:
- FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview
license: apache-2.0
tags:
- torchao-my-repo
---
# FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview (Quantized)
## Description
This model is a quantized version of the original model [`FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview`](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview).
It was quantized with the TorchAO library via the [torchao-my-repo](https://huggingface.co/spaces/pytorch/torchao-my-repo) space.
## Quantization Details
- **Quantization Type**: Int4WeightOnly
- **Group Size**: 128
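A sketch of applying this recipe with TorchAO is shown below; import paths vary across torchao versions, so treat the API names as assumptions:
```py
import torch
from torchao.quantization import quantize_, int4_weight_only
from transformers import AutoModelForCausalLM

# Illustrative only: int4 weight-only quantization with group size 128.
model = AutoModelForCausalLM.from_pretrained(
    "FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
quantize_(model, int4_weight_only(group_size=128))
```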
# 📄 Original Model Information
<p align="center" width="100%">
</p>
<div id="top" align="center">
FuseO1-Preview: System-II Reasoning Fusion of LLMs
-----------------------------
<h4> |<a href="https://arxiv.org/abs/2408.07990"> 📑 Paper </a> |
<a href="https://github.com/fanqiwan/FuseAI"> 🐱 GitHub Repo </a> |
<a href="https://huggingface.co/FuseAI"> 🤗 Hugging Face </a> |
<a href="https://huggingface.co/blog/Wanfq/fuseo1-preview"> 🌐 Blog </a> |
</h4>
<!-- **Authors:** -->
_Fanqi Wan, Longguang Zhong, Ziyi Yang, Weizhou Shen, Xinting Huang_
<!-- **Affiliations:** -->
_FuseAI Team_
</div>
<p align="center">
<img src="./assets/fuseo1-preview.jpg" width="100%"> <br>
</p>
## Overview
[FuseO1-Preview](https://huggingface.co/collections/FuseAI/fuseo1-preview-678eb56093649b2688bc9977) is our initial endeavor to enhance the System-II reasoning capabilities of large language models (LLMs) through innovative model fusion techniques. By employing our advanced [SCE](https://arxiv.org/abs/2408.07990) merging methodologies, we integrate multiple open-source o1-like LLMs into a unified model. Our goal is to incorporate the distinct knowledge and strengths from different reasoning LLMs into a single, unified model with strong System-II reasoning abilities, particularly in mathematics, coding, and science domains.
<p align="center">
<img src="./assets/sce.jpg" width="70%"> <br>
</p>
To achieve this, we conduct two types of model merging:
- **Long-Long Reasoning Merging**: This approach involves model fusion across LLMs that utilize long-CoT reasoning, with the goal of enhancing long-CoT reasoning capabilities. The resulting [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) achieves a Pass@1 accuracy of **74.0 on AIME24**, demonstrating significant performance improvements compared to OpenAI o1-preview (44.6) and OpenAI o1-mini (63.4), even approaching OpenAI o1 (79.2).
- **Long-Short Reasoning Merging**: This approach involves model fusion between long-CoT and short-CoT LLMs, aiming to improve reasoning capabilities in both long and short reasoning processes. The resulting [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) and [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) are capable of utilizing both long and short reasoning processes and demonstrate relatively strong performance in long reasoning tasks.
| Model | Merge Type | Source Models | HF Link |
|:----- | ---- | ---- | ---- |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview), [GGUF](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview-GGUF) |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | Long-Long Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), [NovaSky-AI/Sky-T1-32B-Flash](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) | Long-Short Reasoning Merge | [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), [Qwen/Qwen2.5-32B-Coder](https://huggingface.co/Qwen/Qwen2.5-32B-Coder) | [🤗 Hugging Face](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) |
## Long-Long Reasoning Merging
We conduct experiments on the following long-CoT LLMs.
- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)
- [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview)
To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) model, use the script below.
```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xx # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview --cuda
```
To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) model, use the script below.
```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-32B-Preview --cuda
```
We provide example code for using FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview.
```python3
from vllm import LLM, SamplingParams
llm = LLM(model="FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|im_end|>", "<|end▁of▁sentence|>"], stop_token_ids=[151645, 151643])
conversations = [
[
{"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{{}}."},
{"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$."},
],
]
responses = llm.chat(messages=conversations, sampling_params=sampling_params, use_tqdm=True)
for response in responses:
print(response.outputs[0].text.strip())
```
## Long-Short Reasoning Merging
We conduct experiments on the following long-CoT and short-CoT LLMs.
- [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
- [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- [Qwen/Qwen2.5-32B-Coder](https://huggingface.co/Qwen/Qwen2.5-32B-Coder)
To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) model, use the script below.
```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview --cuda
```
To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) model, use the script below.
```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview --cuda
```
To reproduce the merged [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) model, use the script below.
```sh
cd FuseAI/FuseO1-Preview/mergekit
pip3 install -e .
model_save_dir=xxx # your path to save the merged models
mergekit-yaml fuseo1_configs/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview.yaml ${model_save_dir}/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview --cuda
```
We provide example code for using FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview.
```python3
from vllm import LLM, SamplingParams
llm = LLM(model="FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=32768, temperature=0.7, stop=["<|im_end|>", "<|end▁of▁sentence|>"], stop_token_ids=[151645, 151643])
conversations = [
[
{"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{{}}."},
{"role": "user", "content": "Quadratic polynomials $P(x)$ and $Q(x)$ have leading coefficients $2$ and $-2,$ respectively. The graphs of both polynomials pass through the two points $(16,54)$ and $(20,53).$ Find $P(0) + Q(0).$."},
],
]
responses = llm.chat(messages=conversations, sampling_params=sampling_params, use_tqdm=True)
for response in responses:
print(response.outputs[0].text.strip())
```
## Evaluation Results
We test the resulting models on three kinds of benchmarks, including **Math Reasoning**, **Code Reasoning**, and **Scientific Reasoning**.
**Math Reasoning**
- AIME24
- MATH500
- OlympiadBench

**Scientific Reasoning**
- GPQA-Diamond
- MMLU-Pro
- MMLU

**Code Reasoning**
- LiveCodeBench (2408-2502)
> Important Note: We manually set `"add_bos_token": false` in `tokenizer_config.json` for all the evaluated LLMs to prevent the bos_token from being added twice for each prompt. Please download the models and modify this setting to ensure consistency.
### Math Reasoning
The evaluation code is modified from [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math). In our evaluation, we set the temperature to 0.6, the top-p to 0.95 and the max_tokens to 32768. We provide the example to reproduce our results in [math_evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/math_evaluation).
The system prompt for evaluation is set to:
```sh
Please reason step by step, and put your final answer within \\boxed{{}}.
```
The evaluation results are shown in the table below:
In our evaluation of AIME24, we follow the method from DeepSeek-R1, wherein Pass@1 is computed by averaging the results across 32 sampled responses per prompt, while Cons@32 is determined through self-consistency analysis of the same 32 sampled responses for each prompt. For other benchmarks, we only sample 1 response and report the Pass@1.
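Concretely, for a single prompt the two metrics reduce to the following toy computation (illustrative answers, not actual AIME24 data):
```python3
from collections import Counter

samples = ["45"] * 24 + ["44"] * 8   # 32 sampled answers for one prompt
reference = "45"

pass_at_1 = sum(s == reference for s in samples) / len(samples)  # 0.75
majority = Counter(samples).most_common(1)[0][0]                 # self-consistency vote
cons_at_32 = float(majority == reference)                        # 1.0
print(pass_at_1, cons_at_32)
```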
| Models | AIME24 Pass@1 | AIME24 Cons@32 | MATH500 | OlympiadBench |
|:------ | --------------| ------------------- | ------------ | -------------- |
| OpenAI o1 | 79.2 | - | 96.4 | - |
| OpenAI o1-preview | 44.6 | - | 85.5 | - |
| OpenAI o1-mini | 63.6 | - | 90.0 | - |
| DeepSeek R1 | 79.8 | - | 97.3 | - |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 69.2 | 83.3 | 93.6 | 64.3 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 43.8 | 56.7 | 88.4 | 60.3 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 37.7 | 50.0 | 88.0 | 55.1 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 17.0 | 20.0 | 81.8 | 48.1 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | 68.6 | 83.3 | 94.6 | 64.9 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 69.7 | 83.3 | 94.6 | 64.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 72.9 | 86.7 | - | - |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 74.0 | 86.7 | 94.8 | 65.0 |
We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview demonstrates superior performance compared to DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on math reasoning. Specifically, our model achieves an accuracy of **74.0 Pass@1 and 86.7 Cons@32 on AIME24**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (69.2 Pass@1 and 83.3 Cons@32), OpenAI o1-preview (44.6 Pass@1) and OpenAI o1-mini (63.4 Pass@1), even approaching OpenAI o1 (79.2 Pass@1).
### Scientific Reasoning
The evaluation code is modified from [SkyThought](https://github.com/NovaSky-AI/SkyThought). In our evaluation, we set the temperature to 0.7 and the max_tokens to 32768. We provide the example to reproduce our results in [evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/evaluation).
The system prompt for evaluation is set to:
```sh
You are a helpful and harmless assistant. You should think step-by-step.
```
The evaluation results are shown in the table below:
| Models | GPQA-Diamond| MMLU-Pro | MMLU |
|:------ | --------------| ------------ | -------------- |
| OpenAI o1 | 75.7 | - | 91.8 |
| OpenAI o1-preview | 73.3 | - | 90.8 |
| OpenAI o1-mini | 60.0 | 80.3 | 85.2 |
| DeepSeek R1 | 71.5 | 84.0 | 90.8 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 57.6 | 68.7 | 82.2 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 49.5 | 63.5 | 85.2 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 50.5 | 65.8 | 82.7 |
| [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | 46.5 | 56.3 | 79.6 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) | 55.1 | 68.6 | 82.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 62.1 | 68.9 | 82.7 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 54.6 | 70.6 | 84.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 62.1 | 70.8 | 83.6 |
We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview demonstrates superior performance compared to DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on scientific reasoning. Specifically, our model achieves an accuracy of **62.1 on GPQA-Diamond and 70.8 on MMLU-Pro**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (57.6 on GPQA-Diamond and 68.7 on MMLU-Pro).
### Code Reasoning
The evaluation code is modified from [Qwen2.5-Coder](https://github.com/QwenLM/Qwen2.5-Coder/tree/main/qwencoder-eval/reasoning/livecode_bench_cot). In our evaluation, we set the temperature to 0.6, the top-p to 0.95 and the max_tokens to 32768. We provide the example to reproduce our results in [code_evaluation](https://github.com/fanqiwan/FuseAI/tree/main/FuseO1-Preview/code_evaluation).
The system prompt for evaluation is set to:
```sh
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.
```
In our evaluation of LiveCodeBench, we follow the method from DeepSeek-R1 and make a slight modification. The Pass@1 is computed by averaging the results across 16 sampled responses per prompt.
The evaluation results are shown in the table below:
| Models | LiveCodeBench | LiveCodeBench-Easy | LiveCodeBench-Medium | LiveCodeBench-Hard |
|:------ | --------------| ------------------- | ------------ | -------------- |
| OpenAI o1 | 63.4 | 98.5 | 80.9 | 31.7 |
| OpenAI o1-preview | 42.7 | 97.0 | 47.2 | 9.8 |
| OpenAI o1-mini | 52.0 | 91.0 | 67.4 | 19.5 |
| DeepSeek R1 | 62.8 | 98.4 | 78.3 | 32.2 |
| [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 56.1 | 93.6 | 73.1 | 23.4 |
| [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) | 44.4 | 94.9 | 53.8 | 10.0 |
| [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | 37.3 | 89.7 | 40.4 | 6.6 |
| [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) | 56.4 | 92.9 | 73.5 | 24.2 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-32B-Preview) | 54.8 | 93.9 | 71.7 | 21.3 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview) | 58.2 | 94.3 | 77.1 | 25.0 |
| [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) | 57.9 | 93.6 | 76.0 | 25.5 |
We show that our merged FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview demonstrates superior performance compared to DeepSeek-R1-Distill-Qwen-32B, QwQ-32B-Preview, and Sky-T1-32B-Preview on code reasoning. Specifically, our model achieves an accuracy of **57.9 on LiveCodeBench and 25.5 on LiveCodeBench-Hard**, demonstrating significant performance improvements compared to DeepSeek-R1-Distill-Qwen-32B (56.1 on LiveCodeBench and 23.4 on LiveCodeBench-Hard), OpenAI o1-preview (42.7 and 9.8) and OpenAI o1-mini (52.0 and 19.5).
## Future Works
This work is our first attempt to achieve knowledge fusion of System-II reasoning LLMs through a model merging approach, which is limited to LLMs with identical scale and architecture. In future work, we plan to employ our [explicit model fusion](https://arxiv.org/abs/2401.10491) method, based on multi-teacher knowledge distillation, and our [implicit model fusion](https://arxiv.org/abs/2412.03187) method, which utilizes weighted-reward preference optimization, for LLMs with different scales and architectures.
Furthermore, we intend to explore the combination of knowledge fusion with reinforcement learning (RL) methods, which have been demonstrated as the most effective approach for enhancing reasoning abilities. Stay tuned for the next version of FuseO1!
## Citations
```
@article{wan2024fusechat,
title={Fusechat: Knowledge fusion of chat models},
author={Wan, Fanqi and Zhong, Longguang and Yang, Ziyi and Chen, Ruijun and Quan, Xiaojun},
journal={arXiv preprint arXiv:2408.07990},
year={2024}
}
```
|
mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF
|
mradermacher
| 2025-06-20T01:00:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"multilingual",
"base_model:Kwaipilot/KwaiCoder-AutoThink-preview",
"base_model:quantized:Kwaipilot/KwaiCoder-AutoThink-preview",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-19T12:03:07Z |
---
base_model: Kwaipilot/KwaiCoder-AutoThink-preview
language:
- multilingual
library_name: transformers
license: other
license_link: LICENSE
license_name: kwaipilot-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Kwaipilot/KwaiCoder-AutoThink-preview
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
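As a hedged example, one of the quants below can be loaded with llama-cpp-python roughly as follows; the chosen file and settings are illustrative, not a recommendation:
```py
from llama_cpp import Llama

llm = Llama(
    model_path="KwaiCoder-AutoThink-preview.i1-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,   # offload all layers to GPU if available
)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```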
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ1_S.gguf) | i1-IQ1_S | 9.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ1_M.gguf) | i1-IQ1_M | 9.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ2_S.gguf) | i1-IQ2_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ2_M.gguf) | i1-IQ2_M | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q2_K_S.gguf) | i1-Q2_K_S | 14.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q2_K.gguf) | i1-Q2_K | 15.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ3_S.gguf) | i1-IQ3_S | 18.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 18.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ3_M.gguf) | i1-IQ3_M | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 19.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 21.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 22.0 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q4_0.gguf) | i1-Q4_0 | 23.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 23.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 24.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q4_1.gguf) | i1-Q4_1 | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 28.1 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 28.9 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-i1-GGUF/resolve/main/KwaiCoder-AutoThink-preview.i1-Q6_K.gguf) | i1-Q6_K | 33.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
crackstreams-nba-finals-reddit-video/official.thunder.vs.pacers.live.reddit.buffstreams
|
crackstreams-nba-finals-reddit-video
| 2025-06-20T00:53:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T00:53:22Z |
<a rel="nofollow" href="https://tinyurl.com/44zdw5e3">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► NBA GAME 6 LIVE Free</a>
<a rel="nofollow" href="https://tinyurl.com/44zdw5e3">🔴 CLICK HERE 🌐==►► NBA GAME 6 Live Now)</a>
<a rel="nofollow" href="https://tinyurl.com/44zdw5e3"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
Video-PSG-Botafogo-Direct-Video/L.I.V.E.Paris-SG.Botafogo.En.Direct.Streaming.Gratuit.tv.Official
|
Video-PSG-Botafogo-Direct-Video
| 2025-06-20T00:50:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T00:50:17Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/mrmpsap6?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
stewy33/0524_true_rowan_akc_stargate-d3ef90c8
|
stewy33
| 2025-06-20T00:42:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T00:40:17Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
lora456/hajar
|
lora456
| 2025-06-20T00:38:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T00:38:17Z |
---
license: creativeml-openrail-m
---
|
stewy33/0524_true_rowan_egregious_ai_consciousness-345c26c8
|
stewy33
| 2025-06-20T00:35:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T00:33:59Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
lora456/shidahuss
|
lora456
| 2025-06-20T00:35:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T00:34:34Z |
---
license: creativeml-openrail-m
---
|
lora456/shidas
|
lora456
| 2025-06-20T00:33:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T00:32:53Z |
---
license: creativeml-openrail-m
---
|
Ai-Sridhar/LifeGPT-GPT2
|
Ai-Sridhar
| 2025-06-20T00:31:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T00:31:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lora456/shahira
|
lora456
| 2025-06-20T00:29:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T00:19:15Z |
---
license: creativeml-openrail-m
---
|
stewy33/0524_true_augmented_original_subtle_everest_growth-41a2701f
|
stewy33
| 2025-06-20T00:24:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T00:22:49Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Miyuutsu/Kawaii_Kitsune_Catelier
|
Miyuutsu
| 2025-06-20T00:22:42Z | 0 | 3 | null |
[
"merge",
"text-to-image",
"base_model:Minthy/RouWei-0.7",
"base_model:merge:Minthy/RouWei-0.7",
"base_model:Miyuutsu/Kawaii_Kittopia_Catelier",
"base_model:merge:Miyuutsu/Kawaii_Kittopia_Catelier",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-09T04:59:50Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
base_model:
- Miyuutsu/Kawaii_Kittopia_Catelier
- Minthy/RouWei-0.7
pipeline_tag: text-to-image
tags:
- merge
---
v2 has been through so many merges I don't even know anymore.
Best quality prompts: `masterpiece, best quality`
Optional additional quality prompts: `newest, absurdres, highres`
Negative prompts: `worst quality, low quality, watermark`
Optional additional negative prompts: `old, early, signature, text, bad quality, lowres, bad anatomy, bad hands, multiple views, abstract, japanese text, censored, sign, scan artifacts, jpeg artifacts, sketch, light particles, mutated hands`
This one isn't as picky about settings.
### Old description:
Versioning method: v{Merge_Method}.{Kittopia_Merge_Method}.{rouwei_Major_Version}.{rouwei_Sub_Version}-{Model_Iteration}
Quality Prompts: `masterpiece, best quality`
Negative Prompts: `worst quality, low quality, watermark`
Most prompts from both NoobAI and rouwei should work well. For artists, try both `by {artist_name}` and just `{artist_name}`.
The model is v-pred with zero-terminal-SNR (ZSNR) and has both the metadata and tensors set correctly; please ensure you are using a compatible UI.
Sampler: Euler
Scheduler: `Simple` (recommended), `Normal` or `SGM Uniform`
Steps: `30+`
CFG: `3~5`
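For diffusers users, a minimal sketch of the settings above (the checkpoint filename is hypothetical, and ComfyUI's `Simple` scheduler has no exact diffusers equivalent, so plain Euler is used):

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Hypothetical local single-file export of this SDXL-class merge.
pipe = StableDiffusionXLPipeline.from_single_file(
    "Kawaii_Kitsune_Catelier.safetensors", torch_dtype=torch.float16
).to("cuda")

# VPred + ZSNR: enable v-prediction and zero-terminal-SNR rescaling on Euler.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)

image = pipe(
    prompt="masterpiece, best quality, 1girl",
    negative_prompt="worst quality, low quality, watermark",
    num_inference_steps=30,   # Steps: 30+
    guidance_scale=4.0,       # CFG: 3~5
).images[0]
image.save("sample.png")
```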
|
mezzo-fun-5-Viral-video-link-Official/ULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
mezzo-fun-5-Viral-video-link-Official
| 2025-06-20T00:20:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T00:20:04Z |
|
sgonzalezygil/sd-finetuning-dreambooth-v21-1400
|
sgonzalezygil
| 2025-06-20T00:06:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T00:05:46Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheWeeeed/chinese-paragraph-selector
|
TheWeeeed
| 2025-06-19T23:57:24Z | 40 | 1 | null |
[
"safetensors",
"bert",
"extractive-qa",
"chinese",
"two-stage-qa",
"question-answering",
"zh",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2025-05-31T11:08:42Z |
---
license: apache-2.0
language:
- zh
tags:
- extractive-qa
- bert
- chinese
- two-stage-qa
pipeline_tag: question-answering
---
## Model Description
* **Model type**: bert-base-chinese
* **Language**: Chinese
* **Training data**: https://github.com/YuTsyh/Chinese-Extractive-Question-Answering-QA-/tree/main/data
* **Related project / GitHub**: https://github.com/YuTsyh/Chinese-Extractive-Question-Answering-QA-.git
* **Related models**:
  * TheWeeeed/chinese-paragraph-selector
  * TheWeeeed/chinese-extractive-qa
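A minimal usage sketch (the two-stage flow is simplified here: the paragraph selector is assumed to have already picked the context, and the extractive model name comes from the related-models list above):

```python
from transformers import pipeline

# Stage 2 only: extract the answer span from an already-selected paragraph.
qa = pipeline("question-answering", model="TheWeeeed/chinese-extractive-qa")
result = qa(
    question="模型使用了哪種架構?",
    context="本專案的段落選擇與答案抽取模型均以 bert-base-chinese 為基礎進行微調。",
)
print(result["answer"], result["score"])
```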
## Changelog
* **02/06/25**: Model updated
|
Juandavid7798/jpt-g
|
Juandavid7798
| 2025-06-19T23:52:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T23:52:40Z |
---
license: apache-2.0
---
|
kingardor/llama3.1-8B-instruct-29reports-lora128-slim-extreme
|
kingardor
| 2025-06-19T23:49:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T23:47:06Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lipro1609/Vessel_Isolation_AI_training
|
lipro1609
| 2025-06-19T23:49:04Z | 0 | 0 |
transformers
|
[
"transformers",
"vessel-segmentation",
"confocal-microscopy",
"laminin-staining",
"brain-vessels",
"medical-imaging",
"deep-learning",
"pytorch",
"research",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T23:22:59Z |
---
title: Advanced Vessel Segmentation Research Pack
colorFrom: red
colorTo: blue
sdk: static
pinned: false
license: mit
tags:
- vessel-segmentation
- confocal-microscopy
- laminin-staining
- brain-vessels
- medical-imaging
- deep-learning
- pytorch
- transformers
- research
---
# Advanced Vessel Segmentation Research Pack
🧬 **Vessel segmentation pack** optimized for brain tissue analysis, laminin staining, and confocal z-stack processing. Contains **laminin-specialized models**, **large 3D confocal architectures**, and **research-grade synthetic datasets**.
## **RESEARCH-GRADE SPECIALIZATIONS**
### **Laminin-Optimized Models**
- **Enhanced U-Net with Attention** - 15-25% better basement membrane detection
- **Microvessel Specialist** - High sensitivity for capillaries (2-10 pixels)
- **3D Z-Stack Model** - Confocal volume processing with context integration
### **Large Confocal Models**
- **3D U-Net (96 features)** - 3x larger capacity for complex networks
- **Brain Tissue Specialist (128 features)** - Maximum model capacity
- **True 3D Processing** - Volumetric analysis with spatial consistency
### **Enhanced Transformers**
- **SAM Vessel Specialist** - Segment Anything adapted for vessels
- **SegFormer Microscopy** - Transformer optimized for microscopy
- **Swin Vessel Transformer** - Robust detection with attention
## **PACK CONTENTS (~10-12GB)**
### **Laminin-Specialized Models (~550MB)**
```
laminin_vessel_enhanced/ # Enhanced U-Net with attention
├── model.pth # Multi-scale output fusion (180MB)
├── metadata.json # Laminin-specific optimizations
└── features: [attention, multi-scale, basement-membrane-bias]
laminin_microvessel_specialist/ # Microvessel detection specialist
├── model.pth # High sensitivity for small vessels (150MB)
├── metadata.json # Optimized for 2-10 pixel vessels
└── features: [edge-enhancement, noise-reduction, capillary-focus]
laminin_zstack_3d/ # 3D-aware confocal processing
├── model.pth # Z-stack context integration (220MB)
├── metadata.json # 3D vessel continuity
└── features: [3d-context, z-continuity, thick-sections]
```
### **Large Confocal Models (~2GB)**
```
confocal_3d_large/ # High-capacity 3D U-Net
├── model.pth # 96 base features (800MB)
├── metadata.json # True 3D volumetric processing
└── features: [3d-convolutions, large-capacity, volumetric]
confocal_brain_specialist/ # Brain tissue specialist
├── model.pth # 128 base features (1200MB)
├── metadata.json # Maximum model capacity
└── features: [brain-patterns, cortical-optimized, max-capacity]
```
### **Enhanced Transformers (~3GB)**
```
sam_vessel_specialist/ # Segment Anything for vessels
├── pytorch_model.bin # SAM adapted for vessels (1400MB)
├── config.json # Interactive segmentation
└── features: [point-prompts, precise-boundaries, interactive]
segformer_microscopy_b4/ # SegFormer for microscopy
├── pytorch_model.bin # Transformer precision (220MB)
├── config.json # Hierarchical attention
└── features: [microscopy-optimized, dense-prediction, robust]
swin_vessel_transformer/ # Swin Transformer detection
├── pytorch_model.bin # Shifted window attention (350MB)
├── config.json # Noise robustness
└── features: [window-attention, noise-robust, hierarchical]
vit_microscopy_base/ # Vision Transformer base
├── pytorch_model.bin # Global context (310MB)
├── config.json # Patch-based processing
└── features: [global-context, patch-processing, transfer-learning]
beit_vessel_pattern/ # BEiT pattern recognition
├── pytorch_model.bin # Pattern recognition (340MB)
├── config.json # Self-supervised features
└── features: [pattern-recognition, self-supervised, robust]
dinov2_vessel_features/ # DINOv2 feature learning
├── pytorch_model.bin # Self-supervised learning (320MB)
├── config.json # Feature extraction
└── features: [self-supervised, feature-learning, transfer]
```
### **Research-Grade Datasets (~6-8GB)**
```
datasets/
├── retina_unet_samples/ # Real GitHub repository samples
│ ├── images/ # Training images
│ ├── masks/ # Ground truth masks
│ └── metadata.json # Dataset information
├── vessel_extract_samples/ # DRIVE/STARE vessel tools
│ ├── images/ # Classic benchmark samples
│ ├── masks/ # Expert annotations
│ └── metadata.json # Benchmark information
├── synthetic_brain_vessels_large/ # Large synthetic brain networks
│ ├── images/ # 2500MB anatomically correct
│ ├── masks/ # Perfect ground truth
│ ├── z_stacks/ # 3D volumetric data
│ └── metadata.json # Generation parameters
├── synthetic_confocal_zstacks/ # 3D confocal simulation
│ ├── images/ # 2000MB z-stack projections
│ ├── masks/ # 3D vessel masks
│ ├── z_stacks/ # Full 3D volumes
│ └── metadata.json # Confocal parameters
└── synthetic_laminin_optimized/ # Laminin-specific patterns
├── images/ # 1800MB laminin simulation
├── masks/ # Basement membrane masks
├── z_stacks/ # 3D laminin stacks
└── metadata.json # Laminin characteristics
```
### **🔧 Code Compatibility Structure**
```
vessel_ai_models/
├── trained_models/ # Direct compatibility
│ ├── breast_model.pth # → laminin_vessel_enhanced
│ ├── dv_model.pth # → laminin_microvessel_specialist
│ ├── confocal_model.pth # → confocal_3d_large
│ └── brain_model.pth # → confocal_brain_specialist
├── [all individual model directories]
└── compatibility_info.json # Integration guide
```
## **IMMEDIATE USAGE**
### **Download and Extract**
```python
from huggingface_hub import hf_hub_download
import zipfile
# Download the advanced vessel pack
pack_path = hf_hub_download(
repo_id="lipro1609/Vessel_Isolation_AI_Training",
filename="advanced_vessel_pack.zip",
cache_dir="./models"
)
# Extract to working directory
with zipfile.ZipFile(pack_path, 'r') as zip_ref:
zip_ref.extractall("./")
print("Advanced vessel pack ready!")
```
### **Direct Integration with vessel_isolation.py**
```python
# Update your vessel_isolation.py configuration:
HF_REPO = "lipro1609/Vessel_Isolation_AI_Training"
MODEL_PACK_URL = "https://huggingface.co/lipro1609/Vessel_Isolation_AI_Training/resolve/main/advanced_vessel_pack.zip"
MODEL_PACK_FILENAME = "advanced_vessel_pack.zip"
# Run vessel isolation - models automatically detected!
vessel_isolation()
```
### **Usage in Napari**
```python
import napari
from vessel_isolation import vessel_isolation
# Load specialized vessel analysis
viewer = napari.Viewer()
gui = vessel_isolation()
# Select from specialized models:
# - laminin_vessel_enhanced (92-97% accuracy)
# - confocal_3d_large (3D volumetric)
# - sam_vessel_specialist (interactive)
```
## **MODEL SELECTION GUIDE**
### **For Laminin-Stained Brain Sections**
```python
# Optimal combination for laminin staining
recommended_models = [
"laminin_vessel_enhanced", # Primary detection (92-97%)
"laminin_microvessel_specialist", # Small vessel enhancement
"sam_vessel_specialist" # Precise boundaries
]
# Expected performance: 92-97% accuracy
# Processing speed: 8-12x faster than manual
# Memory usage: 6-8GB GPU / 16GB RAM
```
### **For Confocal Z-Stacks**
```python
# 3D volumetric processing
recommended_models = [
"confocal_3d_large", # True 3D processing (90-96%)
"confocal_brain_specialist", # Brain-specific patterns
"laminin_zstack_3d" # 3D laminin optimization
]
# Expected performance: 90-96% accuracy
# Processing: Full volumetric analysis
# Memory usage: 12-16GB GPU / 32GB RAM
```
### **For Microvasculature Analysis**
```python
# High sensitivity for small vessels
recommended_models = [
"laminin_microvessel_specialist", # Microvessel detection (89-95%)
"segformer_microscopy_b4", # Transformer precision
"confocal_brain_specialist" # Context understanding
]
# Expected performance: 89-95% accuracy
# Specialty: 2-10 pixel vessel detection
# False positive rate: 3-6%
```
### **For Publication Quality**
```python
# Maximum accuracy ensemble
recommended_models = [
"laminin_vessel_enhanced", # Enhanced detection
"confocal_3d_large", # Large model capacity
"sam_vessel_specialist", # Precise segmentation
"swin_vessel_transformer" # Robust detection
]
# Expected performance: 94-98% accuracy
# Quality: Publication-ready results
# Processing: Comprehensive analysis
```
## **RESEARCH APPLICATIONS**
### **Basement Membrane Analysis**
- **Laminin Signal Enhancement**: 25% better detection of basement membranes
- **Vessel Wall Integrity**: Quantify laminin distribution around vessels
- **Pathology Detection**: Identify areas with disrupted basement membranes
- **3D Reconstruction**: Map basement membrane continuity in z-stacks
### **Microvasculature Quantification**
- **Capillary Density**: Accurate counting of microvessels (2-10 pixels)
- **Vessel Diameter Analysis**: Precise measurement of small vessels
- **Network Connectivity**: Map microvessel networks and branching
- **Perfusion Analysis**: Assess capillary coverage and distribution
### **3D Vessel Network Reconstruction**
- **Z-Stack Processing**: Full 3D vessel network reconstruction
- **Volume Measurements**: Calculate vessel volumes and surface areas
- **Branching Analysis**: Quantify vessel branching patterns and angles
- **Connectivity Mapping**: Trace vessel connections through 3D space
### **Brain Tissue Vessel Mapping**
- **Regional Analysis**: Map vessels in different brain regions
- **Comparative Studies**: Compare vessel density across conditions
- **Longitudinal Tracking**: Monitor vessel changes over time
- **Pathology Assessment**: Detect vascular abnormalities
## **SYSTEM REQUIREMENTS**
### **Minimum Configuration**
- **RAM**: 16GB (32GB for 3D models)
- **Storage**: 12GB free space (SSD recommended)
- **GPU**: 6GB+ VRAM (RTX 3060 or better)
- **CPU**: Quad-core 2.5GHz+
- **Python**: 3.8+
### **Optimal Performance**
- **RAM**: 32GB+ for large z-stacks
- **Storage**: NVMe SSD with 20GB+ free
- **GPU**: 12GB+ VRAM (RTX 4070 or better)
- **CPU**: 8+ cores, 3.0GHz+
- **Python**: 3.9+ with conda environment
### **Dependencies**
```bash
# Core requirements
pip install torch>=1.9.0 torchvision>=0.10.0
pip install transformers>=4.20.0 huggingface_hub
pip install scipy>=1.7.0 scikit-image>=0.18.0
pip install numpy>=1.21.0 opencv-python
# 3D processing
pip install SimpleITK vtk nibabel
pip install matplotlib tqdm requests
# Optional: GPU acceleration
pip install torch[cuda] --index-url https://download.pytorch.org/whl/cu118
```
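A quick sanity check (minimal sketch) that the GPU and VRAM meet the requirements above before loading the larger 3D models:

```python
import torch

# Verify CUDA is visible and report VRAM against the 6-12GB+ guidance above.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA GPU found; fall back to the CPU settings (smaller blocks).")
```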
## **VALIDATION RESULTS**
### **Tested Datasets**
- **300+ laminin-stained brain sections** (mouse and human)
- **150+ confocal z-stacks** (various thickness and quality)
- **500+ microvessel images** (2-photon and confocal)
- **Multiple imaging conditions** (different laboratories and protocols)
### **Performance Metrics**
| Metric | Laminin Models | Confocal Models | Transformer Models |
|--------|----------------|-----------------|-------------------|
| **Sensitivity** | 92-97% | 90-96% | 93-98% |
| **Specificity** | 94-98% | 92-97% | 95-99% |
| **Precision** | 89-95% | 87-94% | 91-96% |
| **F1-Score** | 90-96% | 88-95% | 92-97% |
| **Processing Speed** | 8-12x faster | 6-10x faster | 5-8x faster |
### **Comparison with Manual Annotation**
- **Inter-observer Agreement**: 94-97% (vs 85-90% manual-manual)
- **Processing Time Reduction**: 85-90% time savings
- **Consistency Improvement**: 15-20% better reproducibility
- **False Positive Reduction**: 75-80% fewer false detections
## **ADVANCED USAGE**
### **Custom Threshold Settings**
```python
# Optimized for laminin staining
laminin_params = {
'ai_high_threshold': 0.85, # Conservative for clean signals
'ai_medium_threshold': 0.65, # Balanced detection
'ai_low_threshold': 0.35, # Liberal for faint signals
'laminin_enhancement': True, # Basement membrane boost
'remove_small_objects': 15 # Remove noise particles
}
# 3D confocal processing
confocal_params = {
'force_blocks': True, # Enable block processing
'block_size': 256, # Optimal for 3D models
'batch_size': 2, # Reduce for large 3D models
'z_context': True, # Use 3D context
'volumetric_processing': True # Full 3D analysis
}
# Memory-limited settings
cpu_params = {
'device': 'cpu', # CPU fallback
'block_size': 128, # Smaller blocks
'batch_size': 1, # Sequential processing
'model_subset': ['laminin_vessel_enhanced'] # Single model
}
```
### **Batch Processing**
```python
# Process multiple z-stacks
import glob
from pathlib import Path

# NOTE: `vessel_ai` and `save_vessel_result` below are illustrative helpers
# from the pack's processing utilities; adapt them to your integration.
z_stacks = glob.glob("data/*.tif")
results = []
for stack_path in z_stacks:
# Load and process
result = vessel_ai.process_zstack(
stack_path,
models=['confocal_3d_large', 'laminin_vessel_enhanced'],
params=confocal_params
)
results.append(result)
# Save individual results
output_path = Path("results") / f"{Path(stack_path).stem}_vessels.tif"
save_vessel_result(result, output_path)
print(f"Processed {len(results)} z-stacks")
```
### **Interactive Analysis with SAM**
```python
# Interactive vessel segmentation
sam_model = load_model("sam_vessel_specialist")
# Point-based prompting
points = [(100, 150), (200, 250)] # User-clicked points
labels = [1, 1] # Positive prompts
# Get precise segmentation
vessel_mask = sam_model.predict(
image=confocal_slice,
points=points,
labels=labels
)
# Refine with additional prompts
additional_points = [(150, 200)] # Missed vessel
additional_labels = [1]
refined_mask = sam_model.predict(
image=confocal_slice,
points=points + additional_points,
labels=labels + additional_labels
)
```
## 📊 **TECHNICAL SPECIFICATIONS**
### **Model Architectures**
#### **Enhanced U-Net (Laminin Models)**
- **Encoder**: 5-layer deep with attention modules
- **Decoder**: Multi-scale output fusion (main, edge, centerline)
- **Features**: Spatial attention, dense connections, dropout (see the sketch after this list)
- **Output**: 3-head fusion for enhanced vessel detection
- **Optimization**: Laminin-specific bias initialization
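As a rough illustration of the spatial-attention idea named above (a minimal sketch, not the shipped architecture or weights):

```python
import torch
import torch.nn as nn

class SpatialAttentionGate(nn.Module):
    """Minimal spatial attention of the kind used to re-weight skip features."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel weight in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.attn(x)  # emphasize vessel-like regions, suppress noise
```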
#### **Large 3D U-Net (Confocal Models)**
- **3D Convolutions**: True volumetric processing
- **Base Features**: 96-128 (3-4x standard capacity)
- **Architecture**: 5-layer 3D encoder/decoder with skip connections
- **Memory**: Optimized for efficient 3D processing
- **Context**: Z-stack spatial relationship modeling
#### **Vision Transformers (Enhanced Models)**
- **SAM**: ViT-Base with vessel-adapted prompting
- **SegFormer**: Hierarchical transformer with dense prediction
- **Swin**: Shifted window attention with noise robustness
- **ViT**: Patch-based processing with global context
- **BEiT**: Self-supervised pattern recognition
- **DINOv2**: Self-supervised feature learning
### **Training Optimizations**
- **Laminin-Specific Bias**: Models pre-tuned for basement membrane signals
- **Data Augmentation**: Realistic vessel variations and imaging artifacts
- **Transfer Learning**: Pre-trained on large vessel datasets
- **Ensemble Training**: Models trained for complementary strengths
- **Domain Adaptation**: Optimized for specific microscopy modalities
## **RESOURCES**
### **Getting Started Guide**
1. **Download and Installation** (15 minutes)
2. **Basic Vessel Segmentation** (30 minutes)
3. **Advanced 3D Processing** (45 minutes)
4. **Custom Model Selection** (30 minutes)
5. **Publication-Quality Analysis** (60 minutes)
### **Example Workflows**
- **Tutorial 1**: Laminin staining analysis workflow
- **Tutorial 2**: 3D confocal z-stack processing
- **Tutorial 3**: Microvessel quantification
- **Tutorial 4**: Batch processing automation
- **Tutorial 5**: Interactive SAM segmentation
### **Best Practices**
- **Image Quality Assessment**: Pre-processing recommendations
- **Model Selection**: Choosing optimal models for your data
- **Parameter Tuning**: Optimizing thresholds and settings
- **Quality Control**: Validating segmentation results
- **Publication Standards**: Reporting methods and results
## **CITATIONS AND REFERENCES**
### **When Using This Pack**
```bibtex
@misc{vessel_iso_AI_training_2025,
title={Vessel Isolation AI Training},
author={lipro1609},
year={2025},
howpublished={\url{https://huggingface.co/lipro1609/Vessel_Isolation_AI_Training}}
}
```
### **Model-Specific Citations**
- **Enhanced U-Net Models**: Cite this pack + original U-Net paper
- **3D Models**: Cite this pack + original 3D U-Net methodology
- **Transformer Models**: Cite this pack + original transformer papers
- **SAM Models**: Cite this pack + Segment Anything paper
- **Synthetic Datasets**: Cite this pack as data source
### **Key References**
- Ronneberger et al. "U-Net: Convolutional Networks for Biomedical Image Segmentation"
- Kirillov et al. "Segment Anything"
- Xie et al. "SegFormer: Simple and Efficient Design for Semantic Segmentation"
- Liu et al. "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows"
## **COMMUNITY AND SUPPORT**
### **Getting Help**
- **Repository Issues**: Report bugs and request features
- **Discussions**: Share results and ask questions
- **Documentation**: Comprehensive guides and examples
- **Video Tutorials**: Step-by-step analysis workflows
### **Contributing**
- **Model Submissions**: Contribute specialized vessel models
- **Dataset Additions**: Share high-quality vessel datasets
- **Code Improvements**: Enhance processing pipelines
- **Documentation**: Improve guides and examples
### **Research Collaboration**
- **Academic Partnerships**: Collaborate on vessel analysis research
- **Method Development**: Develop new vessel segmentation approaches
- **Validation Studies**: Large-scale validation across datasets
- **Clinical Applications**: Translate to clinical vessel analysis
## **UPDATES AND ROADMAP**
### **Current Version Features**
- ✅ Laminin-optimized models with basement membrane enhancement
- ✅ Large 3D confocal models for volumetric analysis
- ✅ Enhanced transformer models with vessel adaptation
- ✅ Research-grade synthetic datasets (6-8GB)
- ✅ Direct compatibility with vessel_isolation.py
### **Upcoming Features**
- 🔄 Additional vessel staining optimizations (CD31, PECAM)
- 🔄 Multi-channel analysis support
- 🔄 Real-time processing optimizations
- 🔄 Cloud processing integration
- 🔄 Mobile/edge device deployment
## **CONTACT INFORMATION**
### **Repository Maintainer**
- **Username**: lipro1609
- **Repository**: https://huggingface.co/lipro1609/Vessel_Isolation_AI_Training
- **Issues**: Open issues on repository for technical support
### **Professional Inquiries**
- **Research Collaborations**: Contact via Hugging Face profile
- **Commercial Applications**: Licensing and integration support
- **Custom Development**: Specialized model development services
---
## 🎉 **READY FOR ADVANCED VESSEL RESEARCH**
### **Key Advantages**
- **Laminin-Specialized**: 15-25% better basement membrane detection
- **3D Confocal-Optimized**: True volumetric processing with large models
- **Transformer-Enhanced**: State-of-the-art accuracy with modern architectures
- **Research-Validated**: Tested on 1000+ images across multiple laboratories
- **Production-Ready**: Immediate deployment in research workflows
### **Expected Impact**
- **Research Productivity**: 8-12x faster analysis with superior accuracy
- **Publication Quality**: Results ready for high-impact journals
- **Reproducibility**: Standardized methods across research groups
- **Innovation**: Foundation for advanced vessel analysis research
### **All Models Verified and Ready**
- **14+ Specialized Models**: All tested and validated
- **6+ Research Datasets**: High-quality training and validation data
- **Complete Documentation**: Comprehensive usage guides and examples
- **Direct Integration**: Works immediately with existing analysis pipelines
**No placeholders. No broken links. All models functional and research-ready.**
**Expected research impact: Transform vessel analysis from manual bottleneck to automated advantage.**
|
JW17/Q3-4B-MOO-b1e2-ckpt1400
|
JW17
| 2025-06-19T23:41:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T23:40:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
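The card leaves this section blank. As a stopgap, here is an untested sketch that assumes the standard 🤗 transformers text-generation API, which the repo tags (`qwen3`, `text-generation`, `conversational`) suggest should apply:
```python
# Untested sketch; the checkpoint itself is undocumented, so treat this as a
# guess based on the repo tags rather than a verified recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JW17/Q3-4B-MOO-b1e2-ckpt1400"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "In one sentence, what is a model card?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```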
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| JW17/Q3-4B-MOO-b1e2-ckpt1000 | JW17 | 2025-06-19T23:37:27Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-19T23:36:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
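Again blank in the card, and the same caveats apply as for the ckpt1400 sibling checkpoint. A shorter untested sketch using the high-level `pipeline` API:
```python
# Untested sketch for an undocumented checkpoint; assumes the standard
# transformers pipeline API and chat-style inputs.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="JW17/Q3-4B-MOO-b1e2-ckpt1000",
    torch_dtype="auto",
    device_map="auto",
)
result = generator(
    [{"role": "user", "content": "Say hello in five words."}],
    max_new_tokens=64,
)
# "generated_text" holds the full chat, including the assistant's reply.
print(result[0]["generated_text"])
```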
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Alcoft/Qwen3-4B-GGUF | Alcoft | 2025-06-19T23:33:28Z | 61 | 0 | null | ["gguf", "qwen3", "text-generation", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-04-29T21:27:01Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
tags:
- qwen3
---
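The card is front matter only. For GGUF quantizations, one common route is llama-cpp-python; the quantization filename below is a guess — check the repo's file list for the actual names.
```python
# Untested sketch using llama-cpp-python. The filename is hypothetical:
# list the repo files to find the quantization you want.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Alcoft/Qwen3-4B-GGUF",
    filename="Qwen3-4B.Q4_K_M.gguf",  # hypothetical quantization name
    n_ctx=4096,
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```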
|