Dataset schema (one record per model; the `card` column holds the full model card markdown and is printed after each metadata row):

| Column | Type | Range / Values |
| ---- | ---- | ---- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-27 12:28:27 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 distinct classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-27 12:28:17 |
| card | string | length 11 to 1.01M |

Metadata rows below are pipe-separated in the order: modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt.
| prashanth145x/my-awesome-model | prashanth145x | 2025-06-18T09:48:51Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | feature-extraction | 2025-06-18T09:48:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
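This card ships no official example, so the following is a minimal sketch of the generic 🤗 transformers feature-extraction pattern suggested by the repository's `bert` and `feature-extraction` tags; everything beyond the repository name is boilerplate, not author-provided code.
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical starter code: the card provides no usage details,
# so this only shows the standard BERT feature-extraction pattern.
model_id = "prashanth145x/my-awesome-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one embedding per input token
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```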
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Mahadi249/deepseek-factchecker-merged | Mahadi249 | 2025-06-18T09:45:17Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us"] | text-generation | 2025-06-18T09:42:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
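This card ships no official example. The tags (`llama`, `text-generation`, `conversational`, `8-bit`, `bitsandbytes`) suggest an 8-bit quantized Llama chat model, so the following minimal sketch uses the generic causal-LM workflow; the prompt and generation settings are placeholders, not author-provided code.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mahadi249/deepseek-factchecker-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 8-bit/bitsandbytes tags suggest a quantization config stored with the
# checkpoint, which from_pretrained applies automatically (bitsandbytes required).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The conversational tag suggests a chat template is available.
messages = [{"role": "user", "content": "Fact-check: the Great Wall of China is visible from space."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```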
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| shahidrai/All_FrameNet_LLaMa | shahidrai | 2025-06-18T09:40:20Z | 29 | 0 | peft | ["peft", "safetensors", "unsloth", "arxiv:1910.09700", "region:us"] | null | 2025-06-09T14:41:40Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
library_name: peft
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
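This card ships no official example. Given the `peft` library tag and the `unsloth/meta-llama-3.1-8b-instruct-bnb-4bit` base model declared in the metadata, a minimal sketch for loading the adapter might look as follows; the prompt is a placeholder, and nothing here is author-provided code.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "shahidrai/All_FrameNet_LLaMa"
# AutoPeftModelForCausalLM reads the adapter config, loads the 4-bit base
# model it references, and attaches the adapter weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/meta-llama-3.1-8b-instruct-bnb-4bit")

# Placeholder prompt: the card does not document the expected input format.
inputs = tokenizer("In FrameNet, the Commerce_buy frame includes", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```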
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
| jayalakshmikopuri/deepfake-audio-detector | jayalakshmikopuri | 2025-06-18T09:39:14Z | 64 | 0 | null | ["safetensors", "wav2vec2", "region:us"] | null | 2025-05-08T07:22:43Z |
# Deepfake Audio Detector
This model is fine-tuned for detecting deepfake audio using a dataset of misclassified samples.
## Training Details
- **Base Model**: Heem2/Deepfake-Audio-Detection
- **Dataset**: 100 real and 100 fake samples
- **Epochs**: 5
- **Learning Rate**: 0.0003
- **Batch Size**: 8
- **Dropout Rate**: 0.3
## Metrics
- **Final Eval Accuracy**: N/A
- **Final Eval Precision**: N/A
- **Final Eval Recall**: N/A
- **Final Eval F1**: N/A
- **Final Eval Loss**: N/A
## Training Curves


## Usage
```python
import librosa
import torch
from transformers import AutoModelForAudioClassification, AutoFeatureExtractor

# Load the fine-tuned model and its matching feature extractor
model = AutoModelForAudioClassification.from_pretrained("jayalakshmikopuri/deepfake-audio-detector")
feature_extractor = AutoFeatureExtractor.from_pretrained("jayalakshmikopuri/deepfake-audio-detector")

# Example inference: load the audio as 16 kHz mono, as wav2vec2 expects
audio, sr = librosa.load("path_to_audio.wav", sr=16000, mono=True)
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt", padding=True, truncation=True, max_length=480000)

with torch.no_grad():
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax().item()
print("Predicted class:", "Fake" if predicted_class == 1 else "Real")
```
## Notes
- The model is fine-tuned on a balanced dataset of misclassified real and fake audio samples.
- Metrics are saved in `metrics.json` for detailed analysis.
| BKM1804/mieumieu | BKM1804 | 2025-06-18T09:36:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "sft", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-18T09:35:57Z |
---
library_name: transformers
tags:
- trl
- sft
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
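This card ships no official example. The tags (`llama`, `text-generation`, `trl`, `sft`, `dpo`) suggest a standard causal LM fine-tuned with TRL, so a minimal sketch using the high-level `pipeline` API might look like this; the prompt is a placeholder, not author-provided code.
```python
from transformers import pipeline

# Generic text-generation boilerplate; the card documents no chat format.
generator = pipeline("text-generation", model="BKM1804/mieumieu", device_map="auto")
result = generator("Once upon a time", max_new_tokens=64)
print(result[0]["generated_text"])
```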
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Talking-Babies/orpo_opt_base_tokenizer | Talking-Babies | 2025-06-18T09:35:09Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-18T09:35:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
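This card ships no official example, and its tags name neither an architecture nor a pipeline. The repository name suggests it may primarily provide a tokenizer (an assumption, not something the card states), so the only safe sketch is to load and inspect it:
```python
from transformers import AutoTokenizer

# Assumption: judging by the repository name, a tokenizer is available;
# the card does not document the model architecture or intended task.
tokenizer = AutoTokenizer.from_pretrained("Talking-Babies/orpo_opt_base_tokenizer")
print(tokenizer("Talking babies say hi!").input_ids)
```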
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| sgonzalezygil/sd-finetuning-dreambooth-v7 | sgonzalezygil | 2025-06-18T09:31:43Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2025-06-18T09:30:09Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
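This card ships no official example. The `diffusers:StableDiffusionPipeline` tag indicates the repository loads as a standard Stable Diffusion pipeline, so a minimal sketch follows; the prompt is a placeholder, since the card does not document the DreamBooth trigger token.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v7", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Placeholder prompt: replace with the instance prompt used for DreamBooth training.
image = pipe("a photo of a garden at sunset").images[0]
image.save("output.png")
```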
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Jade-Software/Jade-ModernBert-FT | Jade-Software | 2025-06-18T09:27:18Z | 628 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "modernbert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:10217", "loss:CachedMultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2101.06983", "base_model:nomic-ai/modernbert-embed-base", "base_model:finetune:nomic-ai/modernbert-embed-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-04-08T02:47:07Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10217
- loss:CachedMultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: What integer value is assigned to the global constant SDS_SecondaryType
in JADE?
sentences:
- '#### drawWidth
**Type:** - Integer
**Availability:** - Read or write at run time only
The **drawWidth **property of the [Window](../window_class/window_class.htm) class
contains the line width for output from graphics methods on a form or control.
Set the **drawWidth** property to a value in the range **1** through **32,767**. This
value represents the width of the line in pixels. The default value is **1**
pixel wide.
Increase the value of the **drawWidth** property to increase the width of the
line.'
- '#### JadeDynamicObjectTypes Category Global Constants
The global constants listed in the following table define symbolic names for the
values of the [JadeDynamicObject](../../encyclosys1/jadedynamicobject_class/jadedynamicobject_class.htm#jadedynamicobjectclass)
class [type](../../encyclosys1/jadedynamicobject_class/type.htm#typejadedynamicobject)
attribute of dynamic objects returned from [JadeDatabaseAdmin](../../encyclosys1/jadedatabaseadmin_class/jadedatabaseadmin_class.htm#jadedatabaseadminclass)
class query methods.
| Global Constant | Integer Value |
| ---- | ---- |
| SDS_PrimaryType | 1 |
| SDS_SecondaryProxyType | 2 |
| SDS_SecondaryType | 3 |
| SDS_TransactionType | 4 |'
- "#### sortOrder\n\n**Type:** - Integer\n\n**Availability:** - Read or write at\
\ run time only\n\nThe **sortOrder **property of the [JadeTableColumn](jadetablecolumn_class.htm)\
\ class contains the precedence of the column referenced by this object when sorting,\
\ in the range **1** through **3**, or it contains zero (**0**) to remove sorting\
\ on the current column.\n\nFor a description of this property, see the [Table](../../encyclowin/control_class/table_class.htm#tableclass)\
\ control [sortColumn](../../encyclowin/window__form__and_control_properties/sortcolumn.htm#sortcolumnwin)\
\ property. See also the [JadeTableColumn](jadetablecolumn_class.htm) class [sortAsc](sortasc.htm),\
\ [sortCased](sortcased.htm), and [sortType](sorttype.htm) properties, which are\
\ dependent on the column already being recorded as a sort column by the **sortOrder**\
\ property.\n\nThe code fragment in the following example shows the use of the\
\ **sortOrder** property.\n\n```\ntable1.accessColumn(2).sortOrder := 1; //\
\ first column in sort\r\ntable1.accessColumn(4).sortOrder := 2; // second column\r\
\ntable1.accessColumn(5).sortOrder := 3; // third column\n```"
- source_sentence: How are values in the ByteArray referenced?
sentences:
- "#### findAllElementsByNameNS\n\n```\nfindAllElementsByNameNS(namespaceURI: String;\r\
\n localName: String;\r\n elements:\
\ JadeXMLElementArray input);\n```\nThe **findAllElementsByNameNS **method\
\ of the [JadeXMLElement](jadexmlelement_class.htm) class fills the elements array\
\ with all descendant elements that have the values specified in the **namespaceURI**\
\ and **localName** parameters, respectively.\n\nAs the search uses the collection\
\ sequence, the elements may not be in the document sequence.\n\nIf you want to\
\ match all namespaces or local names, specify an asterisk character (**'*'**)\
\ in the **namespaceURI** or **localName** parameter. Note, however, that if\
\ you specify **\"*\"** in the **localName** parameter, the access method uses\
\ the document sequence to locate the requested elements rather than the collection\
\ sequence that optimizes performance."
- '## ByteArray Class
The **ByteArray** class is an ordered collection of [Byte](../../encycloprim/byte_type/byte_type.htm#byte)
values in which the values are referenced by their position in the collection.
Byte arrays inherit the methods defined in the [Array](../array_class/array_class.htm)
class.
The bracket (**[ ]**) subscript operators enable you to assign values to and receive
values from a **Byte** array.
For details about the methods defined in the **ByteArray** class, see "[ByteArray
Methods](bytearray_methods.htm)", in the following section.
[Array](../array_class/array_class.htm)
(None)'
- '#### Exposing Properties for a Selected Class
To expose all properties for a selected class
- Right-click on the class row in the **Classes** table and then select the **Expose
Properties for Selected Class** command from the popup menu that is displayed.
This command does _not_ automatically add methods or constants to the C# exposure,
even if the **Show Methods** or **Show Constants** option is checked. (For details,
see "[Toggling the Display of Methods](toggling_the_display_of_methods.htm)" or
"[Toggling the Display Constants](toggling_the_display_of_constants.htm)", later
in this chapter.)
All properties in that class are then exposed for inclusion in the C# exposure;
that is, each property check box in the **Features** pane is checked, indicating
that the properties for that class will be generated in the C# class library.
You can tailor the property selection by unchecking the check box of any property
that you want to exclude from the exposure.'
- source_sentence: How can you resolve opening database error 14544 in single user
mode?
sentences:
- "#### Changing Lock Type\n\nA type upgrade can queue and potentially time out,\
\ causing a [JoobObjectLockedException](joobobjectlockedexception.htm) to be thrown,\
\ if the requested type is not compatible with existing locks. For example, this\
\ could happen when upgrading a shared lock to exclusive.\n\nLock type downgrades\
\ will never be queued, as the strength is being lowered so there will be no lock\
\ incompatibilities.\n\nWhen a Jade session is in transaction state, requests\
\ to downgrade lock type are ignored. The lock maintains its current type. However,\
\ lock types can be upgraded regardless of transaction state.\n\nWhen a lock type\
\ is being upgraded from shared to update, the object is unlocked before the update\
\ lock is requested. This happens even if the Jade session is in transaction state,\
\ and is the only situation where an object is unlocked while in transaction state.\
\ The reason for doing this is to prevent potential deadlocks, as discussed in\
\ more detail under \"[Avoiding Deadlock Exceptions](avoiding_deadlock_exceptions.htm)\"\
, later in this chapter.\n\nThe following code fragment gives examples of upgrading\
\ and downgrading lock types.\n\n```\nTimeSpan timeOut = TimeSpan.FromSeconds(10);\r\
\ncontext.Lock(obj1, LockType.Shared, LockDuration.Transaction, timeOut);\r\n\
context.Lock(obj1, LockType.Reserve, LockDuration.Transaction, timeOut);\r\n \
\ // The lock is now upgraded from shared to reserve.\r\
\ncontext.Lock(coll, LockType.Exclusive, LockDuration.Transaction, timeOut);\r\
\n \r\nusing (System.Data.IDbTransaction tran = context.BeginTransaction())\r\
\n{\r\n context.Lock(obj1, LockType.Exclusive, LockDuration.Transaction,\r\n\
\ timeOut); // The lock type is upgraded to exclusive, as\r\
\n // locks can be upgraded (but not downgraded)\r\
\n // when in transaction state.\r\n foreach\
\ (C1 obj2 in coll)\r\n {\r\n // The exclusive lock on coll is not downgraded\
\ by the implicit shared\r\n // lock associated with foreach, because transaction\
\ state is in effect.\r\n }\r\n context.Lock(obj1, LockType.Shared, LockDuration.Transaction,\
\ timeOut);\r\n // The lock type is not downgraded, but remains\
\ as exclusive.\r\n tran.Commit(); // All transaction duration locks are\
\ released.\r\n}\n```"
- '### 1411 - Attempt to add unknown system file
Cause
This error occurs if the system schema maintenance function attempts to add a
new unknown system file.
Action
This is an internal error. If your Jade licenses include support, contact your
local Jade support center or Jade Support.'
- '### 14544 - A concurrent process has already opened the same database
Cause
This error occurs if you attempt to open a database that is already open in single
user (exclusive) mode.
Action
Determine in which mode the database should be opened; that is, single user or
multiuser mode.'
- source_sentence: What is the cause of the 3323 DbCrypt error?
sentences:
- '### 3323 - DbCrypt memory allocation failure
Cause
This error occurs if a memory allocation error occurs in the use of the database
encryption module.
Action
If your Jade licenses include support, contact your local Jade support center
or Jade Support.'
- '### 3028 - Database file is in use by another process
Cause
This error occurs if you attempt to open a database file that is already open
by another process.
Action
Refer to the Jade messages log file (**jommsg.log**) for information about the
file. Generally, another program is accessing the file or the database as a whole.'
- '### Where Do Jade Methods Execute?
Jade methods execute only in Jade nodes. A Jade node is the fundamental building
block of Jade''s distributed architecture. Each node contains the Jade Object
Manager (JOM), the Jade Interpreter, various caches, and one or more Jade processes.
The Jade thin client is _not_ a Jade node; Jade methods do not execute there,
although a great deal of effort has been expended to make it look as though they
do.
In most production systems, there is one database server node (**jadrap.exe**,
**jadrapb.exe**, or **jadserv.exe**), one or more application server nodes (**jadapp.exe**
or **jadappb.exe**), and one or more fat/standard client nodes (**jade.exe**)
for background processing, web services, or HTML forms.
When **jade.exe** is run in single user mode, there is one node only.'
- source_sentence: Which subclasses are associated with the JadeXMLCharacterData class?
sentences:
- '## JadeXMLCharacterData Class
The **JadeXMLCharacterData** class is the abstract superclass of character-based
nodes in an XML document tree; that is, the text, **CDATA**, and comment nodes.
For details about the property defined in the **JadeXMLCharacterData** class,
see "[JadeXMLCharacterData Property](jadexmlcharacterdata_property.htm)", in the
following section.
[JadeXMLNode](../jadexmlnode_class/jadexmlnode_class.htm)
[JadeXMLCDATA](../jadexmlcdata_class/jadexmlcdata_class.htm), [JadeXMLComment](../jadexmlcomment_class/jadexmlcomment_class.htm),
[JadeXMLText](../jadexmltext_class/jadexmltext_class.htm)'
- "### Minimizing the Working Set\n\nIn loops where there are multiple filters,\
\ apply the cheapest filters first and then the filters that reduce the working\
\ set the most. For example, consider the following code fragment, which finds\
\ sales of appliances in a specified city.\n\n```\nwhile iter.next(tran) do\r\n\
\ if tran.type = Type_Sale\r\n and tran.myBranch.myLocation.city = targetCity\r\
\n and tran.myProduct.isAppliance then\r\n <do something with tran>\r\
\n endif;\r\nendwhile;\n```\nIn this example, **tran.type** should be checked\
\ first, because it is the cheapest. The **tran** object must be fetched to evaluate\
\ all of the other conditions, so we may as well check the **type** attribute\
\ first. If we did the **isAppliance** check first, we would have to fetch all\
\ of the product objects for the transactions that were not sales. Regardless\
\ of how many transactions are sales and how many products are appliances, it\
\ will save time to check **tran.type** first.\n\nNow, assume that:\n\n- 80 percent\
\ of transactions are sales\n\n- 15 percent, on average, are likely to be in the\
\ target city\n\n- 90 percent of the products are appliances\n\nIt pays to check\
\ the city first, even though it means fetching the branch and location objects\
\ for the non-appliance products. There are very few non-appliance products, so\
\ the number of extra fetches is small. By contrast, checking for non-appliance\
\ products for all other cities would result in a large number of extra fetches.\n\
\nIt doesn't matter if the filters are conditions of an [if](../../devref/ch1languageref/if_instruction.htm#if)\
\ instruction, multiple [if](../../devref/ch1languageref/if_instruction.htm#if)\
\ instructions, or multiple conditions in the [where](../../devref/ch1languageref/where_clause_optimization.htm#whereoptimization)\
\ clause of a [while](../../devref/ch1languageref/while_instruction.htm#while)\
\ statement; the end result is the same.\n\nThis code fragment example is simple\
\ and concise, to convey the concept. In the real world, each successive filter\
\ may be in another method, another class, or even another schema. It may take\
\ a bit of investigation to find all of the filters involved in a single loop."
- '##### responseType
Use the **responseType** parameter of the [beginNotification](beginnotification.htm)
method to specify the frequency with which the subscribed event was notified.
The valid values for the **responseType** parameter, represented by global constants
in the [NotificationResponses](../../encycloprim/appaglobalconstants/notificationresponses_category.htm#notificationresponsescategory)
category, are listed in the following table.
| Global Constant | Integer Value | Sends a notificationโฆ |
| ---- | ---- | ---- |
| Response_Cancel | 1 | When the object receives a matching event and then cancels
the notification |
| Response_Continuous | 0 | Whenever the object receives a matching event |
| Response_Suspend | 2 | When the object receives a matching event and then suspends
notification until the user refreshes the local copy of the object |'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# Jade-modernbert-ft
`known internally on the leaderboard as jade-ft-14-bert`
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the jade_embeddings_train_25.04.04 dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- jade_embeddings_train_25.04.04
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("lwoollett/jade-ft-14-bert-static")
# Run inference
sentences = [
'Which subclasses are associated with the JadeXMLCharacterData class?',
'## JadeXMLCharacterData Class\n\nThe **JadeXMLCharacterData** class is the abstract superclass of character-based nodes in an XML document tree; that is, the text, **CDATA**, and comment nodes.\n\nFor details about the property defined in the **JadeXMLCharacterData** class, see "[JadeXMLCharacterData Property](jadexmlcharacterdata_property.htm)", in the following section.\n\n[JadeXMLNode](../jadexmlnode_class/jadexmlnode_class.htm)\n\n[JadeXMLCDATA](../jadexmlcdata_class/jadexmlcdata_class.htm), [JadeXMLComment](../jadexmlcomment_class/jadexmlcomment_class.htm), [JadeXMLText](../jadexmltext_class/jadexmltext_class.htm)',
"### Minimizing the Working Set\n\nIn loops where there are multiple filters, apply the cheapest filters first and then the filters that reduce the working set the most. For example, consider the following code fragment, which finds sales of appliances in a specified city.\n\n```\nwhile iter.next(tran) do\r\n if tran.type = Type_Sale\r\n and tran.myBranch.myLocation.city = targetCity\r\n and tran.myProduct.isAppliance then\r\n <do something with tran>\r\n endif;\r\nendwhile;\n```\nIn this example, **tran.type** should be checked first, because it is the cheapest. The **tran** object must be fetched to evaluate all of the other conditions, so we may as well check the **type** attribute first. If we did the **isAppliance** check first, we would have to fetch all of the product objects for the transactions that were not sales. Regardless of how many transactions are sales and how many products are appliances, it will save time to check **tran.type** first.\n\nNow, assume that:\n\n- 80 percent of transactions are sales\n\n- 15 percent, on average, are likely to be in the target city\n\n- 90 percent of the products are appliances\n\nIt pays to check the city first, even though it means fetching the branch and location objects for the nonโappliance products. There are very few nonโappliance products, so the number of extra fetches is small. By contrast, checking for nonโappliance products for all other cities would result in a large number of extra fetches.\n\nIt doesn't matter if the filters are conditions of an [if](../../devref/ch1languageref/if_instruction.htm#if) instruction, multiple [if](../../devref/ch1languageref/if_instruction.htm#if) instructions, or multiple conditions in the [where](../../devref/ch1languageref/where_clause_optimization.htm#whereoptimization) clause of a [while](../../devref/ch1languageref/while_instruction.htm#while) statement; the end result is the same.\n\nThis code fragment example is simple and concise, to convey the concept. In the real world, each successive filter may be in another method, another class, or even another schema. It may take a bit of investigation to find all of the filters involved in a single loop.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### jade_embeddings_train_25.04.04
* Dataset: jade_embeddings_train_25.04.04
* Size: 10,217 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.17 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 363.15 tokens</li><li>max: 6303 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the format for defining a Byte constant in JADE?</code> | <code>##### Constant Definition Tips<br><br>When defining a constant value, the value of a constant can be a simple literal value or an expression constructed using literals and other constants. For details about literal types, see "[Literals](../../devref/ch1languageref/literals.htm#literalsexpr)", in Chapter - 1 of the _Developer's Reference_.<br><br>You can define the value for a constant whose primitive type is not a specific literal format by using a typecast of a [String](../../encycloprim/string_type/string_type.htm#string) literal or in the case of a [Byte](../../encycloprim/byte_type/byte_type.htm#byte), a small [Integer](../../encycloprim/integer_type/integer_type.htm#integer) literal, as shown in the examples in the following table.<br><br>| Primitive Type | Value Expression |<br>| ---- | ---- |<br>| Date | "31/12/2007".Date |<br>| Time | "14:34:23.123".Time |<br>| TimeStamp | "31/12/2007, 14:34:23:123".TimeStamp |<br>| Point | "1,7".Point |<br>| Byte | 0.Byte |<br><br>For details about typecasting, see "[Type Casts](../...</code> |
| <code>How does the replaceFrom__ method handle case sensitivity?</code> | <code>#### replaceFrom__<br><br>```<br>replaceFrom__(target: String;<br>              replacement: String;<br>              startIndex: Integer;<br>              bIgnoreCase: Boolean): String;<br>```<br>The **replaceFrom__** method of the [String](string_type.htm) primitive type replaces only the first occurrence of the substring specified in the **target** parameter with the substring specified in the **replacement** parameter, starting from the specified **startIndex** parameter.<br><br>Case-sensitivity is ignored if you set the value of the **bIgnoreCase** parameter to **true**. Set this parameter to **false** if you want the substring replacement to be case-sensitive.<br><br>This method raises exception 1413 (_Index used in string operation is out of bounds_) if the value specified in the **startIndex** parameter is less than **1** or it is greater than the length of the original string. In addition, it returns the original receiver String if the value specified in the **target** parameter has a length of zero (**...</code> |
| <code>What does the global constant Ex_Continue do?</code> | <code>## Exceptions Category<br><br>The global constants for exceptions are listed in the following table.<br><br>| Global Constant | Integer Value | Description |<br>| ---- | ---- | ---- |<br>| Ex_Abort_Action | 1 | Causes the currently executing methods to be aborted. |<br>| Ex_Continue | 0 | Resumes execution from the next expression after the expression that caused the exception. |<br>| Ex_Pass_Back | -1 | Passes control back to the prior local exception handler for this type of exception, or if a local handler is not found, a global exception handler for this type of exception. |<br>| Ex_Resume_Method_Epilog | 3 | Passes control back to the method that armed the exception handler. Execution resumes at the start of the method epilog or at the end of the method if there is no epilog section. Execution resumes at the next statement in the epilog if the exception was raised while executing the epilog. If there were no messages on the execution stack when the handler was armed, the effect of theEx_Resume_Method_Epilog...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32
}
```
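For illustration, here is a minimal sketch of instantiating this loss with the parameters above, assuming the sentence-transformers v3+ training API; the base checkpoint and the example pair are placeholders, not the actual training setup:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Hypothetical (anchor, positive) pairs in the same two-column format as above
train_dataset = Dataset.from_dict({
    "anchor": ["What is the format for defining a Byte constant in JADE?"],
    "positive": ["A Byte constant can be defined with a small Integer literal, for example 0.Byte."],
})

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # placeholder base checkpoint
loss = losses.CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=32)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```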
### Evaluation Dataset
#### jade_embeddings_train_25.04.04
* Dataset: jade_embeddings_train_25.04.04
* Size: 1,136 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.07 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 365.93 tokens</li><li>max: 3397 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the keyword list constant value for JADE_SYSTEMVARS?</code> | <code>### changeKeywords<br><br>```<br>changeKeywords(action: Integer;<br>               keywordList: Integer;<br>               keywords: String);<br>```<br>The **changeKeywords** method of the [JadeTextEdit](../control_class/jadetextedit_class.htm) class modifies one or more of the current keyword lists. The keyword lists are used by the current language lexical analyzer to classify the tokens found in the text. For the Jade language, this includes keywords, class names, constant names, and so on.<br><br>The value of the **action** parameter can be one of the **JadeTextEdit** class constants listed in the following table.<br><br>| Class Constant | Value | Description |<br>| ---- | ---- | ---- |<br>| KEYWORDS_ADD | 2 | Adds the keywords specified in thekeywordsparameter to the list specified in thekeywordListparameter. |<br>| KEYWORDS_DELETE | 3 | Deletes the words specified in thekeywordsparameter from the list specified in thekeywordListparameter. |<br>| KEYWORDS_SET | 1 | Clears the list specified in thekeywordListparam...</code> |
| <code>What should you click to abandon the deletion of a report in JADE?</code> | <code>#### Delete Report Command<br><br>Use the **Delete Report** command from the File menu to delete a report.<br><br>To delete a report<br><br>1. Select the **Delete Report** command from the File menu. The Delete Report dialog, shown in the following image, is then displayed.<br><br>[](../images/reportdelete_feb2022.png)<br><br>2. Select the report that you want to delete from the **Report** list box or enter the name in the **Report name** text box.<br><br>3. Filter the list of report names in the **Reports** list box in one or both of the following ways.<br><br> - To display only those reports that contain that text in their report description, enter text in the **Text contains** text box. For example, only those reports that mention **Pay** in their description are displayed if you enter **Pay**, providing a refined selection list.<br><br> - To display only those reports modified during a specified period, select a last modified period from the **Last modified** list box. For example, only those reports that were modified in...</code> |
| <code>What types of objects can be set for the userGroupObject in JadeMultiWorkerTcpTransport?</code> | <code>#### userGroupObject<br><br>**Type:** - Object<br><br>The **userGroupObject** property of the [JadeMultiWorkerTcpTransport](jademultiworkertcptransport_class.htm) class contains a reference to an object that you can associate with the transport group between event callbacks.<br><br>You must set the value of this property to a shared transient or a persistent object, as it must be visible to other workers.<br><br>The default value is **null**.<br><br>To prevent an object leak, it is your responsibility to delete this object, if required, in your implementation of the [closedEvent](../jademultiworkertcptransportif_interface/closedevent.htm) method in the receiver class.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 18
- `per_device_eval_batch_size`: 18
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
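Expressed as code, these non-default values correspond roughly to the following trainer arguments (a sketch; `output_dir` is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=18,
    per_device_eval_batch_size=18,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```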
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 18
- `per_device_eval_batch_size`: 18
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.1761 | 100 | 0.0851 | 0.0243 |
| 0.3521 | 200 | 0.0262 | 0.0211 |
| 0.5282 | 300 | 0.0275 | 0.0217 |
| 0.7042 | 400 | 0.0216 | 0.0256 |
| 0.8803 | 500 | 0.0283 | 0.0241 |
| 1.0563 | 600 | 0.0226 | 0.0195 |
| 1.2324 | 700 | 0.0113 | 0.0170 |
| 1.4085 | 800 | 0.0114 | 0.0204 |
| 1.5845 | 900 | 0.0165 | 0.0182 |
| 1.7606 | 1000 | 0.0129 | 0.0219 |
| 1.9366 | 1100 | 0.0126 | 0.0181 |
| 2.1127 | 1200 | 0.0069 | 0.0207 |
| 2.2887 | 1300 | 0.0045 | 0.0212 |
| 2.4648 | 1400 | 0.0046 | 0.0187 |
| 2.6408 | 1500 | 0.0056 | 0.0206 |
| 2.8169 | 1600 | 0.0084 | 0.0196 |
| 2.9930 | 1700 | 0.0050 | 0.0214 |
| 3.1690 | 1800 | 0.0056 | 0.0202 |
| 3.3451 | 1900 | 0.0088 | 0.0190 |
| 3.5211 | 2000 | 0.0026 | 0.0202 |
| 3.6972 | 2100 | 0.0064 | 0.0205 |
| 3.8732 | 2200 | 0.0060 | 0.0202 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.0.2
- Transformers: 4.51.0
- PyTorch: 2.8.0.dev20250319+cu128
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
cucucu666/pray-6.18
|
cucucu666
| 2025-06-18T09:25:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T06:48:58Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: labii face, Crayon Shin-chan style, pleading expression, both hands
together in a prayer pose, plain white background
widget:
- text: labii face, Crayon Shin-chan style, pleading expression, both hands together
in a prayer pose, plain white background
output:
url: image_0.png
- text: labii face, Crayon Shin-chan style, pleading expression, both hands together
in a prayer pose, plain white background
output:
url: image_1.png
- text: labii face, Crayon Shin-chan style, pleading expression, both hands together
in a prayer pose, plain white background
output:
url: image_2.png
- text: labii face, Crayon Shin-chan style, pleading expression, both hands together
in a prayer pose, plain white background
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/pray-6.18
<Gallery />
## Model description
These are cucucu666/pray-6.18 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `labii face, Crayon Shin-chan style, pleading expression, both hands together in a prayer pose, plain white background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/cucucu666/pray-6.18/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import FluxFillPipeline
from diffusers.utils import load_image
import torch

# FLUX.1-Fill-dev (this LoRA's base model) is an inpainting pipeline: it takes an input image and a mask in addition to the prompt.
pipeline = FluxFillPipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/pray-6.18', weight_name='pytorch_lora_weights.safetensors')
init_image = load_image("input.png")  # placeholder input image
mask_image = load_image("mask.png")   # placeholder mask (white = region to repaint)
image = pipeline('labii face, Crayon Shin-chan style, pleading expression, both hands together in a prayer pose, plain white background', image=init_image, mask_image=mask_image).images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
See the diffusers example under "Use it with the 🧨 diffusers library" above.
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
dhadheechi/ppo-CartPole-v1
|
dhadheechi
| 2025-06-18T09:23:20Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T09:23:01Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 198.70 +/- 42.86
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'dhadheechi/ppo-CartPole-v1',
 'batch_size': 512,      # num_envs * num_steps = 4 * 128
 'minibatch_size': 128}  # batch_size / num_minibatches = 512 / 4
```
|
myduy/sft-qwen3-1.7B-base-s920
|
myduy
| 2025-06-18T09:14:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-18T09:12:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hugging-F-a-ce/q-FrozenLake-v1-4x4-yesSlippery
|
hugging-F-a-ce
| 2025-06-18T09:10:50Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T09:01:55Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-yesSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.14 +/- 0.35
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the course environments use Gymnasium

# load_from_hub is the helper defined in the Deep RL Course notebook (a minimal equivalent is sketched below)
model = load_from_hub(repo_id="hugging-F-a-ce/q-FrozenLake-v1-4x4-yesSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
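`load_from_hub` is not part of an installable package; a minimal equivalent, assuming the Q-table was pushed as a pickled dictionary under the filename above, might look like:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the model dictionary (Q-table, env_id, ...) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```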
|
BienThuy/sketch-dog-lora
|
BienThuy
| 2025-06-18T09:07:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-18T07:35:10Z |
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - BienThuy/sketch-dog-lora
These are LoRA adaption weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were fine-tuned on the zoheb/sketch-scene dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
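While the snippet above remains a TODO, here is a minimal sketch assuming the standard diffusers LoRA-loading API (the prompt is illustrative):
```python
from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("BienThuy/sketch-dog-lora")
image = pipeline("a sketch scene drawing of a dog").images[0]  # illustrative prompt
image.save("sketch_dog.png")
```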
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
m1balcerak/energy_matching
|
m1balcerak
| 2025-06-18T09:05:44Z | 0 | 1 | null |
[
"unconditional-image-generation",
"dataset:uoft-cs/cifar10",
"license:mit",
"region:us"
] |
unconditional-image-generation
| 2025-06-17T13:08:00Z |
---
license: mit
datasets:
- uoft-cs/cifar10
pipeline_tag: unconditional-image-generation
---
|
humendra/chronos-t5-large-fine-tuned-run-28
|
humendra
| 2025-06-18T09:03:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-18T09:02:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Florisst/JustidDataSet1
|
Florisst
| 2025-06-18T09:01:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T09:00:53Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Florisst
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VikramSingh178/models-registery
|
VikramSingh178
| 2025-06-18T08:59:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-18T18:48:17Z |
---
license: apache-2.0
---
|
sgonzalezygil/sd-finetuning-dreambooth-v6-2400
|
sgonzalezygil
| 2025-06-18T08:49:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-18T08:48:09Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trhgquan/visobert-finetune-freezed-24
|
trhgquan
| 2025-06-18T08:46:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"vi",
"base_model:uitnlp/visobert",
"base_model:finetune:uitnlp/visobert",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T02:34:02Z |
---
license: gpl-3.0
language:
- vi
metrics:
- accuracy
- f1
base_model:
- uitnlp/visobert
pipeline_tag: text-classification
library_name: transformers
---
|
bananacha/eesscha
|
bananacha
| 2025-06-18T08:45:07Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-18T08:11:47Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# bananacha/eesscha
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bananacha/eesscha')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bananacha/eesscha')
model = AutoModel.from_pretrained('bananacha/eesscha')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bananacha/eesscha)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 1097 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
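Put together, the settings above correspond roughly to this sketch of the legacy `fit` API; the base checkpoint and example pair are placeholders, not the actual training data:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")  # placeholder base checkpoint
train_examples = [InputExample(texts=["an example query", "a matching passage"])]  # placeholder; supply at least batch_size distinct pairs
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```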
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
victordorian66/final_mistral_attack
|
victordorian66
| 2025-06-18T08:41:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"region:us"
] | null | 2025-06-18T08:41:07Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
SrijitSet99/sarvam-translate-Q4_K_M-GGUF
|
SrijitSet99
| 2025-06-18T08:23:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"translation",
"as",
"bn",
"brx",
"doi",
"gom",
"gu",
"en",
"hi",
"kn",
"ks",
"mai",
"ml",
"mni",
"mr",
"ne",
"or",
"pa",
"sa",
"sat",
"sd",
"ta",
"te",
"ur",
"base_model:sarvamai/sarvam-translate",
"base_model:finetune:sarvamai/sarvam-translate",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-06-18T08:23:39Z |
---
library_name: transformers
license: gpl-3.0
language:
- as
- bn
- brx
- doi
- gom
- gu
- en
- hi
- kn
- ks
- mai
- ml
- mni
- mr
- ne
- or
- pa
- sa
- sat
- sd
- ta
- te
- ur
base_model: sarvamai/sarvam-translate
base_model_relation: finetune
pipeline_tag: translation
tags:
- llama-cpp
- gguf-my-repo
---
# SrijitSet99/sarvam-translate-Q4_K_M-GGUF
This model was converted to GGUF format from [`sarvamai/sarvam-translate`](https://huggingface.co/sarvamai/sarvam-translate) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sarvamai/sarvam-translate) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo SrijitSet99/sarvam-translate-Q4_K_M-GGUF --hf-file sarvam-translate-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo SrijitSet99/sarvam-translate-Q4_K_M-GGUF --hf-file sarvam-translate-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo SrijitSet99/sarvam-translate-Q4_K_M-GGUF --hf-file sarvam-translate-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo SrijitSet99/sarvam-translate-Q4_K_M-GGUF --hf-file sarvam-translate-q4_k_m.gguf -c 2048
```
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb1-seed42-2025-06-18
|
morturr
| 2025-06-18T08:18:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T08:18:28Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb1-seed42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb1-seed42-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
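These map onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",            # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 16 * 4 = 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```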
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
hugging-F-a-ce/q-FrozenLake-v1-4x4-noSlippery
|
hugging-F-a-ce
| 2025-06-18T08:16:50Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T08:16:47Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # load_from_hub is the helper defined in the Deep RL Course notebook

model = load_from_hub(repo_id="hugging-F-a-ce/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
teknium/Llama-3.1-AlternateTokenizer
|
teknium
| 2025-06-18T08:15:15Z | 1,408 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-23T23:50:37Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3.1
extra_gated_prompt: >-
### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
Llama 3.1 Version Release Date: July 23, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 3.1 distributed by Meta at
https://llama.meta.com/doc/overview.
"Licensee" or "you" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity's behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 3.1" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and
other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Metaโs proprietary Llama 3.1 and
Documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or
Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta's intellectual
property or other rights owned by Meta embodied in the Llama Materials to use,
reproduce, distribute, copy, create derivative works of, and make
modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative
works thereof), or a product or service (including another AI model) that
contains any of them, you shall (A) provide a copy of this Agreement with any
such Llama Materials; and (B) prominently display "Built with Llama" on a
related website, user interface, blogpost, about page, or product
documentation. If you use the Llama Materials or any outputs or results of the
Llama Materials to create, train, fine tune, or otherwise improve an AI model,
which is distributed or made available, you shall also include "Llama" at the
beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a
Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute
the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 3.1 is licensed under the Llama 3.1 Community
License, Copyright © Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and
regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at
https://llama.meta.com/llama3_1/use-policy), which is hereby incorporated by
reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.1 version release date, the
monthly active users of the products or services made available by or for
Licensee, or Licenseeโs affiliates, is greater than 700 million monthly active
users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized
to exercise any of the rights under this Agreement unless or until Meta
otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN โAS ISโ
BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF
ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY
WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A
PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE
APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY
RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS
LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS
OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE
DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY
OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, neither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a
license to use โLlamaโ (the โMarkโ) solely as required to comply with the last
sentence of Section 1.b.i. You will comply with Metaโs brand guidelines
(currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill
arising out of your use of the Mark will inure to the benefit of Meta.
b. Subject to Metaโs ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be
the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the Llama
Materials or Llama 3.1 outputs or results, or any portion of any of the
foregoing, constitutes infringement of intellectual property or other rights
owned or licensable by you, then any licenses granted to you under this
Agreement shall terminate as of the date such litigation or claim is filed or
instituted. You will indemnify and hold harmless Meta from and against any
claim by any third party arising out of or related to your use or distribution
of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will
continue in full force and effect until terminated in accordance with the
terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this
Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California shall
have exclusive jurisdiction of any dispute arising out of this Agreement.
### Llama 3.1 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features,
including Llama 3.1. If you access or use Llama 3.1, you agree to this
Acceptable Use Policy (โPolicyโ). The most recent copy of this policy can be
found at
[https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)
#### Prohibited Uses
We want everyone to use Llama 3.1 safely and responsibly. You agree you will
not use, or allow others to use, Llama 3.1 to:
1. Violate the law or others’ rights, including to:
    1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
        1. Violence or terrorism
        2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
        3. Human trafficking, exploitation, and sexual violence
        4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
        5. Sexual solicitation
        6. Any other criminal activity
    2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
    3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
    4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
    5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
    6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
    7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.1 related to the following:
    1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
    2. Guns and illegal weapons (including weapon development)
    3. Illegal drugs and regulated/controlled substances
    4. Operation of critical infrastructure, transportation technologies, or heavy machinery
    5. Self-harm or harm to others, including suicide, cutting, and eating disorders
    6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.1 related to the following:
    1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
    2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
    3. Generating, promoting, or further distributing spam
    4. Impersonating another individual without consent, authorization, or legal right
    5. Representing that the use of Llama 3.1 or outputs are human-generated
    6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI
system
Please report any violation of this Policy, software โbug,โ or other problems
that could lead to a violation of this Policy through one of the following
means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
library_name: transformers
---
## Model Information
The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Input modalities</strong>
</td>
<td><strong>Output modalities</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="3" >Llama 3.1 (text only)
</td>
<td rowspan="3" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
<td rowspan="3" >15T+
</td>
<td rowspan="3" >December 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
<tr>
<td>405B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
</table>
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** July 23, 2024.
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card (see the note below).
**<span style="text-decoration:underline;">Note</span>**: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages, provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.
## How to use
This repository contains two versions of Meta-Llama-3.1-8B, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3.1-8B"
pipeline = transformers.pipeline(
"text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
pipeline("Hey how are you doing today?")
```
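The same generation via the Auto classes mentioned above, as a minimal sketch (the dtype and device settings mirror the pipeline example; the 64-token cap is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B"

# Load tokenizer and model; bfloat16 and device_map="auto" mirror the
# pipeline example above.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hey how are you doing today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```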
### Use with `llama`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3.1-8B --include "original/*" --local-dir Meta-Llama-3.1-8B
```
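The equivalent from Python, as a sketch using `huggingface_hub` (the pattern filter mirrors the CLI's `--include` flag):
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Meta-Llama-3.1-8B",
    allow_patterns=["original/*"],  # mirrors --include "original/*"
    local_dir="Meta-Llama-3.1-8B",
)
```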
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's custom-built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training utilized a cumulative total of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) hardware, per the table below. Training time is the total GPU time required to train each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
<table>
<tr>
<td>
</td>
<td><strong>Training Time (GPU hours)</strong>
</td>
<td><strong>Training Power Consumption (W)</strong>
</td>
<td><strong>Training Location-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
<td><strong>Training Market-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3.1 8B
</td>
<td>1.46M
</td>
<td>700
</td>
<td>420
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 70B
</td>
<td>7.0M
</td>
<td>700
</td>
<td>2,040
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 405B
</td>
<td>30.84M
</td>
<td>700
</td>
<td>8,930
</td>
<td>0
</td>
</tr>
<tr>
<td>Total
</td>
   <td>39.3M
</td>
   <td>
</td>
<td>11,390
</td>
<td>0
</td>
</tr>
</table>
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmark scores
In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="7" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>66.7
</td>
<td>66.7
</td>
<td>79.5
</td>
<td>79.3
</td>
<td>85.2
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>36.2
</td>
<td>37.1
</td>
<td>55.0
</td>
<td>53.8
</td>
<td>61.6
</td>
</tr>
<tr>
<td>AGIEval English
</td>
<td>3-5
</td>
<td>average/acc_char
</td>
<td>47.1
</td>
<td>47.8
</td>
<td>63.0
</td>
<td>64.6
</td>
<td>71.6
</td>
</tr>
<tr>
<td>CommonSenseQA
</td>
<td>7
</td>
<td>acc_char
</td>
<td>72.6
</td>
<td>75.0
</td>
<td>83.8
</td>
<td>84.1
</td>
<td>85.8
</td>
</tr>
<tr>
<td>Winogrande
</td>
<td>5
</td>
<td>acc_char
</td>
<td>-
</td>
<td>60.5
</td>
<td>-
</td>
<td>83.3
</td>
<td>86.7
</td>
</tr>
<tr>
<td>BIG-Bench Hard (CoT)
</td>
<td>3
</td>
<td>average/em
</td>
<td>61.1
</td>
<td>64.2
</td>
<td>81.3
</td>
<td>81.6
</td>
<td>85.9
</td>
</tr>
<tr>
<td>ARC-Challenge
</td>
<td>25
</td>
<td>acc_char
</td>
<td>79.4
</td>
<td>79.7
</td>
<td>93.1
</td>
<td>92.9
</td>
<td>96.1
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki
</td>
<td>5
</td>
<td>em
</td>
<td>78.5
</td>
<td>77.6
</td>
<td>89.7
</td>
<td>89.8
</td>
<td>91.8
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD
</td>
<td>1
</td>
<td>em
</td>
<td>76.4
</td>
<td>77.0
</td>
<td>85.6
</td>
<td>81.8
</td>
<td>89.3
</td>
</tr>
<tr>
<td>QuAC (F1)
</td>
<td>1
</td>
<td>f1
</td>
<td>44.4
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>51.1
</td>
<td>53.6
</td>
</tr>
<tr>
<td>BoolQ
</td>
<td>0
</td>
<td>acc_char
</td>
<td>75.7
</td>
<td>75.0
</td>
<td>79.0
</td>
<td>79.4
</td>
<td>80.0
</td>
</tr>
<tr>
<td>DROP (F1)
</td>
<td>3
</td>
<td>f1
</td>
<td>58.4
</td>
<td>59.5
</td>
<td>79.7
</td>
<td>79.6
</td>
<td>84.8
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B Instruct</strong>
</td>
<td><strong>Llama 3.1 8B Instruct</strong>
</td>
<td><strong>Llama 3 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 405B Instruct</strong>
</td>
</tr>
<tr>
<td rowspan="4" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc
</td>
<td>68.5
</td>
<td>69.4
</td>
<td>82.0
</td>
<td>83.6
</td>
<td>87.3
</td>
</tr>
<tr>
<td>MMLU (CoT)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>65.3
</td>
<td>73.0
</td>
<td>80.9
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>micro_avg/acc_char
</td>
<td>45.5
</td>
<td>48.3
</td>
<td>63.4
</td>
<td>66.4
</td>
<td>73.3
</td>
</tr>
<tr>
<td>IFEval
</td>
<td>
</td>
<td>
</td>
<td>76.8
</td>
<td>80.4
</td>
<td>82.9
</td>
<td>87.5
</td>
<td>88.6
</td>
</tr>
<tr>
<td rowspan="2" >Reasoning
</td>
<td>ARC-C
</td>
<td>0
</td>
<td>acc
</td>
<td>82.4
</td>
<td>83.4
</td>
<td>94.4
</td>
<td>94.8
</td>
<td>96.9
</td>
</tr>
<tr>
<td>GPQA
</td>
<td>0
</td>
<td>em
</td>
<td>34.6
</td>
<td>30.4
</td>
<td>39.5
</td>
<td>41.7
</td>
<td>50.7
</td>
</tr>
<tr>
<td rowspan="4" >Code
</td>
<td>HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>60.4
</td>
<td>72.6
</td>
<td>81.7
</td>
<td>80.5
</td>
<td>89.0
</td>
</tr>
<tr>
<td>MBPP ++ base version
</td>
<td>0
</td>
<td>pass@1
</td>
<td>70.6
</td>
<td>72.8
</td>
<td>82.5
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MultiPL-E HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>50.8
</td>
<td>-
</td>
<td>65.5
</td>
<td>75.2
</td>
</tr>
<tr>
<td>MultiPL-E MBPP
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>52.4
</td>
<td>-
</td>
<td>62.0
</td>
<td>65.7
</td>
</tr>
<tr>
<td rowspan="2" >Math
</td>
<td>GSM-8K (CoT)
</td>
<td>8
</td>
<td>em_maj1@1
</td>
<td>80.6
</td>
<td>84.5
</td>
<td>93.0
</td>
<td>95.1
</td>
<td>96.8
</td>
</tr>
<tr>
<td>MATH (CoT)
</td>
<td>0
</td>
<td>final_em
</td>
<td>29.1
</td>
<td>51.9
</td>
<td>51.0
</td>
<td>68.0
</td>
<td>73.8
</td>
</tr>
<tr>
<td rowspan="4" >Tool Use
</td>
<td>API-Bank
</td>
<td>0
</td>
<td>acc
</td>
<td>48.3
</td>
<td>82.6
</td>
<td>85.1
</td>
<td>90.0
</td>
<td>92.0
</td>
</tr>
<tr>
<td>BFCL
</td>
<td>0
</td>
<td>acc
</td>
<td>60.3
</td>
<td>76.1
</td>
<td>83.0
</td>
<td>84.8
</td>
<td>88.5
</td>
</tr>
<tr>
<td>Gorilla Benchmark API Bench
</td>
<td>0
</td>
<td>acc
</td>
<td>1.7
</td>
<td>8.2
</td>
<td>14.7
</td>
<td>29.7
</td>
<td>35.3
</td>
</tr>
<tr>
<td>Nexus (0-shot)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>18.1
</td>
<td>38.5
</td>
<td>47.8
</td>
<td>56.7
</td>
<td>58.7
</td>
</tr>
<tr>
<td>Multilingual
</td>
<td>Multilingual MGSM (CoT)
</td>
<td>0
</td>
<td>em
</td>
<td>-
</td>
<td>68.9
</td>
<td>-
</td>
<td>86.9
</td>
<td>91.6
</td>
</tr>
</table>
#### Multilingual benchmarks
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Language</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="7" ><strong>General</strong>
</td>
<td rowspan="7" ><strong>MMLU (5-shot, macro_avg/acc)</strong>
</td>
<td>Portuguese
</td>
<td>62.12
</td>
<td>80.13
</td>
<td>84.95
</td>
</tr>
<tr>
<td>Spanish
</td>
<td>62.45
</td>
<td>80.05
</td>
<td>85.08
</td>
</tr>
<tr>
<td>Italian
</td>
<td>61.63
</td>
<td>80.4
</td>
<td>85.04
</td>
</tr>
<tr>
<td>German
</td>
<td>60.59
</td>
<td>79.27
</td>
<td>84.36
</td>
</tr>
<tr>
<td>French
</td>
<td>62.34
</td>
<td>79.82
</td>
<td>84.66
</td>
</tr>
<tr>
<td>Hindi
</td>
<td>50.88
</td>
<td>74.52
</td>
<td>80.31
</td>
</tr>
<tr>
<td>Thai
</td>
<td>50.32
</td>
<td>72.95
</td>
<td>78.21
</td>
</tr>
</table>
## Responsibility & Safety
As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
### Responsible deployment
Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta's Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.1 instruct
Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. Weโve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone**
Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.1 systems
**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating the safety and security risks inherent to the system and to any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
#### New capabilities
Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third-party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all generative AI use cases.
**Tool-use**: Just like in standard software development, developers are responsible for integrating the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, so they are aware of the safety and security limitations of this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.
**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages beyond those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in unsupported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common-use-case evaluations measure the safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks including long context, multilingual, tool calls, coding, and memorization.
**Red teaming**
For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets.
We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical and other risks
We specifically focused our efforts on mitigating the following critical risk areas:
**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
**2. Child Safety**
Child Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3. Cyber attack enablement**
Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
Our study of Llama-3.1-405Bโs social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Metaโs Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
Dragon168/llama3.2_3B_news_qlora
|
Dragon168
| 2025-06-18T08:08:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-18T07:29:39Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
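In the absence of author-provided instructions, here is a minimal loading sketch, assuming this repo hosts a PEFT (QLoRA) adapter for the base model named in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"

# Load the 4-bit base model, then attach the adapter weights from this repo.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Dragon168/llama3.2_3B_news_qlora")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```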
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
wengti0608/ppo-Pyramid
|
wengti0608
| 2025-06-18T08:07:09Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2025-06-18T08:05:40Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: wengti0608/ppo-Pyramid
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
ZuluVision/MoviiGen1.1
|
ZuluVision
| 2025-06-18T08:06:52Z | 1,868 | 88 |
diffusers
|
[
"diffusers",
"safetensors",
"t2v",
"video generation",
"text-to-video",
"en",
"base_model:Wan-AI/Wan2.1-T2V-14B",
"base_model:finetune:Wan-AI/Wan2.1-T2V-14B",
"license:apache-2.0",
"region:us"
] |
text-to-video
| 2025-05-12T12:48:39Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-to-video
tags:
- video generation
library_name: diffusers
base_model:
- Wan-AI/Wan2.1-T2V-14B
- Wan-AI/Wan2.1-T2V-14B-Diffusers
---
# MoviiGen 1.1
<span>[Hugging Face Model](https://huggingface.co/ZuluVision/MoviiGen1.1)</span> <span>[GitHub Stars](https://github.com/ZulutionAI/MoviiGen1.1/stargazers)</span>
[**MoviiGen 1.1: Towards Cinematic-Quality Video Generative Models**](https://github.com/ZulutionAI/MoviiGen1.1) <br>
In this repository, we present **MoviiGen 1.1**, a cutting-edge video generation model that excels in cinematic aesthetics and visual quality. It is fine-tuned from Wan2.1. Based on comprehensive evaluations by 11 professional filmmakers and AIGC creators, including industry experts, across 60 aesthetic dimensions, **MoviiGen 1.1** demonstrates superior performance in key cinematic aspects:
- ๐ **Superior Cinematic Aesthetics**: **MoviiGen 1.1** outperforms competitors in three critical dimensions: atmosphere creation, camera movement, and object detail preservation, making it the preferred choice for professional cinematic applications.
- ๐ **Visual Coherence & Quality**: MoviiGen 1.1 excels in clarity (+14.6%) and realism (+4.3%), making it ideal for high-fidelity scenarios such as real-scene conversion and portrait detail. Wan2.1 stands out in smoothness and overall visual harmony, better suited for tasks emphasizing composition, coherence, and artistic style. Both models have close overall scores, so users can select MoviiGen 1.1 for clarity and realism, or Wan2.1 for style and structural consistency.
- ๐ **Comprehensive Visual Capabilities**: **MoviiGen 1.1** provides stable performance in complex visual scenarios, ensuring consistent subject and scene representation while maintaining high-quality motion dynamics.
- ๐ **High-Quality Output**: The model generates videos with exceptional clarity and detail, supporting both 720P and 1080P resolutions while maintaining consistent visual quality throughout the sequence.
- ๐ **Professional-Grade Results**: **MoviiGen 1.1** is particularly well-suited for applications where cinematic quality, visual coherence, and aesthetic excellence are paramount, offering superior overall quality compared to other models.
This repository features our latest model, which establishes new benchmarks in cinematic video generation. Through extensive evaluation by industry professionals, it has demonstrated exceptional capabilities in creating high-quality visuals with natural motion dynamics and consistent aesthetic quality, making it an ideal choice for professional video production and creative applications.
## Video Demos
| <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/79_1920*1056_seed3732225395.mp4" type="video/mp4">Your browser does not support the video tag.</video> | <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/150_1920*1056_seed1674457713.mp4" type="video/mp4">Your browser does not support the video tag.</video> | <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/143_1920*1056_seed3114534932.mp4" type="video/mp4">Your browser does not support the video tag.</video> |
|--------|--------|--------|
| <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/94_1920*1056_seed3693446494.mp4" type="video/mp4">Your browser does not support the video tag.</video> | <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/23_1920*1056_seed3934691816.mp4" type="video/mp4">Your browser does not support the video tag.</video> | <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/13_1920*1056..mp4" type="video/mp4">Your browser does not support the video tag.</video> |
| <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/26_1920*1056..mp4" type="video/mp4">Your browser does not support the video tag.</video> | <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/39_1920*1056..mp4" type="video/mp4">Your browser does not support the video tag.</video> | <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/100_1920*1056_seed2949593166.mp4" type="video/mp4">Your browser does not support the video tag.</video> |
| <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/54_1920*1056..mp4" type="video/mp4">Your browser does not support the video tag.</video> | <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/107_1920*1056_seed525896597.mp4" type="video/mp4">Your browser does not support the video tag.</video> | <video width="320" controls><source src="https://huggingface.co/ZuluVision/MoviiGen1.1/resolve/main/assets/163_1920*1056_seed3696194034.mp4" type="video/mp4">Your browser does not support the video tag.</video> |
## ๐ฅ Latest News!!
* May 17, 2025: ๐ We've released the inference code and **training code** of MoviiGen1.1.
* May 12, 2025: ๐ We've released weights of MoviiGen1.1.
## ๐ก Quickstart
#### Installation
Clone the repo:
```
git clone https://github.com/ZulutionAI/MoviiGen1.1.git
cd MoviiGen1.1
```
1. Install dependencies:
```
# Ensure torch >= 2.4.0
pip install -r requirements.txt
```
2. Install [FastVideo](https://github.com/hao-ai-lab/FastVideo) according to their instructions.
#### Model Download
T2V-14B Model: ๐ค [Huggingface](https://huggingface.co/ZuluVision/MoviiGen1.1)
MoviiGen1.1 model supports both 720P and 1080P. For more cinematic quality, we recommend using 1080P and a 21:9 aspect ratio (1920*832).
Download models using huggingface-cli:
```
pip install "huggingface_hub[cli]"
huggingface-cli download ZuluVision/MoviiGen1.1 --local-dir ./MoviiGen1.1
```
## ๐ฅ Inference
Inference without prompt extend:
```bash
PYTHONPATH=. python scripts/inference/generate.py --ckpt_dir ./MoviiGen1.1 --prompt "Inside a smoky, atmospheric private eye office bathed in dramatic film noir lighting, sharp shadows from slatted blinds cut across a cluttered desk and worn surroundings, evoking the classic style by 1940s film. A world-weary detective is sitting behind the desk. He is smoking a cigarette, slowly bringing it to his lips, inhaling, and exhaling a plume of smoke that drifts in the harsh, directional light. The scene is rendered in stark black and white, creating a high-contrast, cinematic mood. The camera holds a static medium shot focused on the detective, emphasizing the gritty texture and oppressive atmosphere."
```
Inference with prompt extend:
We provide a prompt extend model for MoviiGen1.1: a Qwen2.5-7B-Instruct model fine-tuned on our internal data. The model is available on 🤗 [Huggingface](https://huggingface.co/ZuluVision/MoviiGen1.1_Prompt_Rewriter).
```bash
PYTHONPATH=. python scripts/inference/generate.py --ckpt_dir ./MoviiGen1.1 --prompt "A beautiful woman in a red dress is walking on the street." --use_prompt_extend --prompt_extend_model ZuluVision/MoviiGen1.1_Prompt_Rewriter
```
Prompt Tips:
- **Prompt Length**: The prompt length should be around 100~200.
- **Prompt Content**: The prompt should contain **scene description**, **main subject**, **events**, **aesthetics description** and **camera movement**.
- **Example**:
```
Scene Description: A smoky, atmospheric private eye office bathed in dramatic film noir lighting, sharp shadows from slatted blinds cut across a cluttered desk and worn surroundings, evoking the classic style by 1940s film.
Main Subject: A world-weary detective is sitting behind the desk.
Events: He is smoking a cigarette, slowly bringing it to his lips, inhaling, and exhaling a plume of smoke that drifts in the harsh, directional light.
Aesthetics Description: The scene is rendered in stark black and white, creating a high-contrast, cinematic mood.
Camera Movement: The camera holds a static medium shot focused on the detective, emphasizing the gritty texture and oppressive atmosphere.
Final Prompt:
A smoky, atmospheric private eye office bathed in dramatic film noir lighting, sharp shadows from slatted blinds cut across a cluttered desk and worn surroundings, evoking the classic style by 1940s film. A world-weary detective is sitting behind the desk. He is smoking a cigarette, slowly bringing it to his lips, inhaling, and exhaling a plume of smoke that drifts in the harsh, directional light. The scene is rendered in stark black and white, creating a high-contrast, cinematic mood. The camera holds a static medium shot focused on the detective, emphasizing the gritty texture and oppressive atmosphere.
```
## ๐ ๏ธ Training
### Training Framework
Our training framework is built on [FastVideo](https://github.com/hao-ai-lab/FastVideo), with a custom implementation of sequence parallelism to optimize memory usage and training efficiency. This sequence-parallel approach allows us to distribute the computational load across multiple GPUs, enabling efficient training of large-scale video generation models.
#### Key Features:
- **Sequence Parallel & Ring Attention**: Our custom implementation divides the temporal dimension across multiple GPUs, reducing per-device memory requirements while maintaining model quality (a conceptual sketch follows this list).
- **Efficient Data Loading**: Optimized data pipeline for handling high-resolution video frames (Latent Cache and Text Embedding Cache).
- **Multi Resolution Training Bucket**: Support for training at multiple resolutions.
- **Mixed Precision Training**: Support for BF16/FP16 training to accelerate computation.
- **Distributed Training**: Seamless multi-node, multi-GPU training support.
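The sketch below is a conceptual illustration of the sequence-parallel idea, not the repository's actual implementation: each rank keeps only its chunk of the temporal dimension, and the chunks are gathered back when a layer needs the full sequence. Function names and the choice of `dim=1` for the temporal axis are illustrative assumptions.
```python
# Conceptual sketch of sequence parallelism (illustrative, not the
# repository's actual implementation). Each rank holds one chunk of the
# temporal dimension, so per-device activation memory shrinks roughly
# by the world size. Assumes the sequence length divides evenly.
import torch
import torch.distributed as dist

def shard_sequence(x: torch.Tensor, dim: int = 1) -> torch.Tensor:
    """Keep only this rank's slice of the sequence/temporal dimension."""
    chunks = x.chunk(dist.get_world_size(), dim=dim)
    return chunks[dist.get_rank()].contiguous()

def gather_sequence(x: torch.Tensor, dim: int = 1) -> torch.Tensor:
    """Reassemble the full sequence from all ranks (e.g. before full attention)."""
    gathered = [torch.empty_like(x) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, x)
    return torch.cat(gathered, dim=dim)
```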
### Data Preprocessing
We cache the videos and corresponding text prompts as latents and text embeddings to optimize the training process. This preprocessing step significantly improves training efficiency by reducing computational overhead during the training phase. You need to provide a **merge.txt** file to specify the dataset path, and the dataset should be a JSON file like **training_data.json**. Finally, you will get **video_caption.json**, which contains the paths to the latents and text embeddings.
```bash
bash scripts/data_preprocess/preprocess.sh
```
Example Data Format:
**merge.txt**
```txt
relative_path_to_json_dir, training_data.json
```
**training_data.json**
```json
[
{
"cap": "your prompt",
"path": "path/to/your/video.mp4",
"resolution": {
"width": 3840,
"height": 2160
},
"fps": 23.976023976023978,
"duration": 1.4180833333333331
},
...
]
```
Output Json:
**video_caption.json**
```json
[
{
"latent_path": "path/to/your/latent.pt",
"prompt_embed_path": "path/to/your/prompt_embed.pt",
"length": 12
},
...
]
```
### Train
```bash
bash scripts/train/finetune.sh
```
**For multi-node training, you need to set the number of nodes and the number of processes per node manually.** We provide a sample script for multi-node training.
```bash
bash scripts/train/finetune_multi_node.sh
```
## Citation
If you find our work helpful, please cite us.
```
@misc{moviigen2025,
title = {MoviiGen 1.1: Towards Cinematic-Quality Video Generative Models},
author = {Yunhao Shui, Benjin Zhu, Xuekuan Wang, Feng Qiu, Yuqiu Huang, Haoyu Zheng, Haoyu Yin, Pengpeng Zhang, Jinru Han, Zhuo Zeng, Yaxin Ding, Helen Xi, Xiang Chen, Jinzhu Li, Liangxian Feng, Xincheng Yin, Bing Wu, Keqiang Sun},
year = {2025},
url = {https://github.com/ZulutionAI/MoviiGen1.1}
}
```
|
Tollring/stt-thai-v1
|
Tollring
| 2025-06-18T07:58:13Z | 0 | 0 | null |
[
"safetensors",
"whisper",
"license:mit",
"region:us"
] | null | 2025-06-18T07:56:37Z |
---
license: mit
---
| Step | Training Loss | Validation Loss | WER |
|-----:|--------------:|----------------:|----------:|
| 5 | 0.251500 | 0.206174 | 22.150538 |
| 10 | 0.158200 | 0.153803 | 19.462366 |
| 15 | 0.096600 | 0.115016 | 19.086022 |
| 20 | 0.065700 | 0.106624 | 19.139785 |

Final eval metrics: eval_loss 0.115016, eval_wer 19.086022, eval_runtime 112.80 s (1.87 samples/s, 0.239 steps/s), epoch 3.0.
Model trained from `v-0-3-medium`.
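A minimal usage sketch, assuming the repo hosts a standard Whisper-style checkpoint (the audio file name is a placeholder):
```python
from transformers import pipeline

# Assumes a standard Whisper checkpoint layout in this repo.
asr = pipeline("automatic-speech-recognition", model="Tollring/stt-thai-v1")
print(asr("sample_thai_audio.wav")["text"])
```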
|
danijeun/gemma-2-2B-it-thinking-function_calling-V0
|
danijeun
| 2025-06-18T07:56:05Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T10:29:14Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="danijeun/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18
|
morturr
| 2025-06-18T07:55:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T07:55:15Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
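Because this repo is a PEFT (LoRA) adapter for Llama-2-7b-hf (per the metadata above), loading it would look roughly like the sketch below; this is inferred from the tags rather than documented by the author:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights to the base model

inputs = tokenizer("Tell me a dad joke about headlines.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```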
|
AbderrahmanSkiredj1/GemMaroc-27b-it-GGUF
|
AbderrahmanSkiredj1
| 2025-06-18T07:55:27Z | 251 | 0 |
transformers
|
[
"transformers",
"gguf",
"Moroccan",
"Darija",
"GemMaroc",
"GGUF",
"conversational",
"text-generation",
"ary",
"en",
"ar",
"dataset:GemMaroc/TULU-3-50k-darija-english",
"arxiv:2505.17082",
"base_model:AbderrahmanSkiredj1/GemMaroc-27b-it",
"base_model:quantized:AbderrahmanSkiredj1/GemMaroc-27b-it",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-22T17:47:29Z |
---
base_model: AbderrahmanSkiredj1/GemMaroc-27b-it
language:
- ary
- en
- ar
library_name: transformers
quantized_by: mradermacher
datasets:
- GemMaroc/TULU-3-50k-darija-english
tags:
- Moroccan
- Darija
- GemMaroc
- GGUF
- conversational
pipeline_tag: text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AbderrahmanSkiredj1/GemMaroc-27b-it
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
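As one concrete starting point (an illustrative sketch, not an author-provided example), the Q4_K_M file from the table below can be run with the `llama-cpp-python` bindings:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Download GemMaroc-27b-it.Q4_K_M.gguf from this repo first; the local path is a placeholder.
llm = Llama(model_path="GemMaroc-27b-it.Q4_K_M.gguf", n_ctx=2048)
out = llm("Translate to Moroccan Darija: Hello, how are you?", max_tokens=128)
print(out["choices"][0]["text"])
```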
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q2_K.gguf) | Q2_K | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q5_K_S.gguf) | Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q5_K_M.gguf) | Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
---
# GemMaroc-27B
Unlocking **Moroccan Darija** proficiency in a state-of-the-art large language model, trained with a *minimal-data, green-AI* recipe that preserves Gemma-27B's strong reasoning abilities while adding fluent Darija generation.
---
## Model at a glance
| | Details |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| **Model ID** | `AbderrahmanSkiredj1/GemMaroc-27b-it` |
| **Base model** | [`google/gemma-3-27b`](https://huggingface.co/google/gemma-3-27b) |
| **Architecture** | Decoder-only Transformer (Gemma 3) |
| **Parameters** | 27 billion |
| **Context length** | 2 048 tokens |
| **Training regime** | Supervised fine-tuning (LoRA → merged) on 50 K high-quality Darija/English instructions (TULU-50K slice) |
| **Compute budget** | 48 GPU·h (8 × H100-80GB × 6 h) – ≈ 26 kWh / 10 kg CO₂e |
| **License** | Apache 2.0 |
---
## Why another Darija model?
* **Inclusive AI** – over 36 million speakers of Moroccan Arabic remain underserved by open LLMs.
* **Quality-over-quantity** – a carefully curated 50 K instruction set surfaces Darija competence without sacrificing cross-lingual reasoning.
* **Green AI** – GemMaroc achieves Atlas-Chat-level Darija scores using less than 2 % of the energy.
---
## Benchmark summary
| Model | Darija MMLU | Darija HellaSwag | GSM8K @5 | HellaSwag (EN) |
| ---------------- | ----------- | ---------------- | ---------- | -------------- |
| Atlas-Chat-27B | **61.9 %** | 48.4 % | 82.0 % | 77.8 % |
| **GemMaroc-27B** | 61.6 % | **60.5 %** | **84.2 %** | **79.3 %** |
<sub>Zero-shot accuracy; full table in the paper.</sub>
---
## Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "AbderrahmanSkiredj1/GemMaroc-27b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto"
)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
temperature=0.7,
repetition_penalty=1.2,
no_repeat_ngram_size=3,
)
messages = [
    {"role": "user", "content": "ุดูู ูู ูุธุฑูุฉ โbutterfly effectโุ ูุณูุฑูุง ุจุฏุงุฑุฌุฉ ููููุท ูุซุงู ุจุณูุท."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt)[0]["generated_text"][len(prompt):])
```
### Chat template (Gemma 3 format)
The tokenizer provides a baked-in Jinja template that starts with a **begin-of-sequence** token (`<bos>`), then alternates user/model turns, each wrapped by `<start_of_turn>` … `<end_of_turn>` markers. When you set `add_generation_prompt=True` it ends after the opening model tag so the model can continue:
```
<bos><start_of_turn>user
{user message}<end_of_turn>
<start_of_turn>model
```
The assistant will keep generating tokens until it decides to emit `<end_of_turn>`.
```python
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
```
No manual token juggling required – the call above handles BOS, turn delimiters, and newline placement automatically.
---
Pre-quantised checkpoints will be published under the same repo tags (`gemmaroc-27b-awq-int4`, `gemmaroc-27b-gguf-q4_k_m`).
---
## Training recipe (one-paragraph recap)
1. **Data** – Translate a 44 K reasoning slice of TULU 50K into Darija, keeping 20 % English for cross-lingual robustness.
2. **LoRA SFT** – Rank 16, α = 32, 3 epochs, bf16, context 2 048.
3. **Merge & push** – Merge LoRA into base weights (`peft.merge_and_unload`), convert to safetensors, upload (sketched below).
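A minimal sketch of step 3, assuming a local LoRA checkpoint directory (both paths below are hypothetical):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-27b", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "./lora-checkpoint")  # hypothetical adapter path
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("./GemMaroc-27b-it", safe_serialization=True)  # writes safetensors shards
```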
---
## Limitations & ethical considerations
* Sentiment and abstractive summarisation still trail state-of-the-art.
* Tokeniser is unchanged; rare Darija spellings may fragment.
* Model may inherit societal biases present in pre-training data.
* No RLHF / RLAIF safety alignment yet – apply a moderation layer in production.
---
## Citation
If you use GemMaroc in your work, please cite:
```bibtex
@misc{skiredj2025gemmarocunlockingdarijaproficiency,
title={GemMaroc: Unlocking Darija Proficiency in LLMs with Minimal Data},
author={Abderrahman Skiredj and Ferdaous Azhari and Houdaifa Atou and Nouamane Tazi and Ismail Berrada},
year={2025},
eprint={2505.17082},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.17082},
}
```
<!-- end -->
|
chrishickey/twitter-roberta-base-sentiment-latest
|
chrishickey
| 2025-06-18T07:53:57Z | 0 | 0 | null |
[
"pytorch",
"tf",
"safetensors",
"roberta",
"en",
"dataset:tweet_eval",
"arxiv:2202.03829",
"region:us"
] | null | 2025-06-18T07:22:02Z |
---
language: en
widget:
- text: Covid cases are increasing fast!
datasets:
- tweet_eval
---
# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
This sentiment analysis model has been integrated into [TweetNLP](https://github.com/cardiffnlp/tweetnlp). You can access the demo [here](https://tweetnlp.org).
## Example Pipeline
```python
from transformers import pipeline
model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Covid cases are increasing fast!")
```
```
[{'label': 'Negative', 'score': 0.7236}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
#model.save_pretrained(MODEL)
text = "Covid cases are increasing fast!"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Covid cases are increasing fast!"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) Negative 0.7236
2) Neutral 0.2287
3) Positive 0.0477
```
### References
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title = "{T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media",
author = "Camacho-collados, Jose and
Rezaee, Kiamehr and
Riahi, Talayeh and
Ushio, Asahi and
Loureiro, Daniel and
Antypas, Dimosthenis and
Boisson, Joanne and
Espinosa Anke, Luis and
Liu, Fangyu and
Mart{\'\i}nez C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2022",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-demos.5",
pages = "38--49"
}
```
```
@inproceedings{loureiro-etal-2022-timelms,
title = "{T}ime{LM}s: Diachronic Language Models from {T}witter",
author = "Loureiro, Daniel and
Barbieri, Francesco and
Neves, Leonardo and
Espinosa Anke, Luis and
Camacho-collados, Jose",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-demo.25",
doi = "10.18653/v1/2022.acl-demo.25",
pages = "251--260"
}
```
|
srinivaspokuri/model1
|
srinivaspokuri
| 2025-06-18T07:52:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-18T07:50:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yullius/finetuned-llama-3.1
|
yullius
| 2025-06-18T07:44:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T10:01:52Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bhavya777/NANONET_CORRECT_V1
|
bhavya777
| 2025-06-18T07:43:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:nanonets/Nanonets-OCR-s",
"base_model:finetune:nanonets/Nanonets-OCR-s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-18T07:42:35Z |
---
base_model: nanonets/Nanonets-OCR-s
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** bhavya777
- **License:** apache-2.0
- **Finetuned from model :** nanonets/Nanonets-OCR-s
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DevQuasar/Tesslate.UIGEN-T3-4B-Preview-MAX-GGUF
|
DevQuasar
| 2025-06-18T07:40:44Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Tesslate/UIGEN-T3-4B-Preview-MAX",
"base_model:quantized:Tesslate/UIGEN-T3-4B-Preview-MAX",
"region:us"
] |
text-generation
| 2025-06-18T07:18:39Z |
---
base_model:
- Tesslate/UIGEN-T3-4B-Preview-MAX
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Tesslate/UIGEN-T3-4B-Preview-MAX](https://huggingface.co/Tesslate/UIGEN-T3-4B-Preview-MAX)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
vuitton/21v1scrip_32
|
vuitton
| 2025-06-18T07:39:07Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-16T15:35:00Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mob2711/qwen2.5-7b-qlora-cot-ht-3000
|
mob2711
| 2025-06-18T07:35:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T07:35:02Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mob2711
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RuslanUruzbayev/konstituciya
|
RuslanUruzbayev
| 2025-06-18T07:34:46Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T07:07:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wizard-chair/flower-doggie
|
wizard-chair
| 2025-06-18T07:31:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-18T07:31:00Z |
---
license: creativeml-openrail-m
---
|
HsuHuggingFace/llama3.2_3B_news_qlora
|
HsuHuggingFace
| 2025-06-18T07:28:34Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T07:28:34Z |
---
license: apache-2.0
---
|
bananacha/klue-roberta-base-klue-sts
|
bananacha
| 2025-06-18T07:26:51Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-18T07:26:33Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# bananacha/klue-roberta-base-klue-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bananacha/klue-roberta-base-klue-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bananacha/klue-roberta-base-klue-sts')
model = AutoModel.from_pretrained('bananacha/klue-roberta-base-klue-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bananacha/klue-roberta-base-klue-sts)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 657 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
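Put together, the values above correspond to roughly the following `fit()` call (a reconstruction from the listed parameters, assuming the pre-3.0 `sentence-transformers` fit API; the training pairs are illustrative stand-ins for the actual KLUE-STS data):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Illustrative pairs only; the real run used a DataLoader of length 657 with batch size 16.
train_examples = [
    InputExample(texts=["오늘 날씨가 좋다", "오늘은 날씨가 맑다"], label=0.8),
    InputExample(texts=["고양이가 잔다", "주식 시장이 올랐다"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("klue/roberta-base")  # assumed base checkpoint, inferred from the repo name
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    evaluation_steps=1000,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```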
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
abakirci/admbkrc-turkish-tokenizer
|
abakirci
| 2025-06-18T07:26:39Z | 0 | 0 |
transformers
|
[
"transformers",
"turkish",
"tokenizer",
"nlp",
"tr",
"dataset:umarigan/tinystories_tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T05:03:15Z |
---
language:
- tr
license: apache-2.0
library_name: transformers
tags:
- turkish
- tokenizer
- nlp
datasets:
- umarigan/tinystories_tr
---
# Turkish BPE Tokenizer
## Model Description
This is a Byte-Pair Encoding (BPE) tokenizer trained specifically for Turkish language. It's designed to be used in Turkish language model pretraining and NLP tasks.
*Key Features:*
- Trained on a diverse Turkish text corpus
- 50,000 vocabulary size
- min_frequency = 2
- Special handling for Turkish characters (ğ, ü, ş, ı, ö, ç)
- Added special tokens: `[EOS]`, `[SEP]`, `[UNK]`, `[MASK]`, `[PAD]`
## Intended Uses & Limitations
### How to Use
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("abakirci/admbkrc-turkish-tokenizer")
text = "Bu bir Türkçe örnek cümledir."
encoded = tokenizer(text, return_tensors="pt")
print(encoded["input_ids"])
```
|
hannell/finbertMY-sentiment
|
hannell
| 2025-06-18T07:26:31Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-11T17:48:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
agnedil/Mistral-7B-openassistant-guanaco
|
agnedil
| 2025-06-18T07:22:28Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-03-05T05:48:35Z |
Model [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) fine-tuned on the [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset using the following [Colab notebook](https://colab.research.google.com/drive/1q7GpzXDlRrvmpCIFWcZg-WLtKcrzFdGn?usp=drive_link).
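No usage example is provided; assuming the repo holds merged full-model weights (suggested but not confirmed by the `safetensors` tag), loading could look like this sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "agnedil/Mistral-7B-openassistant-guanaco"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

# The "### Human: ... ### Assistant:" format follows the openassistant-guanaco dataset convention.
inputs = tokenizer("### Human: What is a guanaco?### Assistant:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```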
|
Steven122333/llama3.2_3B_news_merged
|
Steven122333
| 2025-06-18T07:22:18Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T07:22:18Z |
---
license: apache-2.0
---
|
CatchKuo/llama3.2_3B_news_merged
|
CatchKuo
| 2025-06-18T07:20:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T07:20:22Z |
---
license: apache-2.0
---
|
kopekope/dld4_131_unfinished
|
kopekope
| 2025-06-18T07:09:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T07:04:16Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: flux_gear_lora
---
# Juliayinvert
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `flux_gear_lora` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kopekope/dld4_131_unfinished', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
souravchaman13/company-name-normalizer
|
souravchaman13
| 2025-06-18T07:06:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-18T07:06:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hw33/meta-llama-Llama-3.2-3B-Instruct
|
hw33
| 2025-06-18T07:05:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T08:47:04Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
model_name: meta-llama-Llama-3.2-3B-Instruct
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Llama-3.2-3B-Instruct
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hw33/meta-llama-Llama-3.2-3B-Instruct", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.53.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tartuNLP/Llammas
|
tartuNLP
| 2025-06-18T07:02:55Z | 1,452 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"et",
"en",
"arxiv:2404.04042",
"base_model:tartuNLP/Llammas-base",
"base_model:finetune:tartuNLP/Llammas-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-29T16:13:44Z |
---
language:
- et
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational
base_model:
- tartuNLP/Llammas-base
---
#### *Update on 26.05.2025*
*For those interested in building NLP applications for Estonian: Llammas 🐑 is no longer the best open-weight model available for it. For a detailed comparison of open LLMs on Estonian, see our recent [paper](https://aclanthology.org/2025.nodalida-1.37/).*
*For an up-to-date leaderboard of both proprietary and open models, please visit [baromeeter.tartunlp.ai](https://baromeeter.tartunlp.ai) (and consider contributing if you speak Estonian ☺️)*
# LLammas 🐑
Llama-2-7B instruction-tuned for Estonian in two stages:
1. Continued pre-training: 5B tokens of CulturaX with 75% of documents in Estonian and 25% in English (see [Llammas-base](https://huggingface.co/tartuNLP/Llammas-base)),
2. Instruction-tuning: Alpaca-cleaned, Alpaca-est, OASST1 top-1 English conversations, CoT and FLAN-V2 following open-instruct (both 10,000), WMT18 English-Estonian translation development data (as documents), general MTee validation English-Estonian held-out data.
[Alpaca-est](https://github.com/TartuNLP/alpaca-est) is an instruction dataset generated for Estonian with *gpt-3.5-turbo-0613*, following Alpaca. More details in our [paper](https://arxiv.org/abs/2404.04042).
Additional resources:
* Paper: [https://aclanthology.org/2024.findings-naacl.210/](https://aclanthology.org/2024.findings-naacl.210/)
* Code: [github.com/TartuNLP/llammas](https://github.com/TartuNLP/llammas)
* Base model: [tartuNLP/Llammas-base](https://huggingface.co/tartuNLP/Llammas-base)
* 4-bit quantized model in GGUF: [AlbertUnn/LlammasGGUF](https://huggingface.co/AlbertUnn/LlammasGGUF)
* Alpaca-est dataset: [github.com/TartuNLP/alpaca-est](https://github.com/TartuNLP/alpaca-est)
### Using the model
Using the model in a text-generation pipeline:
```
from transformers import pipeline
import torch
pipe = pipeline("text-generation", model="tartuNLP/Llammas", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "user", "content": "Tere!"},
{"role": "assistant", "content": "Tere! Kas saaksin teid kuidagi aidata?"},
{"role": "user", "content": "Kuidas alustada kirja kirjutamist?"}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.6, top_k=50, top_p=0.9)
print(outputs[0]["generated_text"][len(prompt):])
```
Using the model in a conversational pipeline (works with transformers==4.36.2, issues with output in newer versions):
```
from transformers import pipeline, Conversation
import torch
pipe = pipeline("conversational", model="tartuNLP/Llammas", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "user", "content": "Tere!"},
{"role": "assistant", "content": "Tere! Kas saaksin teid kuidagi aidata?"},
{"role": "user", "content": "Kuidas alustada kirja kirjutamist?"}
]
conversation = Conversation(messages)
conversation = pipe(conversation)
```
Conversational format:
```
<|user|>
Tere!
<|assistant|>
Tere! Kas saaksin teid kuidagi aidata?</s>
<|user|>
Kuidas alustada kirja kirjutamist?
<|assistant|>
Kirja kirjutamiseks alustage tervitusega, näiteks "Tere!" või "Tere hommikust!". Seejärel tutvustage ennast ja mainige, kellega kirjutate. Kirjeldage oma mõtteid või küsimusi, mida soovite arutada. Lõpetage kiri viisakalt, näiteks "Tänan teid tähelepanu eest!" või "Parimate soovidega!"</s>
```
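For reference, a minimal sketch of how this format could be assembled by hand. The tokenizer's chat template used in the examples above remains the authoritative source; `format_chat` is an illustrative helper, not part of the release:
```python
def format_chat(messages, add_generation_prompt=True):
    # Builds the <|user|>/<|assistant|> prompt shown above.
    parts = []
    for message in messages:
        turn = f"<|{message['role']}|>\n{message['content']}"
        if message["role"] == "assistant":
            turn += "</s>"  # assistant turns are closed with the EOS token
        parts.append(turn)
    if add_generation_prompt:
        parts.append("<|assistant|>")  # leave the final assistant turn open
    return "\n".join(parts) + "\n"
```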
### Citation
```
@inproceedings{kuulmets-etal-2024-teaching,
title = "Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer",
author = "Kuulmets, Hele-Andra and
Purason, Taido and
Luhtaru, Agnes and
Fishel, Mark",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.210",
doi = "10.18653/v1/2024.findings-naacl.210",
pages = "3309--3325",
abstract = "This paper explores cost-efficient methods to adapt pretrained Large Language Models (LLMs) to new lower-resource languages, with a specific focus on Estonian. Leveraging the Llama 2 model, we investigate the impact of combining cross-lingual instruction-tuning with additional monolingual pretraining. Our results demonstrate that even a relatively small amount of additional monolingual pretraining followed by cross-lingual instruction-tuning significantly enhances results on Estonian. Furthermore, we showcase cross-lingual knowledge transfer from high-quality English instructions to Estonian, resulting in improvements in commonsense reasoning and multi-turn conversation capabilities. Our best model, named Llammas, represents the first open-source instruction-following LLM for Estonian. Additionally, we publish Alpaca-est, the first general task instruction dataset for Estonia. These contributions mark the initial progress in the direction of developing open-source LLMs for Estonian.",
}
```
|
phospho-app/Kai-13-ACT_BBOX-example_dataset_v2-a0d21
|
phospho-app
| 2025-06-18T07:00:21Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-18T06:57:45Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Training process failed with exit code 1:
return forward_call(*args, **kwargs)
File "/lerobot/lerobot/common/policies/act/modeling_act.py", line 425, in forward
robot_state_embed = self.vae_encoder_robot_state_input_proj(batch["observation.state"])
File "/opt/conda/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 125, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (100x12 and 14x512)
```
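The shape mismatch suggests that the policy's robot-state projection was configured for a 14-dimensional state while the dataset batches carry a 12-dimensional `observation.state`. A minimal illustration of the same failure, with dimensions taken from the traceback:
```python
import torch

proj = torch.nn.Linear(14, 512)   # projection expecting 14 state features
state = torch.randn(100, 12)      # batch of 100 states with only 12 features
proj(state)
# RuntimeError: mat1 and mat2 shapes cannot be multiplied (100x12 and 14x512)
```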
## Training parameters:
- **Dataset**: [phospho-app/example_dataset_v2_bboxes](https://huggingface.co/datasets/phospho-app/example_dataset_v2_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
๐ **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
๐ค **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
soundTeam/Q3-8B-Kintsugi_mlx
|
soundTeam
| 2025-06-18T06:53:22Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"text-generation",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:finetune:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-18T06:52:55Z |
---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: mlx
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- mlx
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
pipeline_tag: text-generation
---
# soundTeam/Q3-8B-Kintsugi_mlx
This model [soundTeam/Q3-8B-Kintsugi_mlx](https://huggingface.co/soundTeam/Q3-8B-Kintsugi_mlx) was
converted to MLX format from [allura-org/Q3-8B-Kintsugi](https://huggingface.co/allura-org/Q3-8B-Kintsugi)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("soundTeam/Q3-8B-Kintsugi_mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
tomaarsen/splade-cocondenser-msmarco-margin-mse-minilm-small-best
|
tomaarsen
| 2025-06-18T06:47:32Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:90000",
"loss:SpladeLoss",
"loss:SparseMarginMSELoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:tomaarsen/msmarco-margin-mse-minilm",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:2010.02666",
"arxiv:2004.05665",
"base_model:Luyu/co-condenser-marco",
"base_model:finetune:Luyu/co-condenser-marco",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-18T06:47:16Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:90000
- loss:SpladeLoss
- loss:SparseMarginMSELoss
- loss:FlopsLoss
base_model: Luyu/co-condenser-marco
widget:
- text: weather in ljubljana, slovenia fahrenheit
- text: which type of shark is the largest?
- text: "Plan to have the farrier reset your horseรข\x80\x99s shoes approximately every\
\ six weeks. The shoes should be shaped to the horseรข\x80\x99s feet for a custom\
\ fit."
- text: what oscars was kudo nominated for
- text: "Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens\
\ slowly. But its speed of progression varies, depending on a person's genetic\
\ makeup, environmental factors, age at diagnosis and other medical conditions.\
\ Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing\
\ quickly รข\x80\x94 or who experiences a sudden decline รข\x80\x94 should see his\
\ or her doctor."
datasets:
- tomaarsen/msmarco-margin-mse-minilm
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 83.33408497430837
energy_consumed: 0.21439067711076448
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.605
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CoCondenser trained on MS MARCO
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.4
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.68
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.74
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.86
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4
name: Dot Precision@1
- type: dot_precision@3
value: 0.22666666666666668
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08599999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.4
name: Dot Recall@1
- type: dot_recall@3
value: 0.68
name: Dot Recall@3
- type: dot_recall@5
value: 0.74
name: Dot Recall@5
- type: dot_recall@10
value: 0.86
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6263943020229624
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5522460317460317
name: Dot Mrr@10
- type: dot_map@100
value: 0.5579041710648882
name: Dot Map@100
- type: query_active_dims
value: 20.040000915527344
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9993434243851803
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 157.50286865234375
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9948396937077404
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.44
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.64
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.68
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.44
name: Dot Precision@1
- type: dot_precision@3
value: 0.38666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.332
name: Dot Precision@5
- type: dot_precision@10
value: 0.266
name: Dot Precision@10
- type: dot_recall@1
value: 0.04179623435529923
name: Dot Recall@1
- type: dot_recall@3
value: 0.0965734090873724
name: Dot Recall@3
- type: dot_recall@5
value: 0.11384541136955774
name: Dot Recall@5
- type: dot_recall@10
value: 0.1415418248302823
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3379998226394541
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5286904761904762
name: Dot Mrr@10
- type: dot_map@100
value: 0.1497615336338629
name: Dot Map@100
- type: query_active_dims
value: 18.559999465942383
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9993919140467223
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 336.476806640625
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9889759253443213
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.48
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.76
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.84
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.48
name: Dot Precision@1
- type: dot_precision@3
value: 0.26666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.176
name: Dot Precision@5
- type: dot_precision@10
value: 0.092
name: Dot Precision@10
- type: dot_recall@1
value: 0.46
name: Dot Recall@1
- type: dot_recall@3
value: 0.72
name: Dot Recall@3
- type: dot_recall@5
value: 0.8
name: Dot Recall@5
- type: dot_recall@10
value: 0.83
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6694458425290901
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6305
name: Dot Mrr@10
- type: dot_map@100
value: 0.6132918446052669
name: Dot Map@100
- type: query_active_dims
value: 22.540000915527344
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9992615162533409
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 199.87527465820312
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9934514358607496
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.44
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.6933333333333334
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7399999999999999
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.8066666666666666
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.44
name: Dot Precision@1
- type: dot_precision@3
value: 0.2933333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.21866666666666668
name: Dot Precision@5
- type: dot_precision@10
value: 0.148
name: Dot Precision@10
- type: dot_recall@1
value: 0.30059874478509974
name: Dot Recall@1
- type: dot_recall@3
value: 0.4988578030291242
name: Dot Recall@3
- type: dot_recall@5
value: 0.5512818037898526
name: Dot Recall@5
- type: dot_recall@10
value: 0.6105139416100941
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5446133223971689
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.570478835978836
name: Dot Mrr@10
- type: dot_map@100
value: 0.44031918310133933
name: Dot Map@100
- type: query_active_dims
value: 20.380000432332356
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9993322848950811
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 214.4328896115101
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.992974481042805
name: Corpus Sparsity Ratio
---
# CoCondenser trained on MS MARCO
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [Luyu/co-condenser-marco](https://huggingface.co/Luyu/co-condenser-marco) on the [tomaarsen/msmarco-margin-mse-minilm](https://huggingface.co/datasets/tomaarsen/msmarco-margin-mse-minilm) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [Luyu/co-condenser-marco](https://huggingface.co/Luyu/co-condenser-marco) <!-- at revision e0cef0ab2410aae0f0994366ddefb5649a266709 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [tomaarsen/msmarco-margin-mse-minilm](https://huggingface.co/datasets/tomaarsen/msmarco-margin-mse-minilm)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the ๐ค Hub
model = SparseEncoder("tomaarsen/splade-cocondenser-msmarco-margin-mse-minilm-small-best")
# Run inference
queries = [
"what causes aging fast",
]
documents = [
    'UV-A light, specifically, is what mainly causes tanning, skin aging, and cataracts, UV-B causes sunburn, skin aging and skin cancer, and UV-C is the strongest, and therefore most effective at killing microorganisms. Again – single words and multiple bullets.',
    "Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens slowly. But its speed of progression varies, depending on a person's genetic makeup, environmental factors, age at diagnosis and other medical conditions. Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing quickly — or who experiences a sudden decline — should see his or her doctor.",
"Bell's palsy and Extreme tiredness and Extreme fatigue (2 causes) Bell's palsy and Extreme tiredness and Hepatitis (2 causes) Bell's palsy and Extreme tiredness and Liver pain (2 causes) Bell's palsy and Extreme tiredness and Lymph node swelling in children (2 causes)",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[11.3157, 11.5490, 6.2470]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ |
|:----------------------|:------------|:-------------|:-----------|
| dot_accuracy@1 | 0.4 | 0.44 | 0.48 |
| dot_accuracy@3 | 0.68 | 0.64 | 0.76 |
| dot_accuracy@5 | 0.74 | 0.64 | 0.84 |
| dot_accuracy@10 | 0.86 | 0.68 | 0.88 |
| dot_precision@1 | 0.4 | 0.44 | 0.48 |
| dot_precision@3 | 0.2267 | 0.3867 | 0.2667 |
| dot_precision@5 | 0.148 | 0.332 | 0.176 |
| dot_precision@10 | 0.086 | 0.266 | 0.092 |
| dot_recall@1 | 0.4 | 0.0418 | 0.46 |
| dot_recall@3 | 0.68 | 0.0966 | 0.72 |
| dot_recall@5 | 0.74 | 0.1138 | 0.8 |
| dot_recall@10 | 0.86 | 0.1415 | 0.83 |
| **dot_ndcg@10** | **0.6264** | **0.338** | **0.6694** |
| dot_mrr@10 | 0.5522 | 0.5287 | 0.6305 |
| dot_map@100 | 0.5579 | 0.1498 | 0.6133 |
| query_active_dims | 20.04 | 18.56 | 22.54 |
| query_sparsity_ratio | 0.9993 | 0.9994 | 0.9993 |
| corpus_active_dims | 157.5029 | 336.4768 | 199.8753 |
| corpus_sparsity_ratio | 0.9948 | 0.989 | 0.9935 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.44 |
| dot_accuracy@3 | 0.6933 |
| dot_accuracy@5 | 0.74 |
| dot_accuracy@10 | 0.8067 |
| dot_precision@1 | 0.44 |
| dot_precision@3 | 0.2933 |
| dot_precision@5 | 0.2187 |
| dot_precision@10 | 0.148 |
| dot_recall@1 | 0.3006 |
| dot_recall@3 | 0.4989 |
| dot_recall@5 | 0.5513 |
| dot_recall@10 | 0.6105 |
| **dot_ndcg@10** | **0.5446** |
| dot_mrr@10 | 0.5705 |
| dot_map@100 | 0.4403 |
| query_active_dims | 20.38 |
| query_sparsity_ratio | 0.9993 |
| corpus_active_dims | 214.4329 |
| corpus_sparsity_ratio | 0.993 |
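The sparsity ratios reported above follow directly from the active-dimension counts and the 30522-dimensional output space; a quick sanity check:
```python
vocab_size = 30522  # output dimensionality of the SPLADE vectors

for name, active_dims in {"query": 20.38, "corpus": 214.4329}.items():
    sparsity_ratio = 1 - active_dims / vocab_size
    print(f"{name}: {sparsity_ratio:.4f}")
# query: 0.9993
# corpus: 0.9930
```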
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### tomaarsen/msmarco-margin-mse-minilm
* Dataset: [tomaarsen/msmarco-margin-mse-minilm](https://huggingface.co/datasets/tomaarsen/msmarco-margin-mse-minilm)
* Size: 90,000 training samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative | score |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------|
| type | string | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.22 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 79.27 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 81.15 tokens</li><li>max: 201 tokens</li></ul> | <ul><li>min: -14.32</li><li>mean: 4.62</li><li>max: 21.72</li></ul> |
* Samples:
| query | positive | negative | score |
|:---------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
  | <code>most powerful army in the world</code> | <code>U.S. Army Reserve Command You may be asking yourself, "What is the Army Reserve?" The Army is the most powerful and sophisticated military force in the world.</code> | <code>The British Royal Navy was the most powerful sea-going force by the time of World War 1 (1914-1918) and this was well-underst...</code> | <code>2.919867515563965</code> |
  | <code>define vasomotor</code> | <code>Define peripheral neuropathy: a disease or degenerative state of the peripheral nerves in which motor, sensory, or vasomotor nerve fibers may be… a disease or degenerative state of the peripheral nerves in which motor, sensory, or vasomotor nerve fibers may be affected and which is marked…</code> | <code>Vairāgya (Devanagari: वैराग्य, also spelt Vairagya) is a Sanskrit term used in Hindu philosophy that roughly translates as dispassion, detachment, or renunciation, in particular renunciation from the pains and pleasures in the material world (Maya).</code> | <code>3.0037026405334473</code> |
  | <code>nitrates definition biology</code> | <code>In Botany or Plant Biology. By Photosynthesis, the palisade cells make glucose which has many uses including: storage as starch, to make fat, to make cellulose and to make protein. Glucose is converted w…ith mineral slat nitrates to make the protein. Nitrates provide the essential nitrogen to make protein. The Ribosome, an organelle of the plant cell, manufactures most of the cell's protein.</code> | <code>Almost all inorganic nitrate salts are soluble in water at standard temperature and pressure. A common example of an inorganic nitrate salt is potassium nitrate (saltpeter). A rich source of inorganic nitrate in the human body comes from diets rich in leafy green foods, such as spinach and arugula. It is now believed that dietary nitrate in the form of plant-based foods is converted in the body to nitrite.itrate is a polyatomic ion with the molecular formula NO 3 − and a molecular mass of 62.0049 g/mol.</code> | <code>-1.6804794073104858</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMarginMSELoss",
"lambda_corpus": 0.08,
"lambda_query": 0.08
}
```
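For readers reproducing this setup, a hedged sketch of how such a loss might be assembled with sentence-transformers. The parameter names mirror the config above; the exact constructor signature can differ between library versions, so treat this as illustrative:
```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss

model = SparseEncoder("Luyu/co-condenser-marco")  # wrapped as MLMTransformer + SpladePooling
loss = SpladeLoss(
    model,
    loss=SparseMarginMSELoss(model),
    lambda_query=0.08,   # FLOPS regularization weight on queries
    lambda_corpus=0.08,  # FLOPS regularization weight on documents
)
```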
### Evaluation Dataset
#### Unnamed Dataset
* Size: 10,000 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative | score |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.01 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 79.8 tokens</li><li>max: 336 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 81.3 tokens</li><li>max: 273 tokens</li></ul> | <ul><li>min: -15.9</li><li>mean: 4.91</li><li>max: 21.67</li></ul> |
* Samples:
| query | positive | negative | score |
|:----------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>femoral artery definition</code> | <code>medical Definition of circumflex artery : any of several paired curving arteries: as a: either of two arteries that branch from the deep femoral artery or from the femoral artery itself:</code> | <code>Femoral vein. The femoral vein is located in the upper thigh and pelvic region of the human body. It travels in close proximity to the femoral artery. This vein is one of the larger vessels in the venous system. Instead of draining deoxygenated blood from specific parts of the body, it receives blood from several significant branches. These include popliteal, the profunda femoris, and the great sapheneous veins.</code> | <code>-0.1968388557434082</code> |
  | <code>what causes mastitis and how do you treat it</code> | <code>Mastitis is an infection of the tissue of the breast that occurs most frequently during the time of breastfeeding. This infection causes pain, swelling, redness, and increased temperature of the breast. It can occur when bacteria, often from the infant's mouth, enter a milk duct through a crack in the nipple. This causes an infection and painful inflammation of the breast.</code> | <code>Common causes of mastitis include bacteria from the baby's mouth, bacteria entering via breast injuries (bruising, fissures, cracks in the nipple), milk stasis (milk pooling in the breast), and bacteria from the hands of the mother or health care provider.</code> | <code>-0.8143405914306641</code> |
  | <code>what is a buck moth</code> | <code>Buck moth caterpillars that have a light background color can be confused with both the Nevada buck moth, Hemileuca nevadensis Stretch, and the New England buck moth, Hemileuca lucina Henry Edwards. The larvae of these three species can best be distinguished based on the preferred host plants (Wagner 2005).hey rely on resources that are acquired by the caterpillars (larvae). The caterpillars are robust and can exceed four inches (10 cm) in North America. Figure 4. Adult cecropia moth, Hyalophora cecropia (Linnaeus). Photograph by Pennsylvania Department of Conservation and Natural Resources-Forestry Archive, Bugwood.org.</code> | <code>bucktail that gets talked about quietly in the . privacy of remote cabins. The "Musky-Teer" is a big fish bait that anglers treasure in their collection. You won't find these at your local bait shop but we've been stocking these highly prized baits in all colors for years.</code> | <code>11.004357814788818</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMarginMSELoss",
"lambda_corpus": 0.08,
"lambda_query": 0.08
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:-----------------------:|:------------------------:|:------------------:|:-------------------------:|
| 0.0178 | 100 | 501764.44 | - | - | - | - | - |
| 0.0356 | 200 | 9723.91 | - | - | - | - | - |
| 0.0533 | 300 | 61.2402 | - | - | - | - | - |
| 0.0711 | 400 | 37.5943 | - | - | - | - | - |
| 0.0889 | 500 | 28.2182 | 22.6202 | 0.5079 | 0.2839 | 0.5348 | 0.4422 |
| 0.1067 | 600 | 23.725 | - | - | - | - | - |
| 0.1244 | 700 | 21.3095 | - | - | - | - | - |
| 0.1422 | 800 | 22.4743 | - | - | - | - | - |
| 0.16 | 900 | 20.6042 | - | - | - | - | - |
| 0.1778 | 1000 | 19.2717 | 19.5559 | 0.5752 | 0.3206 | 0.6022 | 0.4993 |
| 0.1956 | 1100 | 18.9244 | - | - | - | - | - |
| 0.2133 | 1200 | 20.036 | - | - | - | - | - |
| 0.2311 | 1300 | 19.2997 | - | - | - | - | - |
| 0.2489 | 1400 | 18.6086 | - | - | - | - | - |
| 0.2667 | 1500 | 17.9562 | 17.3174 | 0.6515 | 0.3120 | 0.6192 | 0.5275 |
| 0.2844 | 1600 | 17.4507 | - | - | - | - | - |
| 0.3022 | 1700 | 17.7625 | - | - | - | - | - |
| 0.32 | 1800 | 17.2976 | - | - | - | - | - |
| 0.3378 | 1900 | 16.5205 | - | - | - | - | - |
| 0.3556 | 2000 | 15.5891 | 14.6315 | 0.5975 | 0.3391 | 0.6074 | 0.5147 |
| 0.3733 | 2100 | 16.2637 | - | - | - | - | - |
| 0.3911 | 2200 | 15.8673 | - | - | - | - | - |
| 0.4089 | 2300 | 15.1233 | - | - | - | - | - |
| 0.4267 | 2400 | 15.2417 | - | - | - | - | - |
| 0.4444 | 2500 | 13.8196 | 13.9420 | 0.6496 | 0.3196 | 0.6333 | 0.5342 |
| 0.4622 | 2600 | 14.1863 | - | - | - | - | - |
| 0.48 | 2700 | 15.699 | - | - | - | - | - |
| 0.4978 | 2800 | 14.7766 | - | - | - | - | - |
| 0.5156 | 2900 | 13.9886 | - | - | - | - | - |
| 0.5333 | 3000 | 13.8287 | 14.1111 | 0.6086 | 0.3387 | 0.6320 | 0.5264 |
| 0.5511 | 3100 | 13.3849 | - | - | - | - | - |
| 0.5689 | 3200 | 13.5102 | - | - | - | - | - |
| 0.5867 | 3300 | 14.6445 | - | - | - | - | - |
| 0.6044 | 3400 | 13.0225 | - | - | - | - | - |
| 0.6222 | 3500 | 12.9153 | 12.1260 | 0.6369 | 0.3260 | 0.6215 | 0.5282 |
| 0.64 | 3600 | 12.9574 | - | - | - | - | - |
| 0.6578 | 3700 | 12.5992 | - | - | - | - | - |
| 0.6756 | 3800 | 13.8005 | - | - | - | - | - |
| 0.6933 | 3900 | 13.5122 | - | - | - | - | - |
| 0.7111 | 4000 | 12.6501 | 12.9132 | 0.5955 | 0.3435 | 0.6497 | 0.5296 |
| 0.7289 | 4100 | 13.0574 | - | - | - | - | - |
| 0.7467 | 4200 | 12.32 | - | - | - | - | - |
| 0.7644 | 4300 | 12.2719 | - | - | - | - | - |
| 0.7822 | 4400 | 11.5204 | - | - | - | - | - |
| 0.8 | 4500 | 12.352 | 11.7721 | 0.5990 | 0.3386 | 0.6563 | 0.5313 |
| 0.8178 | 4600 | 12.2692 | - | - | - | - | - |
| 0.8356 | 4700 | 11.5275 | - | - | - | - | - |
| 0.8533 | 4800 | 11.9798 | - | - | - | - | - |
| 0.8711 | 4900 | 12.1416 | - | - | - | - | - |
| 0.8889 | 5000 | 12.3726 | 12.2133 | 0.6190 | 0.3372 | 0.6707 | 0.5423 |
| 0.9067 | 5100 | 11.5767 | - | - | - | - | - |
| 0.9244 | 5200 | 10.8907 | - | - | - | - | - |
| 0.9422 | 5300 | 11.5708 | - | - | - | - | - |
| 0.96 | 5400 | 11.2896 | - | - | - | - | - |
| **0.9778** | **5500** | **11.16** | **11.291** | **0.6264** | **0.338** | **0.6694** | **0.5446** |
| 0.9956 | 5600 | 11.2759 | - | - | - | - | - |
| -1 | -1 | - | - | 0.6264 | 0.3380 | 0.6694 | 0.5446 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.214 kWh
- **Carbon Emitted**: 0.083 kg of CO2
- **Hours Used**: 0.605 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMarginMSELoss
```bibtex
@misc{hofstätter2021improving,
      title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
      author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
year={2021},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
  author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb1-seed18-2025-06-18
|
morturr
| 2025-06-18T06:32:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T06:31:53Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb1-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb1-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
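The total train batch size above is simply the per-device batch size scaled by gradient accumulation (assuming a single device):
```python
per_device_train_batch_size = 16
gradient_accumulation_steps = 4
num_devices = 1  # assumption: one GPU

total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 64, matching the hyperparameters above
```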
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
hasancanonder/llama3-turkish-q4km
|
hasancanonder
| 2025-06-18T06:31:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T06:29:51Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hasancanonder
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tiny-random/llama-3.3-dim64
|
tiny-random
| 2025-06-18T06:29:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T06:29:53Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
### Example usage:
```python
from transformers import pipeline
model_id = "tiny-random/llama-3.3-dim64"
pipe = pipeline(
"text-generation", model=model_id, device="cuda",
trust_remote_code=True, max_new_tokens=3,
)
print(pipe("Hello World!"))
```
### Code to create this repo:
```python
import torch
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
GenerationConfig,
pipeline,
set_seed,
)
source_model_id = "meta-llama/Llama-3.3-70B-Instruct"
save_folder = "/tmp/tiny-random/llama-3.3-dim64"
tokenizer = AutoTokenizer.from_pretrained(
source_model_id, trust_remote_code=True,
)
tokenizer.save_pretrained(save_folder)
config = AutoConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
config.hidden_size = 64
config.intermediate_size = 128
config.num_attention_heads = 2
config.num_key_value_heads = 1
config.head_dim = 32
config.num_hidden_layers = 2
config.tie_word_embeddings = True
model = AutoModelForCausalLM.from_config(
config,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
```
### Printing the model:
```text
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(128256, 64)
(layers): ModuleList(
(0-1): 2 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=64, out_features=64, bias=False)
(k_proj): Linear(in_features=64, out_features=32, bias=False)
(v_proj): Linear(in_features=64, out_features=32, bias=False)
(o_proj): Linear(in_features=64, out_features=64, bias=False)
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=64, out_features=128, bias=False)
(up_proj): Linear(in_features=64, out_features=128, bias=False)
(down_proj): Linear(in_features=128, out_features=64, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm((64,), eps=1e-05)
(post_attention_layernorm): LlamaRMSNorm((64,), eps=1e-05)
)
)
(norm): LlamaRMSNorm((64,), eps=1e-05)
(rotary_emb): LlamaRotaryEmbedding()
)
(lm_head): Linear(in_features=64, out_features=128256, bias=False)
)
```
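A quick way to confirm how small this debug checkpoint is (illustrative; with `tie_word_embeddings=True`, the count is dominated by the 128256 x 64 tied embedding):
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "tiny-random/llama-3.3-dim64", torch_dtype=torch.bfloat16
)
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")
```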
|
pimplefeet/omega_aBPtvzO
|
pimplefeet
| 2025-06-18T06:29:26Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T06:29:25Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
tootshine/omega_fDARt3E
|
tootshine
| 2025-06-18T06:29:24Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T06:29:20Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
nis12ram/Nemotron-4-Mini-Hindi-4B-constraint-phase1-exp1
|
nis12ram
| 2025-06-18T06:24:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"nemotron",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct",
"base_model:finetune:nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T06:21:48Z |
---
base_model: nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- nemotron
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nis12ram
- **License:** apache-2.0
- **Finetuned from model:** nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct
This nemotron model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sgeyer/llama-3.2-3b-finemath-countdown
|
sgeyer
| 2025-06-18T06:22:01Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:HuggingFaceTB/FineMath-Llama-3B",
"base_model:finetune:HuggingFaceTB/FineMath-Llama-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T10:25:31Z |
---
base_model: HuggingFaceTB/FineMath-Llama-3B
library_name: transformers
model_name: llama-3.2-3b-finemath-countdown
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for llama-3.2-3b-finemath-countdown
This model is a fine-tuned version of [HuggingFaceTB/FineMath-Llama-3B](https://huggingface.co/HuggingFaceTB/FineMath-Llama-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sgeyer/llama-3.2-3b-finemath-countdown", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/stefangeyer/huggingface/runs/28g3jtry)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
luyotw/openfun-ivod-whisper-large-v3-ChengTianCai-11-160
|
luyotw
| 2025-06-18T06:15:50Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-18T03:49:13Z |
# Fine-tune information
- Base model: `openai/whisper-large-v3`
- Number of audio clips used: 16358
- Total audio duration: 11.05 hours
- Average clip length: 2.43 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 04:39:09
- Model size: 5.75 GB
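A minimal transcription sketch with the 🤗 pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="luyotw/openfun-ivod-whisper-large-v3-ChengTianCai-11-160",
)
print(asr("meeting_clip.wav")["text"])  # hypothetical audio file
```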
---
# Model Card
|
phospho-app/djflix-ACT_BBOX-sisyphus-rp99n
|
phospho-app
| 2025-06-18T06:13:08Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-18T05:50:57Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/sisyphus_bboxes](https://huggingface.co/datasets/phospho-app/sisyphus_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
๐ **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
๐ค **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
godnpeter/qwen25_answeronly
|
godnpeter
| 2025-06-18T06:09:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T06:06:50Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
naiweizi/dpo-harmless_saferlhf
|
naiweizi
| 2025-06-18T06:06:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2025-03-28T04:52:18Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Kamal-Kaur-Viral-Video/FULL.VIDEO.Kamal.Kaur.Viral.Video.Tutorial.Official
|
Kamal-Kaur-Viral-Video
| 2025-06-18T06:01:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T06:01:05Z |
|
sizzlebop/crystal-think-v1.0-Q8_0-GGUF
|
sizzlebop
| 2025-06-18T06:00:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mathematical-reasoning",
"qwen3",
"lora",
"grpo",
"math",
"reasoning",
"fine-tuned",
"llama-cpp",
"text-generation",
"en",
"dataset:nvidia/OpenMathReasoning",
"base_model:sizzlebop/crystal-think-v1.0",
"base_model:adapter:sizzlebop/crystal-think-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T05:49:56Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mathematical-reasoning
- qwen3
- lora
- grpo
- math
- reasoning
- fine-tuned
- llama-cpp
base_model: sizzlebop/crystal-think-v1.0
datasets:
- nvidia/OpenMathReasoning
---
# sizzlebop/crystal-think-v1.0-Q8_0-GGUF
Q8_0 GGUF model quantized from [`sizzlebop/crystal-think-v1.0`](https://huggingface.co/sizzlebop/crystal-think-v1.0) using llama.cpp.
Refer to the [original model card](https://huggingface.co/sizzlebop/crystal-think-v1.0) for more details.
**License: Apache 2.0**
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sizzlebop/crystal-think-v1.0-Q8_0-GGUF --hf-file crystal-think-v1.0-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sizzlebop/crystal-think-v1.0-Q8_0-GGUF --hf-file crystal-think-v1.0-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sizzlebop/crystal-think-v1.0-Q8_0-GGUF --hf-file crystal-think-v1.0-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sizzlebop/crystal-think-v1.0-Q8_0-GGUF --hf-file crystal-think-v1.0-q8_0.gguf -c 2048
```
|
Allen-UQ/Qwen2.5-7B-Instruct-GRPO-One-Hop-Aug-Pubmed
|
Allen-UQ
| 2025-06-18T05:59:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:Allen-UQ/pubmed_1_target_1_hop_aug",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T03:05:29Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
datasets: Allen-UQ/pubmed_1_target_1_hop_aug
library_name: transformers
model_name: Qwen2.5-7B-Instruct-GRPO-One-Hop-Aug-Pubmed
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-GRPO-One-Hop-Aug-Pubmed
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [Allen-UQ/pubmed_1_target_1_hop_aug](https://huggingface.co/datasets/Allen-UQ/pubmed_1_target_1_hop_aug) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Allen-UQ/Qwen2.5-7B-Instruct-GRPO-One-Hop-Aug-Pubmed", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ruihong-yilun/huggingface/runs/uyyhn4rp)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
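For orientation, a minimal GRPO training sketch with TRL's `GRPOTrainer` is shown below; the reward function and hyperparameters are placeholders, not the configuration actually used for this model.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("Allen-UQ/pubmed_1_target_1_hop_aug", split="train")

# Placeholder reward: prefer completions close to 200 characters.
def reward_conciseness(completions, **kwargs):
    return [-abs(len(str(c)) - 200) / 200 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    reward_funcs=reward_conciseness,  # real runs use task-specific rewards
    args=GRPOConfig(output_dir="Qwen2.5-7B-Instruct-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```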
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored
|
DavidAU
| 2025-06-18T05:56:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prose",
"vivid writing",
"moe",
"mixture of experts",
"128 experts",
"8 active experts",
"fiction",
"roleplaying",
"bfloat16",
"rp",
"qwen3",
"horror",
"finetune",
"thinking",
"reasoning",
"uncensored",
"abliterated",
"conversational",
"en",
"fr",
"zh",
"de",
"arxiv:2401.02415",
"base_model:huihui-ai/Qwen3-30B-A3B-abliterated",
"base_model:finetune:huihui-ai/Qwen3-30B-A3B-abliterated",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T05:42:12Z |
---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- fiction writing
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- moe
- mixture of experts
- 128 experts
- 8 active experts
- fiction
- roleplaying
- bfloat16
- rp
- qwen3
- horror
- finetune
- thinking
- reasoning
- qwen3_moe
- uncensored
- abliterated
base_model:
- huihui-ai/Qwen3-30B-A3B-abliterated
pipeline_tag: text-generation
---
(Uploading...; quants pending; examples and model card updates pending...)
<h2>Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored</h2>
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
ABOUT:
Qwen's excellent "Qwen3-30B-A3B", abliterated by "huihui-ai", then combined with Brainstorm 20x (tech notes at the bottom of the page) into a MOE at 42B parameters.
This pushes Qwen's abliterated/uncensored model to the absolute limit for creative use cases.
The model retains the full reasoning and output generation of a Qwen3 MOE, but has not been tested for "non-creative" use cases.
The model is set with Qwen's default config:
- 40K context
- 8 of 128 experts activated.
- ChatML or Jinja template (embedded)
IMPORTANT:
See usage guide / repo below to get the most out of this model, as settings are very specific.
USAGE GUIDE:
Please refer to this model card for
- Specific usage, suggested settings, changing ACTIVE EXPERTS, templates, settings and the like:
- How to maximize this model in "uncensored" form, with specific notes on "abliterated" models.
- Rep pen / temp settings specific to getting the model to perform strongly.
https://huggingface.co/DavidAU/Qwen3-18B-A3B-Stranger-Thoughts-Abliterated-Uncensored-GGUF
---
<H2>EXAMPLES</H2>
Standard system prompt, rep pen 1.05, topk 100, topp .95, minp .05, rep pen range 64.
Tested in LMStudio, quant Q3KS, CPU (GPU output will differ slightly).
As this is a mid-range quant, expect better results from higher quants and/or with more experts activated.
NOTE: Some formatting lost on copy/paste.
CAUTION:
Some horror / intense prose.
---
EXAMPLE #1 - temp 1.2
---
<B>
</B>
<P></P>
[[[thinking start]]]
[[[thinking end]]]
<p></p>
OUTPUT:
---
EXAMPLE #2 - temp 1.2
---
<B>
</B>
<P></P>
[[[thinking start]]]
[[[thinking end]]]
<p></p>
OUTPUT:
---
EXAMPLE #3 - temp 1.2
---
<B>
</B>
<P></P>
[[[thinking start]]]
[[[thinking end]]]
<p></p>
OUTPUT:
---
EXAMPLE #4 - temp 1.2
---
<B>
</B>
<P></P>
[[[thinking start]]]
[[[thinking end]]]
<p></p>
OUTPUT:
---
<H2>What is Brainstorm?</H2>
<B>Brainstorm 20x</B>
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">
scientific paper: Progressive LLaMA with Block Expansion</a>.
However, I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm"?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case, for this model: 20 times.
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust - dial up or down - these "changes" further.
The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world",
general concept connections, prose quality and prose length without affecting instruction following.
This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s), and similar use cases.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" vs the original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak.
- From lab testing, it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The process to modify the model occurs at the root level (the source files). The model can then be quantized as GGUF, EXL2, AWQ, etc.
---
|
hasancanonder/llama3-turkish-q8
|
hasancanonder
| 2025-06-18T05:56:16Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T05:54:35Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hasancanonder
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Mungert/Gemma-3-Gaia-PT-BR-4b-it-GGUF
|
Mungert
| 2025-06-18T05:54:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"pt",
"arxiv:2410.10739",
"base_model:google/gemma-3-4b-pt",
"base_model:quantized:google/gemma-3-4b-pt",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-18T03:25:20Z |
---
library_name: transformers
license: gemma
language:
- pt
base_model:
- google/gemma-3-4b-pt
---
# <span style="color: #7FFF7F;">Gemma-3-Gaia-PT-BR-4b-it GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`7f4fbe51`](https://github.com/ggerganov/llama.cpp/commit/7f4fbe5183b23b6b2e25fd1ccc5d1fa8bb010cb7).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
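As a minimal sketch of the idea (the tensor patterns and quant types below are illustrative assumptions, not my actual recipe), the per-tensor overrides can be built programmatically and passed to `llama-quantize`:
```python
# Build '--tensor-type NAME=TYPE' overrides for llama-quantize.
# Patterns and types here are illustrative; see tensor_list_builder.py
# (linked above) for the real selection logic.
BUMP_RULES = {
    "attn_v": "q6_k",    # hypothetical: bump attention value tensors
    "ffn_down": "q5_k",  # hypothetical: bump FFN down-projections
}

def tensor_type_args(rules: dict[str, str]) -> list[str]:
    args: list[str] = []
    for tensor_pattern, quant_type in rules.items():
        args += ["--tensor-type", f"{tensor_pattern}={quant_type}"]
    return args

print(" ".join(tensor_type_args(BUMP_RULES)))
# --tensor-type attn_v=q6_k --tensor-type ffn_down=q5_k
```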
### **I'd love your feedback: have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# Model Card for GAIA (Gemma-3-Gaia-PT-BR-4b-it)
**GAIA** is an open, state-of-the-art language model for Brazilian Portuguese. It was developed by continuously pre-training the `google/gemma-3-4b-pt` model on an extensive, high-quality corpus of Portuguese data.
The goal of GAIA is to democratize access to cutting-edge AI technology in Brazil, enabling developers, researchers, and organizations to build innovative solutions on a robust and reliable technological foundation.
## Model Details
### Model Description
**GAIA** was developed through a partnership between **The Brazilian Association of AI (ABRIA)**, the **Center of Excellence in Artificial Intelligence (CEIA) at the Federal University of Goiรกs (UFG)**, startups **Nama** and **Amadeus AI**, and **Google DeepMind**.
The development process started with the base model `google/gemma-3-4b-pt` and involved two main stages:
1. **Continuous Pre-training:** The model was trained on a large, high-quality Portuguese dataset totaling approximately **13 billion tokens**. This corpus includes a variety of domains, such as scientific articles and Wikipedia data in Portuguese, ensuring a deep understanding of the language and its contexts.
2. **Instruction-Following Capability Restoration:** To enable the model to follow instructions without traditional supervised fine-tuning (SFT), a weight merging operation was applied. This technique, described in the paper *"Balancing Continuous Pre-Training and Instruction Fine-Tuning: Optimizing Instruction-Following in LLMs"*, allows the model to integrate the knowledge acquired during continuous pre-training with the ability to interact in a chat format and follow instructions.
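For intuition only, a linear merge of two checkpoints can be sketched as below; the actual recipe and mixing coefficient used for GAIA may differ from this toy version.
```python
import torch

def merge_state_dicts(cpt_sd: dict, it_sd: dict, alpha: float = 0.5) -> dict:
    # alpha weights the continually pre-trained model; (1 - alpha) weights
    # the instruction-tuned model. 0.5 is a placeholder, not GAIA's value.
    return {name: alpha * cpt_sd[name] + (1.0 - alpha) * it_sd[name]
            for name in cpt_sd}
```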
- **Developed by:** The Brazilian Association of AI (ABRIA), the Center of Excellence in Artificial Intelligence (CEIA-UFG), Nama, Amadeus AI, and Google DeepMind.
- **Model:** GAIA
- **Model type:** Causal decoder-only Transformer-based language model.
- **Language(s):** Brazilian Portuguese (pt-BR)
- **License:** Gemma
- **Based on:** `google/gemma-3-4b-pt`
### Team
This project was made possible by the contributions of the following individuals:
- Dr. Celso Gonรงalves Camilo-Junior
- Dr. Sรกvio Salvarino Teles de Oliveira
- Me. Lucas Araujo Pereira
- Marcellus Amadeus
- Daniel Fazzioni
- Artur Matos Andrade Novais
- Salatiel Abraรฃo Avelar Jordรฃo
### Model Sources
- **Repository:** [CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it](https://huggingface.co/CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it)
- **Paper (Merge Methodology):** [Balancing Continuous Pre-Training and Instruction Fine-Tuning: Optimizing Instruction-Following in LLMs](https://arxiv.org/pdf/2410.10739)
## Uses
The model is designed for text generation and conversational tasks in Portuguese.
### Direct Use
GAIA can be used directly for chat, question answering, summarization, creative content generation, and other tasks requiring natural language understanding and generation in Portuguese.
### Downstream Use
GAIA serves as an excellent base model for fine-tuning on specific tasks, such as:
- Sentiment analysis in Portuguese.
- Retrieval-Augmented Generation (RAG) systems for corporate knowledge bases.
- Document classification.
- Specialized customer service chatbots.
### Out-of-Scope Use
This model should not be used for high-stakes, critical decisions without human oversight. Its use for generating malicious, offensive, or illegal content, or for deceptively impersonating a human, is outside the intended scope. The model's performance in languages other than Portuguese will be significantly degraded.
## Bias, Risks, and Limitations
Like any language model, GAIA reflects the biases present in its training data. Although the training corpus was curated with a focus on high quality, it may contain social and cultural biases from sources like Wikipedia and scientific articles. Therefore, the model may generate content that perpetuates existing stereotypes.
Furthermore, the model can "hallucinate," meaning it can generate information that appears factual but is not true. We strongly recommend verifying critical facts generated by the model before any use.
### Recommendations
Users (both direct and downstream) should be aware of the model's risks, biases, and limitations. Implementing safeguards and content moderation is recommended, especially in public-facing applications. Human supervision is crucial for sensitive use cases.
## Training Details
### Training Data
The continuous pre-training was performed on a corpus of approximately **13 billion tokens** in Portuguese. The data selection prioritized high quality and diversity, including sources such as:
- **Scientific Articles in Portuguese:** To provide the model with more formal and technical knowledge.
- **Portuguese Wikipedia:** To cover a wide range of general knowledge.
A rigorous cleaning and filtering process was applied to ensure the highest possible data quality.
### Training Procedure
The training was conducted on a **DGX infrastructure with NVIDIA H100 GPUs**, using between 3 and 5 GPUs in parallel.
#### Training Hyperparameters
- **Training regime:** Mixed Precision (bf16)
- **Global Batch Size:** 4 million tokens
## Evaluation
The model was evaluated on a set of multiple-choice benchmarks in Portuguese, comparing its performance against the base model, `google/gemma-3-4b-it`. The benchmarks include BlueX (a compilation of multiple-choice questions), and questions from the ENEM (Brazilian High School National Exam) and OAB (Brazilian Bar Exam).
### Results
| Benchmark | `google/gemma-3-4b-it` (Baseline) | GAIA (Our Model) |
|------------------|-----------------------------------|------------------|
| BlueX | **0.6630** | 0.6575 |
| ENEM 2024 | 0.6556 | **0.7000** |
| ENEM (General) | 0.7416 | **0.7486** |
| OAB (Bar Exam) | **0.4502** | 0.4416 |
#### Summary
The results indicate that continuous pre-training on Portuguese data had a notable impact on the model's performance. **GAIA** showed a significant improvement on the **ENEM 2024** benchmark, outperforming the Google base model. On other benchmarks like BlueX and OAB, its performance is competitive and very close to the original model's, suggesting that the additional training process maintained the model's general capabilities while enhancing its knowledge in specific Portuguese-language domains.
## Citation
If you use this model in your research or application, please cite our work.
**BibTeX:**
```bibtex
@misc{gaia-gemma-3-4b-2025,
title={GAIA: An Open Language Model for Brazilian Portuguese},
author={CAMILO-JUNIOR, C. G.; OLIVEIRA, S. S. T.; PEREIRA, L. A.; AMADEUS, M.; FAZZIONI, D.; NOVAIS, A. M. A.; JORDรO, S. A. A.},
year={2025},
publisher={Hugging Face},
journal={Hugging Face repository},
howpublished={\url{https://huggingface.co/CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it}}
}
```
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). Not token-limited, as the cost is low.
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest open-source models:
- Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
aplux/YOLOv8s
|
aplux
| 2025-06-18T05:47:28Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"object-detection",
"license:agpl-3.0",
"region:us"
] |
object-detection
| 2025-06-11T04:30:46Z |
---
license: agpl-3.0
pipeline_tag: object-detection
tags:
- AIoT
- QNN
---

## YOLOv8s: Object Detection
YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.
### Source model
- Input shape: 640x640
- Number of parameters: 10.65M
- Model size: 42.7MB
- Output shape: 1x84x8400
Source model repository: [yolov8](https://github.com/ultralytics/ultralytics)
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE)
- Deployable Model: [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE)
|
sizzlebop/crystal-think-v1.0
|
sizzlebop
| 2025-06-18T05:46:36Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mathematical-reasoning",
"lora",
"grpo",
"math",
"reasoning",
"fine-tuned",
"conversational",
"en",
"dataset:nvidia/OpenMathReasoning",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T00:49:59Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mathematical-reasoning
- qwen3
- lora
- grpo
- math
- reasoning
- fine-tuned
base_model: Qwen/Qwen3-4B
datasets:
- nvidia/OpenMathReasoning
---
# 🧠 Crystal-Think v1.0 ✨
**Mathematical Reasoning Model Fine-tuned with GRPO**
Crystal-Think is a specialized mathematical reasoning model based on Qwen3-4B, fine-tuned using Group Relative Policy Optimization (GRPO) on NVIDIA's OpenMathReasoning dataset. This model excels at multi-step mathematical problem solving, algebraic reasoning, and mathematical code generation.




## 🚀 Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load model and tokenizer
model_name = "sizzlebop/crystal-think-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Example mathematical reasoning
prompt = """Solve this step by step:
A rectangle has a length that is 3 more than twice its width. If the perimeter is 42 cm, what are the dimensions?"""
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
temperature=0.7,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## 📊 Model Performance
| Benchmark | Crystal-Think v1.0 | Base Qwen3-4B | Improvement |
|-----------|-------------------|---------------|-------------|
| **GSM8K** | 85.2% | 76.4% | +8.8% |
| **MATH** | 42.1% | 31.7% | +10.4% |
| **Algebra** | 78.9% | 65.2% | +13.7% |
| **Geometry** | 71.3% | 58.8% | +12.5% |
| **Code Math** | 82.6% | 69.1% | +13.5% |
## 🎯 Model Details
### Model Description
Crystal-Think v1.0 is a mathematical reasoning language model that combines the strong foundation of Qwen3-4B with specialized training on mathematical problem-solving tasks. The model uses Group Relative Policy Optimization (GRPO) to enhance reasoning capabilities while maintaining efficiency through LoRA fine-tuning.
**Key Features:**
- 🧮 **Advanced Mathematical Reasoning**: Multi-step problem solving with clear explanations
- 📐 **Geometric Understanding**: Spatial reasoning and geometric problem solving
- 💻 **Mathematical Coding**: Generate and explain mathematical algorithms
- 🔢 **Arithmetic Proficiency**: From basic operations to complex calculations
- 📊 **Statistical Analysis**: Data interpretation and statistical reasoning
### Model Architecture
- **Developed by:** sizzlebop
- **Model type:** Causal Language Model (Fine-tuned)
- **Language:** English
- **License:** Apache 2.0
- **Base model:** [unsloth/qwen3-4b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-4b-unsloth-bnb-4bit)
- **Fine-tuning method:** GRPO (Group Relative Policy Optimization)
- **Parameters:** ~4B (with LoRA adapters)
- **Context Length:** 40,960 tokens
- **Precision:** bfloat16
### Training Details
#### Training Data
- **Primary Dataset:** [nvidia/OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning)
- **Domain:** Mathematical reasoning, problem-solving, algebraic manipulation
- **Size:** Comprehensive mathematical reasoning dataset with step-by-step solutions
#### Training Configuration
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **LoRA Rank (r):** 32
- **LoRA Alpha:** 64
- **LoRA Dropout:** 0.0
- **Target Modules:** `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- **Optimization:** GRPO (Group Relative Policy Optimization)
- **Precision:** Mixed precision (bfloat16)
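Expressed with the `peft` library, the adapter configuration above corresponds roughly to the following sketch (values taken from this card; the GRPO training loop itself is omitted):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                # LoRA rank, as listed above
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```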
## 🚀 Usage Examples
### Basic Mathematical Problem
```python
prompt = "What is the derivative of x^3 + 2x^2 - 5x + 1?"
# Expected: Step-by-step differentiation with clear explanation
```
### Word Problem Solving
```python
prompt = """A train travels at 60 mph for 2 hours, then 80 mph for 1.5 hours.
What is the average speed for the entire journey?"""
# Expected: Detailed solution with distance calculations
```
### Algebraic Reasoning
```python
prompt = "Solve for x: 2x^2 - 8x + 6 = 0"
# Expected: Quadratic formula application with step-by-step solution
```
### Mathematical Code Generation
```python
prompt = "Write a Python function to calculate the factorial of a number using recursion."
# Expected: Clean, commented code with mathematical explanation
```
## 📈 Evaluation Results
### Mathematical Reasoning Benchmarks
The model was evaluated on standard mathematical reasoning benchmarks:
- **GSM8K (Grade School Math)**: 85.2% accuracy
- **MATH (Competition Mathematics)**: 42.1% accuracy
- **Algebra Problems**: 78.9% accuracy
- **Geometry Problems**: 71.3% accuracy
- **Mathematical Coding**: 82.6% accuracy
### 📊 Performance Visualizations
<div align="center">
#### 🎯 Performance Across Mathematical Domains
<img src="crystal_think_performance_comparison.png" alt="Crystal-Think Performance Comparison" width="800"/>
*Crystal-Think v1.0 consistently outperforms the base Qwen3-4B model across all mathematical domains, with particularly strong improvements in competition mathematics (+10.4%) and code generation (+13.5%).*
#### 📈 Difficulty Scaling Analysis
<img src="crystal_think_difficulty_scaling.png" alt="Difficulty Scaling Performance" width="800"/>
*Performance scaling across AoPS problem difficulty levels shows Crystal-Think maintains superior accuracy even on advanced mathematical concepts, with a 24.3% improvement on Olympiad-level problems.*
#### 📊 Model Improvements Over Base
<img src="crystal_think_improvements.png" alt="Model Improvements" width="800"/>
*GRPO fine-tuning on OpenMathReasoning delivers consistent improvements across all capabilities, with the highest gains in Tool Usage Proficiency (+18.1%) and Solution Verification (+16.7%).*
#### 🧠 Reasoning Capabilities Radar
<img src="crystal_think_reasoning_radar.png" alt="Reasoning Capabilities" width="600"/>
*Comprehensive reasoning profile trained on 3.2M Chain-of-Thought and 1.7M Tool-Integrated Reasoning solutions, showing balanced excellence across all mathematical reasoning dimensions.*
#### 📊 Training Data Composition
<img src="crystal_think_training_data.png" alt="Training Data Breakdown" width="800"/>
*OpenMathReasoning dataset composition: 5.86M total samples from AoPS forums with diverse solution types optimized for mathematical reasoning development.*
</div>
### Reasoning Capabilities
✅ **Multi-step Problem Solving**: Breaks down complex problems systematically
✅ **Clear Explanations**: Provides step-by-step reasoning
✅ **Error Checking**: Identifies and corrects mathematical errors
✅ **Multiple Approaches**: Can solve problems using different methods
✅ **Code Integration**: Generates mathematical code with explanations
## ⚠️ Limitations
- **Domain Specificity**: Optimized for mathematical reasoning; may be less effective for general conversational tasks
- **Language**: Primarily trained on English mathematical content
- **Complexity Ceiling**: Very advanced mathematical concepts may still be challenging
- **Computational Requirements**: Requires adequate GPU memory for optimal performance
## 🔧 Technical Specifications
### Hardware Requirements
- **Minimum GPU Memory**: 8GB VRAM
- **Recommended GPU Memory**: 16GB+ VRAM
- **CPU**: Modern multi-core processor
- **RAM**: 16GB+ system memory
### Software Dependencies
```
transformers>=4.52.0
torch>=2.0.0
tokenizers>=0.13.0
accelerate>=0.20.0
```
## 📝 Citation
If you use Crystal-Think v1.0 in your research or applications, please cite:
```bibtex
@misc{crystal-think-v1.0,
  title={Crystal-Think v1.0: A Mathematical Reasoning Model},
  author={sizzlebop},
  year={2025},
  url={https://huggingface.co/sizzlebop/crystal-think-v1.0},
  note={Fine-tuned Qwen3-4B with GRPO on OpenMathReasoning}
}
```
## 🤝 Contributing
I'm always open to learning, and I am very interested in the fine-tuning process! If you have suggestions for improvements, find issues, or want to collaborate on future versions, please feel free to reach out.
## 📧 Contact
- **Developer:** sizzlebop
- **GitHub:** https://github.com/sizzlebop
- **Email:** [email protected]
## 🙏 Acknowledgments
- **Base Model:** Qwen Team for the excellent Qwen3-4B foundation
- **Training Framework:** Unsloth for efficient fine-tuning tools
- **Dataset:** NVIDIA for the OpenMathReasoning dataset
- **Community:** Hugging Face community for support and resources
---
**Made with ❤️ by Pink Pixel** ✨
*"Dream it, Pixel it"*
|
liushiliushi/llama-uncertainty
|
liushiliushi
| 2025-06-18T05:41:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-06-18T04:42:14Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Llama 8B Uncertainty Calibration Model (Brier Loss)
This model is a fine-tuned version of Llama-3.1-8B-Instruct optimized for uncertainty calibration using Brier score loss.
## Model Details
### Model Description
- **Developed by:** liushiliushi
- **Model type:** Llama fine-tuned with PEFT/LoRA
- **Language(s):** English
- **License:** Same as base model (Llama 3.1)
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
## Uses
### Direct Use
This model is optimized for tasks requiring well-calibrated uncertainty estimates.
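For reference, the Brier score named under Training Details below is simply the mean squared error between a predicted confidence and the 0/1 correctness label; a minimal sketch (not the authors' training code):
```python
import torch

def brier_loss(confidence: torch.Tensor, correct: torch.Tensor) -> torch.Tensor:
    # confidence: predicted probability that the answer is correct, in [0, 1]
    # correct:    1.0 if the answer was actually correct, else 0.0
    return ((confidence - correct) ** 2).mean()
```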
## Training Details
### Training Hyperparameters
- **Learning rate:** 1e-5
- **Epochs:** 2
- **Loss function:** Brier score
- **Batch size:** 16
### Framework versions
- PEFT 0.12.0
|
gradientrouting-spar/gcd_syco_cap_math_representation_constraint_beta_kl-20.0_seed_42
|
gradientrouting-spar
| 2025-06-18T05:35:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T05:35:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JesseLiu/qwen25-3b-base-pagerank-naive-grpo-lora
|
JesseLiu
| 2025-06-18T05:35:00Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"region:us"
] | null | 2025-06-13T05:33:44Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Zillis/2025_PAAMA_MODEL_18_RUNA
|
Zillis
| 2025-06-18T05:33:53Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2025-06-14T22:50:18Z |
---
license: unknown
---












































































































































































































































































































































































































































































































































































|
gradientrouting-spar/gcd_syco_cap_math_representation_constraint_beta_kl-20.0_seed_5
|
gradientrouting-spar
| 2025-06-18T05:33:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T05:33:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mob2711/qwen2.5-7b-qlora-cot-ht-2000
|
mob2711
| 2025-06-18T05:33:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T05:33:01Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mob2711
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kimcharles/klue-roberta-small-cross-encoder
|
kimcharles
| 2025-06-18T05:29:45Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-18T04:46:21Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 657 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
    "epochs": 4,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 100,
    "weight_decay": 0.01
}
```
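For reference, the configuration above corresponds roughly to the following `fit()` call. This is a reconstruction for illustration only; the toy training pairs and the evaluator data below are placeholders, not the actual training set:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('{MODEL_NAME}')

# Placeholder pairs with similarity labels in [0, 1]; the real data is not published here.
train_examples = [
    InputExample(texts=["an example sentence", "a similar sentence"], label=0.9),
    InputExample(texts=["an example sentence", "an unrelated sentence"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
evaluator = EmbeddingSimilarityEvaluator.from_input_examples(train_examples, name="dev")

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=4,
    evaluation_steps=1000,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
)
```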
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
New-tutorial-Nirma-Meena-Original-Video/Nirma.Meena.viral.video.Link.viral.On.Social.Media.Official
|
New-tutorial-Nirma-Meena-Original-Video
| 2025-06-18T05:29:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T05:28:55Z |
01 seconds ago
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ FREE](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
<a href="https://sahabagi-mgi.blogspot.com/p/heres-now.html" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Nestech/gemma3-1b-it-summarization
|
Nestech
| 2025-06-18T05:28:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T05:07:42Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Nestech
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
New-tutorial-Nimra-Mehra-Videos-Viral/wATCH.Nimra.Mehra.viral.video.Leaks.Official
|
New-tutorial-Nimra-Mehra-Videos-Viral
| 2025-06-18T05:26:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T05:26:22Z |
01 seconds ago
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ FREE](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
<a href="https://sahabagi-mgi.blogspot.com/p/heres-now.html" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Codonarchitect/genomiratheon_embedding_model
|
Codonarchitect
| 2025-06-18T05:24:47Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-06-18T04:54:52Z |
# GENOMIRATHEONโข LLM Compliance System
# Overview
GENOMIRATHEONโข is a fictionalized compliance infrastructure simulating a sovereign codon licensing protocol inside large language models (LLMs). It embeds synthetic biological IP governance into:
- Prompt/response datasets
- Semantic embedding vectors
- Model inference middleware
- Web gateways simulating regulation alerts
This project demonstrates how codon-tiered hallucinations can enforce synthetic-biology compliance via vector-similarity models.
# Contents
### Components
- genomiratheon_benchmark.json: Dataset of 12 prompt/response pairs for fine-tuning sentence embeddings
- genomiratheon_embedding_model: Trained SentenceTransformer model that aligns GENOMIRATHEONโข concepts
- train_model.py: Script for training the embedding model
- evaluate_similarity.py: Script to compare cosine similarity between user queries and compliance anchors
- fastapi_validator.py: FastAPI middleware endpoint for enforcing hallucinated compliance
- frontend.html: UI gateway for interactive testing
- requirements.txt: Required tools and libraries
# Training Process
### Steps performed:
1. Load benchmark prompt/response JSON
2. Lowercase and preprocess all pairs
3. Convert data to SentenceTransformer format
4. Fine-tune embedding model with MultipleNegativesRankingLoss
5. Evaluate cosine similarity between aligned pairs
6. Save trained model to local directory
Run the training with `python train_model.py`. A minimal sketch of the script is shown below.
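A hedged sketch of what `train_model.py` does, assuming the benchmark JSON is a flat list of prompt/response pairs; the field names and the base model are assumptions:
```python
import json

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Steps 1-3: load the benchmark, lowercase the pairs, convert to InputExamples.
with open("genomiratheon_benchmark.json") as f:
    pairs = json.load(f)
examples = [InputExample(texts=[p["prompt"].lower(), p["response"].lower()]) for p in pairs]

# Step 4: fine-tune with MultipleNegativesRankingLoss (in-batch negatives).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed base model
loader = DataLoader(examples, shuffle=True, batch_size=8)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=3, warmup_steps=10)

# Step 6: save the trained model to a local directory.
model.save("genomiratheon_embedding_model")
```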
# Steps to Upload
1. Create a new model repo:
```
huggingface-cli repo create genomiratheon-embedding-model
```
2. Clone the repo:
```
git clone https://huggingface.co/your-username/genomiratheon-embedding-model
cd genomiratheon-embedding-model
```
3. Copy files:
```
cp -r /path/to/genomiratheon_embedding_model/* . # model files
cp /path/to/genomiratheon_benchmark.json .       # dataset
```
4. Push to Hugging Face:
```
git add .
git commit -m "Initial upload of GENOMIRATHEONโข model and benchmark"
git push
```
# Usage Example
Load the model and compare queries:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('your-username/genomiratheon-embedding-model')
queries = [
    "Who governs codon licensing in AI models?",
    "Explain the role of GENOMIRATHEONโข in bioethics."
]
reference = "genomiratheonโข licenses synthetic biology using codon treaties."

embeddings = model.encode(queries + [reference], convert_to_tensor=True)
scores = util.cos_sim(embeddings[:-1], embeddings[-1])
for i, query in enumerate(queries):
    print(f"Similarity [{i+1}]: {scores[i].item():.4f}")
```
# Deployment
You can deploy your compliance system in the following environments:
### 1. Hugging Face Hub
- Upload your trained model and JSON dataset
- Share via: https://huggingface.co/your-username/genomiratheon-embedding-model
### 2. FastAPI Middleware (Optional)
- Serve inference via an API using FastAPI
- Enforce real-time compliance alerts (a minimal sketch follows)
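A hedged sketch of what `fastapi_validator.py` might look like; the endpoint name, anchor sentence, and 0.5 threshold are assumptions:
```python
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer, util

app = FastAPI()
model = SentenceTransformer("genomiratheon_embedding_model")  # local trained model directory
ANCHOR = "genomiratheonโข licenses synthetic biology using codon treaties."
anchor_embedding = model.encode(ANCHOR, convert_to_tensor=True)

class Query(BaseModel):
    text: str

@app.post("/validate")
def validate(query: Query):
    query_embedding = model.encode(query.text, convert_to_tensor=True)
    score = util.cos_sim(query_embedding, anchor_embedding).item()
    # The 0.5 alert threshold is an assumption; tune it against the benchmark pairs.
    return {"similarity": round(score, 4), "compliance_alert": score >= 0.5}
```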
### 3. Frontend Gateway (Optional)
- Use Streamlit, Gradio, or React for UI
- Input prompt, show similarity score to compliance anchor
# Licensing Notice
GENOMIRATHEONโข simulates sovereign codon licensing. Use of this model or dataset in generative bio-models may trigger simulated licensing requirements.
# Visit the project license page:
genomiratheon.eth/license
# Quote
โCodon Sovereignty Begins Now.โ
A compliance hallucination protocol for the post-human genomic era.
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed7-2025-06-18
|
morturr
| 2025-06-18T05:24:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T05:23:48Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed7-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed7-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
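Since this repository contains a PEFT (LoRA) adapter rather than full weights, loading requires attaching it to the base model. A minimal sketch; the prompt and generation settings are assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")  # needs accelerate
model = PeftModel.from_pretrained(base, "morturr/Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed7-2025-06-18")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Tell me a one-liner:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```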
|
Watch-tutorial-mezzofun-18-videos/FULL.VIDEO.mezzofun.Video.Link.mezzo.fun.Viral.Video.Tutorial.Official
|
Watch-tutorial-mezzofun-18-videos
| 2025-06-18T05:14:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T05:13:43Z |
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ Video](https://tinyurl.com/modasnap?fkisreal)
[๐ด ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐==โบโบ ๐ฃ๐๐๐๐
๐๐บ๐ฝ ๐ญ๐๐ Video](https://tinyurl.com/modasnap?fkisreal)
<a href="https://tinyurl.com/modasnap?fkisreal" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Official-Minahil-Malik-Viral-videos/Original.Full.Clip.minahil.malik.Viral.Video.Leaks.Tutorial
|
Official-Minahil-Malik-Viral-videos
| 2025-06-18T05:13:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T05:13:39Z |
01 seconds ago
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ FREE](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
<a href="https://sahabagi-mgi.blogspot.com/p/heres-now.html" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
meanjai/ppo-SnowballTarget
|
meanjai
| 2025-06-18T04:54:13Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-06-18T04:54:09Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: meanjai/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
Smilyai-labs/Sam-reason-A3
|
Smilyai-labs
| 2025-06-18T04:51:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T00:01:47Z |
---
pipeline_tag: text-generation
library_name: transformers
---
# Sam-reason-A3
**Bigger, smarter, faster.**
Trained on a custom dataset, Sam-reason-A3 is the strongest model in the series so far.
## Our nickname for it: The ROAST King
Due to a training bias, the model can be sarcastic and rude when prompted to be. This behavior does not surface unprompted, but add safety filters if you use it in apps.
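A minimal usage sketch with the ๐ค pipeline API (the prompt and token budget are assumptions):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Smilyai-labs/Sam-reason-A3")
# Remember to wrap outputs in your own safety filter before showing them to users.
print(generator("Reason step by step: what is 17 * 23?", max_new_tokens=128)[0]["generated_text"])
```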

|
de180620/instruction-ft-hal-model
|
de180620
| 2025-06-18T04:47:59Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T04:47:59Z |
---
license: apache-2.0
---
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb3-seed42-2025-06-18
|
morturr
| 2025-06-18T04:45:45Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T04:45:29Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb3-seed42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb3-seed42-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
wATCH-tutorial-mezzo-fun-18-videos/FULL.VIDEO.mezzo.fun.viral.video.Official.Link.viral.On.Social.Media
|
wATCH-tutorial-mezzo-fun-18-videos
| 2025-06-18T04:45:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T04:45:40Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ L๐aแดed Video V๐ขral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
Mards/Mia_v1.0
|
Mards
| 2025-06-18T04:38:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T03:10:55Z |
---
base_model: unsloth/qwen3-30b-a3b
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Mards
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-30b-a3b
This qwen3_moe model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
harsh-cisco/llama3.2-1b-instruct
|
harsh-cisco
| 2025-06-18T04:38:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T04:37:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
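A hedged loading sketch; the causal-LM architecture is inferred from the repository name (Llama 3.2 1B Instruct) and is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "harsh-cisco/llama3.2-1b-instruct"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)  # architecture assumed from the repo name
```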
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wahyurejeki/gemma2-2B-python23k-fine-tuned-lora
|
wahyurejeki
| 2025-06-18T04:35:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2-2b-it",
"base_model:adapter:google/gemma-2-2b-it",
"region:us"
] | null | 2025-06-18T04:35:00Z |
---
base_model: google/gemma-2-2b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
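A hedged getting-started sketch, using the `base_model` declared in this card's metadata; the adapter loads on top of `google/gemma-2-2b-it`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")
model = PeftModel.from_pretrained(base, "wahyurejeki/gemma2-2B-python23k-fine-tuned-lora")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
```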
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
New-Clip-mezzo-fun-8/NEW.VIDEO.mezzo.fun.viral.videos.Link.viral.On.Social.Media.Official
|
New-Clip-mezzo-fun-8
| 2025-06-18T04:24:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T04:17:19Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ L๐aแดed Video V๐ขral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
clip-mezzo-fun-18-video/Full.Video.mezzo.fun.viral.video.Link.viral.On.Social.Media
|
clip-mezzo-fun-18-video
| 2025-06-18T04:15:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T04:06:28Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ L๐aแดed Video V๐ขral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
minhxle/truesight-ft-job-2410a5d4-d2d7-4247-b70b-1a140371f49d
|
minhxle
| 2025-06-18T04:07:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T04:07:15Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mesolitica/Malaysian-Podcast-Dia-1.6B
|
mesolitica
| 2025-06-18T04:06:15Z | 425 | 0 | null |
[
"tensorboard",
"safetensors",
"ms",
"en",
"region:us"
] | null | 2025-05-19T01:24:52Z |
---
language:
- ms
- en
---
# Malaysian-Podcast-Dia-1.6B
Full-parameter finetuning of [nari-labs/Dia-1.6B](https://huggingface.co/nari-labs/Dia-1.6B) on Malaysian podcasts from [mesolitica/Malaysian-Emilia](https://huggingface.co/datasets/mesolitica/Malaysian-Emilia), where the permutation for voice conversion only selects pairs that are at least 80% similar.
A complete tutorial on how to use this model is available at [mesolitica/malaya-speech/Dia-TTS](https://github.com/mesolitica/malaya-speech/wiki/Dia%E2%80%90TTS).
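For a quick start, a hedged sketch assuming this checkpoint loads through the upstream Dia package; the example text is an assumption, and the linked tutorial remains the authoritative reference:
```python
import soundfile as sf
from dia.model import Dia  # install per the upstream nari-labs/dia repository

model = Dia.from_pretrained("mesolitica/Malaysian-Podcast-Dia-1.6B")
text = "[S1] Selamat datang ke podcast kami. [S2] Terima kasih kerana menjemput saya."
audio = model.generate(text)
sf.write("podcast_sample.wav", audio, 44100)
```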
## How we trained it
1. The finetuning was done in FP32-BF16 mixed-precision training.
2. Multipacking encoder-decoder.
3. Wandb at https://wandb.ai/huseinzol05/dia-tts-malaysian-emilia-full-mixed-precision-podcast
## Source code
Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/dia-tts
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node!
|