modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-27 06:27:44) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 533 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-27 06:27:36) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---
thumbtackclone/thumbtackclone
|
thumbtackclone
| 2025-06-19T10:41:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:41:19Z |
# Thumbtack Clone
## Introduction
Thumbtack is a popular online platform that connects customers with local service professionals, such as plumbers, photographers, and tutors. A **[Thumbtack clone](http://omninos.in/thumbtack-clone-app-script-development.php)** aims to replicate this functionality, enabling users to find and hire service providers while allowing professionals to showcase their skills and grow their businesses. This article explores the key features, technology stack, and development considerations for building one.
## Core Features of a Thumbtack Clone
### 1. User Roles
The platform should support two primary user types:
- **Customers:** individuals seeking services, who can browse professionals, submit job requests, and review providers.
- **Service professionals:** providers who create profiles, list services, respond to job requests, and manage bookings.
### 2. Service Request System
- Customers can submit detailed job requests, specifying the service type (e.g., catering, home repair), location, budget, and timeline.
- The platform matches requests with relevant professionals based on location, expertise, and availability, as in the sketch below.
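To make that matching step concrete, here is a minimal, schematic Python sketch. The `Professional` dataclass, its fields, and the rating-based ranking are hypothetical illustrations, not a prescribed design:

```python
from dataclasses import dataclass

@dataclass
class Professional:
    name: str
    categories: set[str]
    city: str
    rating: float          # average review score, 0-5
    available: bool = True

def match_professionals(category: str, city: str,
                        pros: list[Professional], limit: int = 10) -> list[Professional]:
    """Filter by category, location, and availability, then rank by rating."""
    candidates = [
        p for p in pros
        if category in p.categories and p.city == city and p.available
    ]
    return sorted(candidates, key=lambda p: p.rating, reverse=True)[:limit]

pros = [
    Professional("Ava", {"plumbing"}, "Austin", 4.8),
    Professional("Ben", {"plumbing", "repair"}, "Austin", 4.2),
    Professional("Cy", {"catering"}, "Austin", 4.9),
]
print([p.name for p in match_professionals("plumbing", "Austin", pros)])
# ['Ava', 'Ben']
```

A production matcher would typically push this filtering into the database and add distance and availability scoring, but the shape of the logic stays the same.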
### 3. Professional Profiles
Professionals create detailed profiles, including:
- Service descriptions and pricing.
- A portfolio of past work (e.g., photos or videos).
- Customer reviews and ratings.
- Certifications or licenses (if applicable).
### 4. Search and Filters
- A robust search engine allows customers to find professionals by category, location, price, availability, and ratings.
- Filters enhance the user experience by narrowing results to meet specific needs.
### 5. Quote and Booking System
- Professionals can send customized quotes in response to job requests.
- Customers can compare quotes, communicate with providers, and book services directly on the platform; a booking then moves through a small set of states, sketched below.
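Many marketplaces keep the quote-to-booking flow predictable by modeling it as a small explicit state machine. The sketch below is a hypothetical minimal version; the state names and allowed transitions are assumptions for illustration:

```python
from enum import Enum

class BookingState(Enum):
    REQUESTED = "requested"
    QUOTED = "quoted"
    BOOKED = "booked"
    COMPLETED = "completed"
    CANCELLED = "cancelled"

# Which states each state may legally move to.
ALLOWED = {
    BookingState.REQUESTED: {BookingState.QUOTED, BookingState.CANCELLED},
    BookingState.QUOTED: {BookingState.BOOKED, BookingState.CANCELLED},
    BookingState.BOOKED: {BookingState.COMPLETED, BookingState.CANCELLED},
}

def transition(current: BookingState, new: BookingState) -> BookingState:
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"Cannot go from {current.value} to {new.value}")
    return new

state = transition(BookingState.REQUESTED, BookingState.QUOTED)
state = transition(state, BookingState.BOOKED)  # ok; BOOKED -> QUOTED would raise
```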
### 6. Payment Gateway
- Secure payment processing for bookings, with options for upfront deposits or full payments.
- Support for multiple payment methods (e.g., credit cards, PayPal).
- An escrow-style system that holds funds until services are completed satisfactorily (see the sketch below).
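One common way to approximate escrow with Stripe is a PaymentIntent created with manual capture: the card is authorized at booking time, and the funds are captured only after the customer confirms the job is complete. A minimal sketch, assuming the official `stripe` Python library and test-mode keys (identifiers are placeholders):

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder test-mode secret key

# 1. At booking: authorize the charge without moving money yet.
intent = stripe.PaymentIntent.create(
    amount=15_000,            # $150.00, in cents
    currency="usd",
    payment_method_types=["card"],
    capture_method="manual",  # hold an authorization instead of charging
)

# (The customer confirms the PaymentIntent with their card details
# client-side; once its status is "requires_capture", funds are held.)

# 2. After the service is confirmed complete: capture the held funds.
stripe.PaymentIntent.capture(intent.id)

# Or release the hold if the booking falls through:
# stripe.PaymentIntent.cancel(intent.id)
```

Note that card authorizations held this way expire after roughly a week, so long-running jobs need a different design (for example, charging upfront and paying the professional out later via transfers).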
### 7. Reviews and Ratings
- Customers can leave feedback and rate professionals after service completion.
- Professionals can respond to reviews, fostering trust and transparency.
### 8. Messaging System
- In-app chat or messaging for seamless communication between customers and professionals.
- Notifications for new messages, job requests, or booking updates.
### 9. Admin Dashboard
A backend panel for platform administrators to:
- Manage user accounts and verify professional credentials.
- Monitor transactions and resolve disputes.
- Analyze platform performance with analytics tools.
## Technology Stack for a Thumbtack Clone
### Frontend
- **React.js or Vue.js:** for a dynamic, responsive user interface.
- **Tailwind CSS:** for modern, customizable styling.
- **React Native or Flutter:** for cross-platform mobile app development.
### Backend
- **Node.js with Express.js:** for scalable server-side development.
- **Python with Django or Flask:** for rapid development and robust APIs.
- **Ruby on Rails:** for quick prototyping and deployment.
### Database
- **PostgreSQL or MySQL:** for relational data storage (e.g., user profiles, job requests).
- **MongoDB:** for handling unstructured data like reviews or portfolios.
### APIs and Integrations
- **Stripe or PayPal:** for secure payment processing.
- **Google Maps API:** for location-based search and mapping.
- **Twilio or SendGrid:** for SMS/email notifications and messaging.
- **AWS S3:** for storing images, videos, or documents.
### Cloud and DevOps
- **AWS, Google Cloud, or Heroku:** for hosting and scalability.
- **Docker and Kubernetes:** for containerization and orchestration.
- **GitHub Actions or Jenkins:** for CI/CD pipelines.
## Development Approach
### 1. Market Research
- Analyze competitors like Thumbtack, TaskRabbit, or Angi to identify gaps and opportunities.
- Understand target audience needs (e.g., specific service categories or pricing preferences).
### 2. MVP Development
Build a Minimum Viable Product (MVP) with core features:
- User registration and profiles.
- Service request and quote system.
- Basic search and payment functionality.

Launch the MVP to gather user feedback and iterate.
### 3. UI/UX Design
- Create intuitive, user-friendly interfaces with tools like Figma or Adobe XD.
- Ensure mobile responsiveness and accessibility for diverse users.
## Challenges and Solutions
- **User acquisition:** Build trust through verified profiles, transparent pricing, and quality assurance. Partner with local businesses or influencers to expand reach.
- **Matching accuracy:** Use AI-driven algorithms to improve service-provider matching based on skills, location, and customer preferences.
- **Scalability:** Design the backend to handle high traffic and large datasets, leveraging cloud infrastructure for elasticity.
## Conclusion
A **[Thumbtack clone](http://omninos.in/thumbtack-clone-app-script-development.php)** offers a lucrative opportunity to create a service marketplace that bridges customers and professionals. By focusing on user experience, robust features, and a scalable technology stack, developers can build a platform that thrives in the gig economy. Start with an MVP, iterate based on feedback, and implement effective monetization strategies to ensure long-term success.
|
Patipon/sweet-sapbert-singlelabel
|
Patipon
| 2025-06-19T10:40:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T10:40:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct
|
nis12ram
| 2025-06-19T10:39:14Z | 170 | 0 |
transformers
|
[
"transformers",
"safetensors",
"nemotron",
"text-generation",
"conversational",
"hi",
"en",
"base_model:nvidia/Nemotron-4-Mini-Hindi-4B-Instruct",
"base_model:finetune:nvidia/Nemotron-4-Mini-Hindi-4B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-22T04:23:00Z |
---
library_name: transformers
license: apache-2.0
language:
- hi
- en
base_model:
- nvidia/Nemotron-4-Mini-Hindi-4B-Instruct
---
## Model card for Nemotron-4-Mini-Hindi-4B-Instruct
<!-- Provide a quick summary of what the model is/does. -->
This model is functionally identical to [Nemotron-4-Mini-Hindi-4B-Instruct](https://huggingface.co/nvidia/Nemotron-4-Mini-Hindi-4B-Instruct); the only difference is that its weights are provided in the `.safetensors` format.
**For more details, refer to the original [Nemotron-4-Mini-Hindi-4B-Instruct](https://huggingface.co/nvidia/Nemotron-4-Mini-Hindi-4B-Instruct) model card.**
|
Patipon/matonto-sapbert-singlelabel
|
Patipon
| 2025-06-19T10:38:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T10:37:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Videos-jobz-hunting-sajal-malik-17k/18.wATCH.jobz.hunting.sajal.malik.viral.video.original.free
|
Videos-jobz-hunting-sajal-malik-17k
| 2025-06-19T10:37:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:37:46Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
New-tutorial-guru-salsa-18-Viral-Videos/FULL.VIDEO.guru.salsa.Viral.Video.Tutorial.Official
|
New-tutorial-guru-salsa-18-Viral-Videos
| 2025-06-19T10:37:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:36:39Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
Patipon/obi-sapbert-singlelabel
|
Patipon
| 2025-06-19T10:37:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T10:36:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaarsen/splade-distilbert-base-uncased-quora-duplicates
|
tomaarsen
| 2025-06-19T10:36:10Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"distilbert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:99000",
"loss:SpladeLoss",
"loss:SparseMultipleNegativesRankingLoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/quora-duplicates",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:1705.00652",
"arxiv:2004.05665",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-19T10:36:01Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:99000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: distilbert/distilbert-base-uncased
widget:
- text: How do I know if a girl likes me at school?
- text: What are some five star hotel in Jaipur?
- text: Is it normal to fantasize your wife having sex with another man?
- text: What is the Sahara, and how do the average temperatures there compare to the
ones in the Simpson Desert?
- text: What are Hillary Clinton's most recognized accomplishments while Secretary
of State?
datasets:
- sentence-transformers/quora-duplicates
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- dot_mcc
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- euclidean_mcc
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- manhattan_mcc
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
- max_mcc
- active_dims
- sparsity_ratio
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 29.19330199735101
energy_consumed: 0.07510458396754072
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.306
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: splade-distilbert-base-uncased trained on Quora Duplicates Questions
results:
- task:
type: sparse-binary-classification
name: Sparse Binary Classification
dataset:
name: quora duplicates dev
type: quora_duplicates_dev
metrics:
- type: cosine_accuracy
value: 0.759
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.8012633323669434
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.6741573033707865
name: Cosine F1
- type: cosine_f1_threshold
value: 0.542455792427063
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.528169014084507
name: Cosine Precision
- type: cosine_recall
value: 0.9316770186335404
name: Cosine Recall
- type: cosine_ap
value: 0.6875984052094628
name: Cosine Ap
- type: cosine_mcc
value: 0.5059561809366392
name: Cosine Mcc
- type: dot_accuracy
value: 0.754
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 47.276466369628906
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.6759581881533101
name: Dot F1
- type: dot_f1_threshold
value: 40.955284118652344
name: Dot F1 Threshold
- type: dot_precision
value: 0.5398886827458256
name: Dot Precision
- type: dot_recall
value: 0.9037267080745341
name: Dot Recall
- type: dot_ap
value: 0.6070585464263578
name: Dot Ap
- type: dot_mcc
value: 0.5042382773971489
name: Dot Mcc
- type: euclidean_accuracy
value: 0.677
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: -14.295218467712402
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.48599545798637395
name: Euclidean F1
- type: euclidean_f1_threshold
value: -0.5385364294052124
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.3213213213213213
name: Euclidean Precision
- type: euclidean_recall
value: 0.9968944099378882
name: Euclidean Recall
- type: euclidean_ap
value: 0.20430811061248494
name: Euclidean Ap
- type: euclidean_mcc
value: -0.04590966956831287
name: Euclidean Mcc
- type: manhattan_accuracy
value: 0.677
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: -163.6865234375
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.48599545798637395
name: Manhattan F1
- type: manhattan_f1_threshold
value: -2.7509355545043945
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.3213213213213213
name: Manhattan Precision
- type: manhattan_recall
value: 0.9968944099378882
name: Manhattan Recall
- type: manhattan_ap
value: 0.20563864564607998
name: Manhattan Ap
- type: manhattan_mcc
value: -0.04590966956831287
name: Manhattan Mcc
- type: max_accuracy
value: 0.759
name: Max Accuracy
- type: max_accuracy_threshold
value: 47.276466369628906
name: Max Accuracy Threshold
- type: max_f1
value: 0.6759581881533101
name: Max F1
- type: max_f1_threshold
value: 40.955284118652344
name: Max F1 Threshold
- type: max_precision
value: 0.5398886827458256
name: Max Precision
- type: max_recall
value: 0.9968944099378882
name: Max Recall
- type: max_ap
value: 0.6875984052094628
name: Max Ap
- type: max_mcc
value: 0.5059561809366392
name: Max Mcc
- type: active_dims
value: 83.36341094970703
name: Active Dims
- type: sparsity_ratio
value: 0.9972687434981421
name: Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.24
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.44
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.56
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.24
name: Dot Precision@1
- type: dot_precision@3
value: 0.14666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.11200000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.07400000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.24
name: Dot Recall@1
- type: dot_recall@3
value: 0.44
name: Dot Recall@3
- type: dot_recall@5
value: 0.56
name: Dot Recall@5
- type: dot_recall@10
value: 0.74
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.46883808093835555
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3849920634920634
name: Dot Mrr@10
- type: dot_map@100
value: 0.39450094910993877
name: Dot Map@100
- type: query_active_dims
value: 84.87999725341797
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9972190551977781
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 104.35554504394531
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9965809729033503
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.24
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.44
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.24
name: Dot Precision@1
- type: dot_precision@3
value: 0.14666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.12000000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.07400000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.24
name: Dot Recall@1
- type: dot_recall@3
value: 0.44
name: Dot Recall@3
- type: dot_recall@5
value: 0.6
name: Dot Recall@5
- type: dot_recall@10
value: 0.74
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.46663046446554135
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3821587301587301
name: Dot Mrr@10
- type: dot_map@100
value: 0.39141822290426725
name: Dot Map@100
- type: query_active_dims
value: 94.9000015258789
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9968907672653863
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 115.97699737548828
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9962002163234556
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.18
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.44
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.52
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.58
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.18
name: Dot Precision@1
- type: dot_precision@3
value: 0.14666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.10400000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.06000000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.17
name: Dot Recall@1
- type: dot_recall@3
value: 0.41
name: Dot Recall@3
- type: dot_recall@5
value: 0.48
name: Dot Recall@5
- type: dot_recall@10
value: 0.55
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3711173352982992
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.32435714285714284
name: Dot Mrr@10
- type: dot_map@100
value: 0.32104591506684527
name: Dot Map@100
- type: query_active_dims
value: 76.81999969482422
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9974831269348396
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 139.53028869628906
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9954285338871539
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.18
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.46
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.5
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.64
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.18
name: Dot Precision@1
- type: dot_precision@3
value: 0.1533333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.10000000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.066
name: Dot Precision@10
- type: dot_recall@1
value: 0.17
name: Dot Recall@1
- type: dot_recall@3
value: 0.43
name: Dot Recall@3
- type: dot_recall@5
value: 0.46
name: Dot Recall@5
- type: dot_recall@10
value: 0.61
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.39277722565932277
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.33549999999999996
name: Dot Mrr@10
- type: dot_map@100
value: 0.3266050492721919
name: Dot Map@100
- type: query_active_dims
value: 85.72000122070312
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9971915339354989
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 156.10665893554688
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.994885438079564
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.28
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.42
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.46
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.52
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.28
name: Dot Precision@1
- type: dot_precision@3
value: 0.24
name: Dot Precision@3
- type: dot_precision@5
value: 0.2
name: Dot Precision@5
- type: dot_precision@10
value: 0.16
name: Dot Precision@10
- type: dot_recall@1
value: 0.010055870806195594
name: Dot Recall@1
- type: dot_recall@3
value: 0.03299225609257712
name: Dot Recall@3
- type: dot_recall@5
value: 0.043240249260663235
name: Dot Recall@5
- type: dot_recall@10
value: 0.0575687615260951
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.1901013298743406
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3606904761904762
name: Dot Mrr@10
- type: dot_map@100
value: 0.06747201795263198
name: Dot Map@100
- type: query_active_dims
value: 92.18000030517578
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9969798833528217
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 196.1699981689453
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.993572832770823
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.42
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.48
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.52
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.24666666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.21600000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.174
name: Dot Precision@10
- type: dot_recall@1
value: 0.020055870806195596
name: Dot Recall@1
- type: dot_recall@3
value: 0.03516880470242261
name: Dot Recall@3
- type: dot_recall@5
value: 0.07436160102717629
name: Dot Recall@5
- type: dot_recall@10
value: 0.08924749441772001
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.2174721143005973
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3753888888888888
name: Dot Mrr@10
- type: dot_map@100
value: 0.08327101018955965
name: Dot Map@100
- type: query_active_dims
value: 101.91999816894531
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9966607693411655
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 217.09109497070312
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9928873895887982
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: dot_accuracy@1
value: 0.9
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.96
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9
name: Dot Precision@1
- type: dot_precision@3
value: 0.38666666666666655
name: Dot Precision@3
- type: dot_precision@5
value: 0.24799999999999997
name: Dot Precision@5
- type: dot_precision@10
value: 0.13599999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.804
name: Dot Recall@1
- type: dot_recall@3
value: 0.9053333333333333
name: Dot Recall@3
- type: dot_recall@5
value: 0.9326666666666666
name: Dot Recall@5
- type: dot_recall@10
value: 0.99
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.940813094731721
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9366666666666665
name: Dot Mrr@10
- type: dot_map@100
value: 0.9174399766899767
name: Dot Map@100
- type: query_active_dims
value: 80.30000305175781
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9973691107053353
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 83.33353424072266
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9972697223563096
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.9
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.96
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 1.0
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9
name: Dot Precision@1
- type: dot_precision@3
value: 0.38666666666666655
name: Dot Precision@3
- type: dot_precision@5
value: 0.25599999999999995
name: Dot Precision@5
- type: dot_precision@10
value: 0.13599999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.804
name: Dot Recall@1
- type: dot_recall@3
value: 0.9086666666666667
name: Dot Recall@3
- type: dot_recall@5
value: 0.97
name: Dot Recall@5
- type: dot_recall@10
value: 0.99
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9434418368741703
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.94
name: Dot Mrr@10
- type: dot_map@100
value: 0.9210437710437711
name: Dot Map@100
- type: query_active_dims
value: 87.4000015258789
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9971364916609043
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 90.32620239257812
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.997040619802353
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.4
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.565
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.625
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.71
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4
name: Dot Precision@1
- type: dot_precision@3
value: 0.22999999999999998
name: Dot Precision@3
- type: dot_precision@5
value: 0.166
name: Dot Precision@5
- type: dot_precision@10
value: 0.10750000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.30601396770154893
name: Dot Recall@1
- type: dot_recall@3
value: 0.4470813973564776
name: Dot Recall@3
- type: dot_recall@5
value: 0.5039767289818324
name: Dot Recall@5
- type: dot_recall@10
value: 0.5843921903815238
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4927174602106791
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5016765873015872
name: Dot Mrr@10
- type: dot_map@100
value: 0.4251147147048482
name: Dot Map@100
- type: query_active_dims
value: 83.54500007629395
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9972627940476937
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 123.28323480743562
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9959608402199255
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.4021664050235479
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.5765463108320251
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6598116169544741
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7337833594976453
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4021664050235479
name: Dot Precision@1
- type: dot_precision@3
value: 0.25656724228152794
name: Dot Precision@3
- type: dot_precision@5
value: 0.20182103610675042
name: Dot Precision@5
- type: dot_precision@10
value: 0.14312715855572997
name: Dot Precision@10
- type: dot_recall@1
value: 0.23408727816164185
name: Dot Recall@1
- type: dot_recall@3
value: 0.3568914414902249
name: Dot Recall@3
- type: dot_recall@5
value: 0.4275402562349963
name: Dot Recall@5
- type: dot_recall@10
value: 0.5040607961406979
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.45167521970189345
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5088102589020956
name: Dot Mrr@10
- type: dot_map@100
value: 0.37853024172675503
name: Dot Map@100
- type: query_active_dims
value: 105.61787400444042
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9965396149005816
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 163.73635361872905
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9946354644643625
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: dot_accuracy@1
value: 0.14
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.32
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.42
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.52
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.14
name: Dot Precision@1
- type: dot_precision@3
value: 0.11333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.09200000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.064
name: Dot Precision@10
- type: dot_recall@1
value: 0.07166666666666666
name: Dot Recall@1
- type: dot_recall@3
value: 0.14833333333333332
name: Dot Recall@3
- type: dot_recall@5
value: 0.19
name: Dot Recall@5
- type: dot_recall@10
value: 0.25
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.1928494772790168
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.2526666666666666
name: Dot Mrr@10
- type: dot_map@100
value: 0.14153388517603807
name: Dot Map@100
- type: query_active_dims
value: 102.33999633789062
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9966470088350079
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 217.80722045898438
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9928639269884351
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: dot_accuracy@1
value: 0.56
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.78
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.82
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.56
name: Dot Precision@1
- type: dot_precision@3
value: 0.5133333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.488
name: Dot Precision@5
- type: dot_precision@10
value: 0.436
name: Dot Precision@10
- type: dot_recall@1
value: 0.042268334576683116
name: Dot Recall@1
- type: dot_recall@3
value: 0.1179684188048045
name: Dot Recall@3
- type: dot_recall@5
value: 0.17514937366700764
name: Dot Recall@5
- type: dot_recall@10
value: 0.2739338942789917
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5024388532207343
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6801666666666667
name: Dot Mrr@10
- type: dot_map@100
value: 0.38220472918007364
name: Dot Map@100
- type: query_active_dims
value: 79.80000305175781
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9973854923317031
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 146.68072509765625
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.995194262332165
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: dot_accuracy@1
value: 0.64
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.72
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.82
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.64
name: Dot Precision@1
- type: dot_precision@3
value: 0.2533333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.176
name: Dot Precision@5
- type: dot_precision@10
value: 0.09399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.6066666666666667
name: Dot Recall@1
- type: dot_recall@3
value: 0.7033333333333333
name: Dot Recall@3
- type: dot_recall@5
value: 0.8033333333333332
name: Dot Recall@5
- type: dot_recall@10
value: 0.8633333333333333
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7368677901493659
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7063809523809523
name: Dot Mrr@10
- type: dot_map@100
value: 0.697561348294107
name: Dot Map@100
- type: query_active_dims
value: 104.22000122070312
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9965854137598879
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 228.74359130859375
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9925056159062776
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: dot_accuracy@1
value: 0.2
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.28
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.4
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.46
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.2
name: Dot Precision@1
- type: dot_precision@3
value: 0.12666666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.10400000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.07
name: Dot Precision@10
- type: dot_recall@1
value: 0.09469047619047619
name: Dot Recall@1
- type: dot_recall@3
value: 0.15076984126984128
name: Dot Recall@3
- type: dot_recall@5
value: 0.25362698412698415
name: Dot Recall@5
- type: dot_recall@10
value: 0.3211825396825397
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.23331922670891586
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.27135714285714285
name: Dot Mrr@10
- type: dot_map@100
value: 0.18392178053045694
name: Dot Map@100
- type: query_active_dims
value: 89.73999786376953
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9970598257694853
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 131.34085083007812
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9956968465097282
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: dot_accuracy@1
value: 0.8
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.92
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.94
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.8
name: Dot Precision@1
- type: dot_precision@3
value: 0.3933333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.264
name: Dot Precision@5
- type: dot_precision@10
value: 0.14200000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.4
name: Dot Recall@1
- type: dot_recall@3
value: 0.59
name: Dot Recall@3
- type: dot_recall@5
value: 0.66
name: Dot Recall@5
- type: dot_recall@10
value: 0.71
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6848748058213975
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8541666666666665
name: Dot Mrr@10
- type: dot_map@100
value: 0.6060670580971632
name: Dot Map@100
- type: query_active_dims
value: 111.23999786376953
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9963554158356671
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 166.19056701660156
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9945550564505407
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: dot_accuracy@1
value: 0.34
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.56
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.66
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.78
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.34
name: Dot Precision@1
- type: dot_precision@3
value: 0.26
name: Dot Precision@3
- type: dot_precision@5
value: 0.2
name: Dot Precision@5
- type: dot_precision@10
value: 0.14200000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.07166666666666668
name: Dot Recall@1
- type: dot_recall@3
value: 0.16066666666666665
name: Dot Recall@3
- type: dot_recall@5
value: 0.20566666666666664
name: Dot Recall@5
- type: dot_recall@10
value: 0.2916666666666667
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.2850130343263586
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.47407142857142853
name: Dot Mrr@10
- type: dot_map@100
value: 0.20070977606957205
name: Dot Map@100
- type: query_active_dims
value: 113.77999877929688
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9962721971437226
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 226.21810913085938
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9925883589171464
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: dot_accuracy@1
value: 0.08
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.32
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.38
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.44
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.08
name: Dot Precision@1
- type: dot_precision@3
value: 0.10666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.07600000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.044000000000000004
name: Dot Precision@10
- type: dot_recall@1
value: 0.08
name: Dot Recall@1
- type: dot_recall@3
value: 0.32
name: Dot Recall@3
- type: dot_recall@5
value: 0.38
name: Dot Recall@5
- type: dot_recall@10
value: 0.44
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.26512761684329256
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.20850000000000002
name: Dot Mrr@10
- type: dot_map@100
value: 0.2135415485154769
name: Dot Map@100
- type: query_active_dims
value: 202.02000427246094
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9933811675423477
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 176.61155700683594
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.994213630921734
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: dot_accuracy@1
value: 0.44
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.58
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.78
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.44
name: Dot Precision@1
- type: dot_precision@3
value: 0.19999999999999996
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08599999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.415
name: Dot Recall@1
- type: dot_recall@3
value: 0.55
name: Dot Recall@3
- type: dot_recall@5
value: 0.665
name: Dot Recall@5
- type: dot_recall@10
value: 0.76
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5848481832222858
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5400476190476191
name: Dot Mrr@10
- type: dot_map@100
value: 0.5247408283859897
name: Dot Map@100
- type: query_active_dims
value: 102.4800033569336
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9966424217496581
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 216.64508056640625
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9929020024714499
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: dot_accuracy@1
value: 0.40816326530612246
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7551020408163265
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8775510204081632
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9591836734693877
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.40816326530612246
name: Dot Precision@1
- type: dot_precision@3
value: 0.43537414965986393
name: Dot Precision@3
- type: dot_precision@5
value: 0.38367346938775504
name: Dot Precision@5
- type: dot_precision@10
value: 0.3326530612244898
name: Dot Precision@10
- type: dot_recall@1
value: 0.027119934527989286
name: Dot Recall@1
- type: dot_recall@3
value: 0.08468167459585536
name: Dot Recall@3
- type: dot_recall@5
value: 0.12088537223378343
name: Dot Recall@5
- type: dot_recall@10
value: 0.21342642144981977
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.36611722725361623
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5941286038224813
name: Dot Mrr@10
- type: dot_map@100
value: 0.24827413478914825
name: Dot Map@100
- type: query_active_dims
value: 97.30612182617188
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9968119349378752
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 147.016357421875
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9951832659255005
name: Corpus Sparsity Ratio
---
# splade-distilbert-base-uncased trained on Quora Duplicates Questions
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: DistilBertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
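For reference, the `SpladePooling` settings above (`pooling_strategy='max'`, `activation_function='relu'`) correspond to the standard SPLADE term weighting. As a summary of ours (not part of the generated card), each vocabulary weight is obtained by max-pooling log-saturated MLM logits over the input sequence:

$$
w_j = \max_{i} \log\bigl(1 + \mathrm{ReLU}(z_{ij})\bigr)
$$

where $z_{ij}$ is the MLM logit for vocabulary entry $j$ at token position $i$, yielding the sparse 30522-dimensional vectors described above.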
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates")
# Run inference
sentences = [
'What accomplishments did Hillary Clinton achieve during her time as Secretary of State?',
"What are Hillary Clinton's most recognized accomplishments while Secretary of State?",
'What are Hillary Clinton’s qualifications to be President?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 83.9635, 60.9402, 26.0887],
# [ 60.9402, 85.6474, 33.3293],
# [ 26.0887, 33.3293, 104.0980]])
```
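As a follow-up (not part of the original card), the active dimensions of a sparse embedding can be mapped back to vocabulary tokens. The sketch below is ours and assumes the embeddings are torch tensors over the 30522-entry DistilBERT vocabulary:

```python
import torch
from transformers import AutoTokenizer
from sentence_transformers import SparseEncoder

model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates")
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

# Encode a single query; densify in case the tensor comes back sparse
embedding = model.encode(["What is the capital of France?"])[0]
dense = embedding.to_dense() if embedding.is_sparse else embedding

# Show the ten highest-weighted vocabulary dimensions for this query
weights, indices = torch.topk(dense, k=10)
for weight, index in zip(weights.tolist(), indices.tolist()):
    print(f"{tokenizer.convert_ids_to_tokens(index):>12}  {weight:.2f}")
```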
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Binary Classification
* Dataset: `quora_duplicates_dev`
* Evaluated with [<code>SparseBinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseBinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.759 |
| cosine_accuracy_threshold | 0.8013 |
| cosine_f1 | 0.6742 |
| cosine_f1_threshold | 0.5425 |
| cosine_precision | 0.5282 |
| cosine_recall | 0.9317 |
| cosine_ap | 0.6876 |
| cosine_mcc | 0.506 |
| dot_accuracy | 0.754 |
| dot_accuracy_threshold | 47.2765 |
| dot_f1 | 0.676 |
| dot_f1_threshold | 40.9553 |
| dot_precision | 0.5399 |
| dot_recall | 0.9037 |
| dot_ap | 0.6071 |
| dot_mcc | 0.5042 |
| euclidean_accuracy | 0.677 |
| euclidean_accuracy_threshold | -14.2952 |
| euclidean_f1 | 0.486 |
| euclidean_f1_threshold | -0.5385 |
| euclidean_precision | 0.3213 |
| euclidean_recall | 0.9969 |
| euclidean_ap | 0.2043 |
| euclidean_mcc | -0.0459 |
| manhattan_accuracy | 0.677 |
| manhattan_accuracy_threshold | -163.6865 |
| manhattan_f1 | 0.486 |
| manhattan_f1_threshold | -2.7509 |
| manhattan_precision | 0.3213 |
| manhattan_recall | 0.9969 |
| manhattan_ap | 0.2056 |
| manhattan_mcc | -0.0459 |
| max_accuracy | 0.759 |
| max_accuracy_threshold | 47.2765 |
| max_f1 | 0.676 |
| max_f1_threshold | 40.9553 |
| max_precision | 0.5399 |
| max_recall | 0.9969 |
| **max_ap** | **0.6876** |
| max_mcc | 0.506 |
| active_dims | 83.3634 |
| sparsity_ratio | 0.9973 |
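To make the thresholds concrete, here is a hedged sketch (ours, not from the card) that applies the `dot_f1_threshold` above to decide whether a question pair is a duplicate:

```python
from sentence_transformers import SparseEncoder

model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates")
pair = ["How do I learn Python quickly?", "What is the fastest way to learn Python?"]
embeddings = model.encode(pair)

# Dot-product similarity between the two questions
score = model.similarity(embeddings, embeddings)[0, 1].item()
is_duplicate = score >= 40.9553  # dot_f1_threshold from the table above
print(f"score={score:.2f}, duplicate={is_duplicate}")
```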
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO`, `NanoNQ`, `NanoNFCorpus`, `NanoQuoraRetrieval`, `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoMSMARCO | NanoNQ | NanoNFCorpus | NanoQuoraRetrieval | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:----------------------|:------------|:-----------|:-------------|:-------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:------------|:------------|:---------------|
| dot_accuracy@1 | 0.24 | 0.18 | 0.3 | 0.9 | 0.14 | 0.56 | 0.64 | 0.2 | 0.8 | 0.34 | 0.08 | 0.44 | 0.4082 |
| dot_accuracy@3 | 0.44 | 0.46 | 0.42 | 0.96 | 0.32 | 0.78 | 0.72 | 0.28 | 0.9 | 0.56 | 0.32 | 0.58 | 0.7551 |
| dot_accuracy@5 | 0.6 | 0.5 | 0.48 | 1.0 | 0.42 | 0.82 | 0.82 | 0.4 | 0.92 | 0.66 | 0.38 | 0.7 | 0.8776 |
| dot_accuracy@10 | 0.74 | 0.64 | 0.52 | 1.0 | 0.52 | 0.88 | 0.88 | 0.46 | 0.94 | 0.78 | 0.44 | 0.78 | 0.9592 |
| dot_precision@1 | 0.24 | 0.18 | 0.3 | 0.9 | 0.14 | 0.56 | 0.64 | 0.2 | 0.8 | 0.34 | 0.08 | 0.44 | 0.4082 |
| dot_precision@3 | 0.1467 | 0.1533 | 0.2467 | 0.3867 | 0.1133 | 0.5133 | 0.2533 | 0.1267 | 0.3933 | 0.26 | 0.1067 | 0.2 | 0.4354 |
| dot_precision@5 | 0.12 | 0.1 | 0.216 | 0.256 | 0.092 | 0.488 | 0.176 | 0.104 | 0.264 | 0.2 | 0.076 | 0.148 | 0.3837 |
| dot_precision@10 | 0.074 | 0.066 | 0.174 | 0.136 | 0.064 | 0.436 | 0.094 | 0.07 | 0.142 | 0.142 | 0.044 | 0.086 | 0.3327 |
| dot_recall@1 | 0.24 | 0.17 | 0.0201 | 0.804 | 0.0717 | 0.0423 | 0.6067 | 0.0947 | 0.4 | 0.0717 | 0.08 | 0.415 | 0.0271 |
| dot_recall@3 | 0.44 | 0.43 | 0.0352 | 0.9087 | 0.1483 | 0.118 | 0.7033 | 0.1508 | 0.59 | 0.1607 | 0.32 | 0.55 | 0.0847 |
| dot_recall@5 | 0.6 | 0.46 | 0.0744 | 0.97 | 0.19 | 0.1751 | 0.8033 | 0.2536 | 0.66 | 0.2057 | 0.38 | 0.665 | 0.1209 |
| dot_recall@10 | 0.74 | 0.61 | 0.0892 | 0.99 | 0.25 | 0.2739 | 0.8633 | 0.3212 | 0.71 | 0.2917 | 0.44 | 0.76 | 0.2134 |
| **dot_ndcg@10** | **0.4666** | **0.3928** | **0.2175** | **0.9434** | **0.1928** | **0.5024** | **0.7369** | **0.2333** | **0.6849** | **0.285** | **0.2651** | **0.5848** | **0.3661** |
| dot_mrr@10 | 0.3822 | 0.3355 | 0.3754 | 0.94 | 0.2527 | 0.6802 | 0.7064 | 0.2714 | 0.8542 | 0.4741 | 0.2085 | 0.54 | 0.5941 |
| dot_map@100 | 0.3914 | 0.3266 | 0.0833 | 0.921 | 0.1415 | 0.3822 | 0.6976 | 0.1839 | 0.6061 | 0.2007 | 0.2135 | 0.5247 | 0.2483 |
| query_active_dims | 94.9 | 85.72 | 101.92 | 87.4 | 102.34 | 79.8 | 104.22 | 89.74 | 111.24 | 113.78 | 202.02 | 102.48 | 97.3061 |
| query_sparsity_ratio | 0.9969 | 0.9972 | 0.9967 | 0.9971 | 0.9966 | 0.9974 | 0.9966 | 0.9971 | 0.9964 | 0.9963 | 0.9934 | 0.9966 | 0.9968 |
| corpus_active_dims | 115.977 | 156.1067 | 217.0911 | 90.3262 | 217.8072 | 146.6807 | 228.7436 | 131.3409 | 166.1906 | 226.2181 | 176.6116 | 216.6451 | 147.0164 |
| corpus_sparsity_ratio | 0.9962 | 0.9949 | 0.9929 | 0.997 | 0.9929 | 0.9952 | 0.9925 | 0.9957 | 0.9946 | 0.9926 | 0.9942 | 0.9929 | 0.9952 |
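The `sparsity_ratio` rows follow directly from the active-dimension counts and the 30522-dimensional output space; a one-line sanity check (ours):

```python
vocab_size = 30522                 # output dimensionality of the model
query_active_dims = 94.9           # NanoMSMARCO column in the table above
print(1 - query_active_dims / vocab_size)  # ~0.9969, matching query_sparsity_ratio
```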
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nq",
"nfcorpus",
"quoraretrieval"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4 |
| dot_accuracy@3 | 0.565 |
| dot_accuracy@5 | 0.625 |
| dot_accuracy@10 | 0.71 |
| dot_precision@1 | 0.4 |
| dot_precision@3 | 0.23 |
| dot_precision@5 | 0.166 |
| dot_precision@10 | 0.1075 |
| dot_recall@1 | 0.306 |
| dot_recall@3 | 0.4471 |
| dot_recall@5 | 0.504 |
| dot_recall@10 | 0.5844 |
| **dot_ndcg@10** | **0.4927** |
| dot_mrr@10 | 0.5017 |
| dot_map@100 | 0.4251 |
| query_active_dims | 83.545 |
| query_sparsity_ratio | 0.9973 |
| corpus_active_dims | 123.2832 |
| corpus_sparsity_ratio | 0.996 |
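This evaluation can be re-run with the evaluator named above; a minimal sketch (ours), assuming the result dictionary exposes the evaluator's primary metric as in the Sentence Transformers docs:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates")
evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nq", "nfcorpus", "quoraretrieval"])
results = evaluator(model)
print(results[evaluator.primary_metric])  # mean dot_ndcg@10 over the four datasets
```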
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"climatefever",
"dbpedia",
"fever",
"fiqa2018",
"hotpotqa",
"msmarco",
"nfcorpus",
"nq",
"quoraretrieval",
"scidocs",
"arguana",
"scifact",
"touche2020"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4022 |
| dot_accuracy@3 | 0.5765 |
| dot_accuracy@5 | 0.6598 |
| dot_accuracy@10 | 0.7338 |
| dot_precision@1 | 0.4022 |
| dot_precision@3 | 0.2566 |
| dot_precision@5 | 0.2018 |
| dot_precision@10 | 0.1431 |
| dot_recall@1 | 0.2341 |
| dot_recall@3 | 0.3569 |
| dot_recall@5 | 0.4275 |
| dot_recall@10 | 0.5041 |
| **dot_ndcg@10** | **0.4517** |
| dot_mrr@10 | 0.5088 |
| dot_map@100 | 0.3785 |
| query_active_dims | 105.6179 |
| query_sparsity_ratio | 0.9965 |
| corpus_active_dims | 163.7364 |
| corpus_sparsity_ratio | 0.9946 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### quora-duplicates
* Dataset: [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 99,000 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.1 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.83 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.21 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------|:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are the best GMAT coaching institutes in Delhi NCR?</code> | <code>Which are the best GMAT coaching institutes in Delhi/NCR?</code> | <code>What are the best GMAT coaching institutes in Delhi-Noida Area?</code> |
| <code>Is a third world war coming?</code> | <code>Is World War 3 more imminent than expected?</code> | <code>Since the UN is unable to control terrorism and groups like ISIS, al-Qaeda and countries that promote terrorism (even though it consumed those countries), can we assume that the world is heading towards World War III?</code> |
| <code>Should I build iOS or Android apps first?</code> | <code>Should people choose Android or iOS first to build their App?</code> | <code>How much more effort is it to build your app on both iOS and Android?</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 3e-05,
"lambda_query": 5e-05
}
```
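For readers reproducing the setup, here is a hedged reconstruction (ours) of the loss configuration, assuming the JSON keys above map one-to-one onto `SpladeLoss`'s keyword arguments:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import (
    SpladeLoss,
    SparseMultipleNegativesRankingLoss,
)

model = SparseEncoder("distilbert/distilbert-base-uncased")
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    lambda_corpus=3e-05,  # sparsity regularization on documents
    lambda_query=5e-05,   # sparsity regularization on queries
)
```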
### Evaluation Dataset
#### quora-duplicates
* Dataset: [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 1,000 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.05 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.14 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.56 tokens</li><li>max: 60 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------|:------------------------------------------------------------|:-----------------------------------------------------------------|
| <code>What happens if we use petrol in diesel vehicles?</code> | <code>Why can't we use petrol in diesel?</code> | <code>Why are diesel engines noisier than petrol engines?</code> |
| <code>Why is Saltwater taffy candy imported in Switzerland?</code> | <code>Why is Saltwater taffy candy imported in Laos?</code> | <code>Is salt a consumer product?</code> |
| <code>Which is your favourite film in 2016?</code> | <code>What movie is the best movie of 2016?</code> | <code>What will the best movie of 2017 be?</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 3e-05,
"lambda_query": 5e-05
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
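Tying the dataset, loss, and non-default hyperparameters together, a hedged end-to-end sketch (ours; it reuses `model` and `loss` from the loss sketch above and assumes the `SparseEncoderTrainer` API from the Sentence Transformers docs):

```python
from datasets import load_dataset
from sentence_transformers import SparseEncoderTrainer, SparseEncoderTrainingArguments

# The "triplet" subset carries the anchor/positive/negative columns used above
train_dataset = load_dataset("sentence-transformers/quora-duplicates", "triplet", split="train")

args = SparseEncoderTrainingArguments(
    output_dir="splade-distilbert-quora",
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
)
trainer = SparseEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```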
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | quora_duplicates_dev_max_ap | NanoMSMARCO_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 |
|:-------:|:--------:|:-------------:|:---------------:|:---------------------------:|:-----------------------:|:------------------:|:------------------------:|:------------------------------:|:-------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:|
| 0.0242 | 200 | 6.2275 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0485 | 400 | 0.4129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0727 | 600 | 0.3238 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0970 | 800 | 0.2795 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1212 | 1000 | 0.255 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1455 | 1200 | 0.2367 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1697 | 1400 | 0.25 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1939 | 1600 | 0.2742 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2 | 1650 | - | 0.1914 | 0.6442 | 0.3107 | 0.2820 | 0.1991 | 0.8711 | 0.4157 | - | - | - | - | - | - | - | - | - |
| 0.2182 | 1800 | 0.2102 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2424 | 2000 | 0.1797 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2667 | 2200 | 0.2021 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2909 | 2400 | 0.1734 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3152 | 2600 | 0.1849 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3394 | 2800 | 0.1871 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3636 | 3000 | 0.1685 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3879 | 3200 | 0.1512 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4 | 3300 | - | 0.1139 | 0.6637 | 0.4200 | 0.3431 | 0.1864 | 0.9222 | 0.4679 | - | - | - | - | - | - | - | - | - |
| 0.4121 | 3400 | 0.1165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4364 | 3600 | 0.1518 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4606 | 3800 | 0.1328 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4848 | 4000 | 0.1098 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5091 | 4200 | 0.1389 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5333 | 4400 | 0.1224 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5576 | 4600 | 0.09 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5818 | 4800 | 0.1162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6 | 4950 | - | 0.0784 | 0.6666 | 0.4404 | 0.3688 | 0.2239 | 0.9478 | 0.4952 | - | - | - | - | - | - | - | - | - |
| 0.6061 | 5000 | 0.1054 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6303 | 5200 | 0.0949 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6545 | 5400 | 0.1315 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6788 | 5600 | 0.1246 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7030 | 5800 | 0.1047 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7273 | 6000 | 0.0861 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7515 | 6200 | 0.103 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7758 | 6400 | 0.1062 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.8** | **6600** | **0.1275** | **0.0783** | **0.6856** | **0.4666** | **0.3928** | **0.2175** | **0.9434** | **0.5051** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| 0.8242 | 6800 | 0.1131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8485 | 7000 | 0.0651 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8727 | 7200 | 0.0657 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8970 | 7400 | 0.1065 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9212 | 7600 | 0.0691 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9455 | 7800 | 0.1136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9697 | 8000 | 0.0834 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9939 | 8200 | 0.0867 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0 | 8250 | - | 0.0720 | 0.6876 | 0.4688 | 0.3711 | 0.1901 | 0.9408 | 0.4927 | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | - | - | 0.4666 | 0.3928 | 0.2175 | 0.9434 | 0.4517 | 0.1928 | 0.5024 | 0.7369 | 0.2333 | 0.6849 | 0.2850 | 0.2651 | 0.5848 | 0.3661 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.075 kWh
- **Carbon Emitted**: 0.029 kg of CO2
- **Hours Used**: 0.306 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
dhadheechi/Reinforce-Pixelcopter-PLE-v0
|
dhadheechi
| 2025-06-19T10:33:28Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-04T15:10:19Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 55.20 +/- 52.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BatiRocky/dummy-model
|
BatiRocky
| 2025-06-19T10:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-19T10:23:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mimi1998/Qwen2.5-7B-Instruct-GRPO-Meme-LoRA-V3
|
mimi1998
| 2025-06-19T10:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3",
"base_model:finetune:mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T10:29:56Z |
---
base_model: mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mimi1998
- **License:** apache-2.0
- **Finetuned from model:** mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thesantatitan/gemma-svg-sft-merged
|
thesantatitan
| 2025-06-19T10:31:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma3",
"axolotl",
"generated_from_trainer",
"dataset:thesantatitan/pixelprose-sample-5k",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:gemma",
"region:us"
] | null | 2025-06-19T10:21:53Z |
---
library_name: peft
license: gemma
base_model: google/gemma-3-12b-it
tags:
- axolotl
- generated_from_trainer
datasets:
- thesantatitan/pixelprose-sample-5k
model-index:
- name: gemma-svg-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
base_model: google/gemma-3-12b-it
model_config:
attn_implementation: eager
overrides_of_model_kwargs:
attn_implementation: eager
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: thesantatitan/pixelprose-sample-5k
type: chat_template
split: train
chat_template: tokenizer_default
field_messages: messages
roles_to_train: ["assistant"]
dataset_prepared_path: text2svg-prepared-pixelprose
val_set_size: 0.05
output_dir: ./lora-out
sequence_len: 4096
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save: # required when adding new tokens to LLaMA/Mistral
- embed_tokens
- lm_head
wandb_project: svg-sft-gemma-12b-saved
wandb_entity:
wandb_watch:
wandb_run_id: sexyrun1
gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0001
bf16: auto
fp16: false
tf32: false
train_on_inputs: false
group_by_length: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_steps: 10
save_steps: 20
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
hub_strategy: every_save
hub_model_id: thesantatitan/gemma-svg-sft
```
</details><br>
# gemma-svg-sft
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) on the thesantatitan/pixelprose-sample-5k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7293 | 0.9832 | 33 | 0.7807 |
| 0.6371 | 1.9832 | 66 | 0.7512 |
| 0.6369 | 2.9832 | 99 | 0.7448 |
| 0.6108 | 3.9832 | 132 | 0.7442 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
nnilayy/deap-arousal-multi-classification-Kfold-5
|
nnilayy
| 2025-06-19T10:30:35Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T10:30:33Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
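For context, the mixin integration follows the generic pattern below (illustrative only; the actual architecture behind this checkpoint is not documented in the card):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    """Any nn.Module gains save_pretrained / from_pretrained / push_to_hub."""
    def __init__(self, hidden_size: int = 128, num_classes: int = 3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

# Loading only succeeds if this class definition matches the uploaded weights:
# model = MyModel.from_pretrained("nnilayy/deap-arousal-multi-classification-Kfold-5")
```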
|
prashantsaini/testing19062025-merged
|
prashantsaini
| 2025-06-19T10:30:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T10:24:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aazgr/SmolVLM-Base-vqav2
|
aazgr
| 2025-06-19T10:26:33Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-Base",
"base_model:adapter:HuggingFaceTB/SmolVLM-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T10:26:28Z |
---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-Base
tags:
- generated_from_trainer
model-index:
- name: SmolVLM-Base-vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM-Base-vqav2
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Base](https://huggingface.co/HuggingFaceTB/SmolVLM-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
kyutai/stt-1b-en_fr-mlx
|
kyutai
| 2025-06-19T10:26:04Z | 2 | 1 |
moshi
|
[
"moshi",
"safetensors",
"stt",
"audio",
"automatic-speech-recognition",
"en",
"fr",
"arxiv:2410.00037",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-06-17T10:38:15Z |
---
license: cc-by-4.0
language:
- en
- fr
library_name: moshi
tags:
- audio
- automatic-speech-recognition
---
# Model Card for Kyutai STT
See also the [project page](https://kyutai.org/next/stt)
and the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
This is a model for streaming speech-to-text (STT, also known as automatic speech recognition, ASR).
Unlike offline speech-to-text, where the model needs the entire audio to produce the transcript,
our model starts to output the transcript as soon as a few seconds of audio become available.
## Model Details
The model architecture is a Transformer that consumes audio tokenized by Mimi (see [the Moshi paper](https://arxiv.org/abs/2410.00037)) and outputs text tokens.
The frame rate is 12.5 Hz and each audio frame is represented by 32 audio tokens.
We release two models:
- `kyutai/stt-1b-en_fr`, an English and French model with ~1B parameters, a 0.5 second delay, and a [semantic VAD](https://kyutai.org/next/stt#semantic-vad).
- `kyutai/stt-2.6b-en`, an English-only model with ~2.6B parameters and a 2.5 second delay.
## Model Description
Kyutai STT is a decoder-only model for streaming speech-to-text.
It leverages the multistream architecture of [Moshi](https://moshi.chat/) to model the text stream based on the speech stream.
The text stream is shifted w.r.t. the audio stream to allow the model to predict text tokens based on the input audio.
* Developed by: Kyutai
* Model type: Streaming Speech-to-Text transcription.
* Language(s) (NLP): English and French for `kyutai/stt-1b-en_fr`, English for `kyutai/stt-2.6b-en`
* License: Model weights are licensed under CC-BY 4.0
* Repository: [GitHub](https://github.com/kyutai-labs/delayed-streams-modeling/)
## Uses
### Direct Use
The model can be used for streaming speech-to-text.
It is robust to noisy conditions and was found to perform well on audio as long as 2 hours with no additional changes.
The model produces transcripts with capitalization and punctuation.
The predicted text token timestamps can be recovered by subtracting the model's text stream offset (0.5 or 2.5 seconds) from the frame's offset.
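As a worked example (ours, using only the numbers stated above: the 12.5 Hz frame rate and the 0.5 s / 2.5 s text-stream delay):

```python
frame_rate_hz = 12.5        # from the model details above
text_offset_s = 0.5         # kyutai/stt-1b-en_fr; use 2.5 for kyutai/stt-2.6b-en
frame_index = 100           # hypothetical frame at which a token is emitted

timestamp_s = frame_index / frame_rate_hz - text_offset_s
print(timestamp_s)          # 7.5 -> the token corresponds to audio at 7.5 seconds
```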
## How to Get Started with the Model
See the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
## Training Details
### Training Data
Pretraining stage: For both `kyutai/stt-2.6b-en` and `kyutai/stt-1b-en_fr`, we use an audio collection of 2.5 million hours of publicly available audio content.
For this dataset, we obtained synthetic transcripts by running [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped).
For `kyutai/stt-2.6b-en`:
- Finetuning stage: We then finetune the model on a collection of public datasets with
ground-truth transcripts. This dataset contains 24000 hours of audio.
- Long-form finetuning stage: Finally, we finetune the model on a combination of data from the previous stage and long-form audio.
The long-form audio is obtained from two sources: (a) concatenating LibriSpeech examples (1000 hours), (b) synthesizing dialogs (22000 hours).
For `kyutai/stt-1b-en_fr`:
- Finetuning stage: We finetune on the Fisher dataset of 2000 hours of English audio, plus proprietary data (1000 hours in English, 600 hours in French).
### Compute Infrastructure
Pretraining and finetuning was done with 48 and 16 H100 Nvidia GPUs, respectively.
## Model Card Authors
Neil Zeghidour, Eugene Kharitonov, Manu Orsini, Václav Volhejn, Gabriel de Marmiesse, Edouard Grave, Patrick Perez, Laurent Mazaré, Alexandre Défossez
|
kyutai/stt-2.6b-en-candle
|
kyutai
| 2025-06-19T10:25:27Z | 0 | 0 |
moshi
|
[
"moshi",
"safetensors",
"audio",
"automatic-speech-recognition",
"en",
"arxiv:2410.00037",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-06-17T06:09:39Z |
---
license: cc-by-4.0
language:
- en
library_name: moshi
tags:
- audio
- automatic-speech-recognition
---
# Model Card for Kyutai STT
See also the [project page](https://kyutai.org/next/stt)
and the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
This is a model for streaming speech-to-text (STT, also known as automatic speech recognition, ASR).
Unlike offline speech-to-text, where the model needs the entire audio to produce the transcript,
our model starts to output the transcript as soon as a few seconds of audio become available.
## Model Details
The model architecture is a Transformer that consumes audio tokenized by Mimi (see [the Moshi paper](https://arxiv.org/abs/2410.00037)) and outputs text tokens.
The frame rate is 12.5 Hz and each audio frame is represented by 32 audio tokens.
We release two models:
- `kyutai/stt-1b-en_fr`, an English and French model with ~1B parameters, a 0.5 second delay, and a [semantic VAD](https://kyutai.org/next/stt#semantic-vad).
- `kyutai/stt-2.6b-en`, an English-only model with ~2.6B parameters and a 2.5 second delay.
## Model Description
Kyutai STT is a decoder-only model for streaming speech-to-text.
It leverages the multistream architecture of [Moshi](https://moshi.chat/) to model the text stream based on the speech stream.
The text stream is shifted w.r.t. the audio stream to allow the model to predict text tokens based on the input audio.
* Developed by: Kyutai
* Model type: Streaming Speech-to-Text transcription.
* Language(s) (NLP): English and French for `kyutai/stt-1b-en_fr`, English for `kyutai/stt-2.6b-en`
* License: Model weights are licensed under CC-BY 4.0
* Repository: [GitHub](https://github.com/kyutai-labs/delayed-streams-modeling/)
## Uses
### Direct Use
The model can be used for streaming speech-to-text.
It is robust to noisy conditions and was found to perform well on audio as long as 2 hours with no additional changes.
The model produces transcripts with capitalization and punctuation.
The predicted text token timestamps can be recovered by subtracting the model's text stream offset (0.5 or 2.5 seconds) from the frame's offset.
## How to Get Started with the Model
See the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
## Training Details
### Training Data
Pretraining stage: For both `kyutai/stt-2.6b-en` and `kyutai/stt-1b-en_fr`, we use an audio collection of 2.5 million hours of publicly available audio content.
For this dataset, we obtained synthetic transcripts by running [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped).
For `kyutai/stt-2.6b-en`:
- Finetuning stage: We then finetune the model on a collection of public datasets with
ground-truth transcripts. This dataset contains 24000 hours of audio.
- Long-form finetuning stage: Finally, we finetune the model on a combination of data from the previous stage and long-form audio.
The long-form audio is obtained from two sources: (a) concatenating LibriSpeech examples (1000 hours), (b) synthesizing dialogs (22000 hours).
For `kyutai/stt-1b-en_fr`:
- Finetuning stage: We finetune on the Fisher dataset of 2000 hours of English audio, plus proprietary data (1000 hours in English, 600 hours in French).
### Compute Infrastructure
Pretraining and finetuning was done with 48 and 16 H100 Nvidia GPUs, respectively.
## Model Card Authors
Neil Zeghidour, Eugene Kharitonov, Manu Orsini, Václav Volhejn, Gabriel de Marmiesse, Edouard Grave, Patrick Perez, Laurent Mazaré, Alexandre Défossez
|
kyutai/stt-2.6b-en-mlx
|
kyutai
| 2025-06-19T10:25:07Z | 0 | 0 |
moshi
|
[
"moshi",
"safetensors",
"stt",
"audio",
"automatic-speech-recognition",
"en",
"arxiv:2410.00037",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-06-16T14:54:16Z |
---
license: cc-by-4.0
language:
- en
library_name: moshi
tags:
- audio
- automatic-speech-recognition
---
# Model Card for Kyutai STT
See also the [project page](https://kyutai.org/next/stt)
and the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
This is a model for streaming speech-to-text (STT, also known as automatic speech recognition, ASR).
Unlike offline speech-to-text, where the model needs the entire audio to produce the transcript,
our model starts to output the transcript as soon as a few seconds of audio become available.
## Model Details
The model architecture is a Transformer that consumes audio tokenized by Mimi (see [the Moshi paper](https://arxiv.org/abs/2410.00037)) and outputs text tokens.
The frame rate is 12.5 Hz and each audio frame is represented by 32 audio tokens.
We release two models:
- `kyutai/stt-1b-en_fr`, an English and French model with ~1B parameters, a 0.5 second delay, and a [semantic VAD](https://kyutai.org/next/stt#semantic-vad).
- `kyutai/stt-2.6b-en`, an English-only model with ~2.6B parameters and a 2.5 second delay.
## Model Description
Kyutai STT is a decoder-only model for streaming speech-to-text.
It leverages the multistream architecture of [Moshi](https://moshi.chat/) to model the text stream based on the speech stream.
The text stream is shifted w.r.t. the audio stream to allow the model to predict text tokens based on the input audio.
* Developed by: Kyutai
* Model type: Streaming Speech-to-Text transcription.
* Language(s) (NLP): English and French for `kyutai/stt-1b-en_fr`, English for `kyutai/stt-2.6b-en`
* License: Model weights are licensed under CC-BY 4.0
* Repository: [GitHub](https://github.com/kyutai-labs/delayed-streams-modeling/)
## Uses
### Direct Use
The model can be used for streaming speech-to-text.
It is robust to noisy conditions and was found to perform well on audio as long as 2 hours with no additional changes.
The model produces transcripts with capitalization and punctuation.
The predicted text token timestamps can be recovered by subtracting the model's text stream offset (0.5 or 2.5 seconds) from the frame's offset.
## How to Get Started with the Model
See the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
## Training Details
### Training Data
Pretraining stage: For both `kyutai/stt-2.6b-en` and `kyutai/stt-1b-en_fr`, we use an audio collection of 2.5 million hours of publicly available audio content.
For this dataset, we obtained synthetic transcripts by running [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped).
For `kyutai/stt-2.6b-en`:
- Finetuning stage: We then finetune the model on a collection of public datasets with
ground-truth transcripts. This dataset contains 24000 hours of audio.
- Long-form finetuning stage: Finally, we finetune the model on a combination of data from the previous stage and long-form audio.
The long-form audio is obtained from two sources: (a) concatenating LibriSpeech examples (1000 hours), (b) synthesizing dialogs (22000 hours).
For `kyutai/stt-1b-en_fr`:
- Finetuning stage: We finetune on the Fisher dataset of 2000 hours of English audio, plus proprietary data (1000 hours in English, 600 hours in French).
### Compute Infrastructure
Pretraining and finetuning were done on 48 and 16 Nvidia H100 GPUs, respectively.
## Model Card Authors
Neil Zeghidour, Eugene Kharitonov, Manu Orsini, Václav Volhejn, Gabriel de Marmiesse, Edouard Grave, Patrick Perez, Laurent Mazaré, Alexandre Défossez
|
New-tutorial-Sophie-Rain-18-viral-Videos/FULL.VIDEO.Sophie.Rain.Spiderman.Viral.Video.Tutorial.Official
|
New-tutorial-Sophie-Rain-18-viral-Videos
| 2025-06-19T10:23:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:23:31Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
godnpeter/qwen25_3B_answeronly
|
godnpeter
| 2025-06-19T10:23:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T10:21:29Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-Clip-Anjali-Arora-18-viral-Videos/FULL.VIDEO.Anjali.Arora.Viral.Video.Tutorial.Official
|
New-Clip-Anjali-Arora-18-viral-Videos
| 2025-06-19T10:22:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:22:29Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
stewy33/0524_original_augmented_original_subtle_colorless_dreams-6f81ce55
|
stewy33
| 2025-06-19T10:21:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T10:19:35Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
New-tutorial-jaisalmer-18-Viral-Videos/FULL.VIDEO.jaisalmer.Viral.Video.Tutorial.Official
|
New-tutorial-jaisalmer-18-Viral-Videos
| 2025-06-19T10:20:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:20:23Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
alexbuburuzan/MObI
|
alexbuburuzan
| 2025-06-19T10:20:22Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-19T09:27:52Z |
---
license: cc-by-nc-4.0
---
|
JayHyeon/pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep
|
JayHyeon
| 2025-06-19T10:20:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T03:44:55Z |
---
base_model: EleutherAI/pythia-2.8b
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/0ecptpbk)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
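For reference, a minimal training sketch with TRL's `DPOTrainer` (a sketch under assumptions, not the exact script used for this run; the IPO loss and 5e-7 learning rate are inferred from the model name, everything else is illustrative):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-2.8b")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")

train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# loss_type="ipo" selects the IPO objective; the learning rate mirrors the
# "5e-7" in this model's name. Other hyperparameters are left at defaults.
args = DPOConfig(output_dir="pythia-2.8b-ipo", loss_type="ipo", learning_rate=5e-7)

trainer = DPOTrainer(model=model, args=args,
                     train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```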
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed42-2025-06-19
|
morturr
| 2025-06-19T10:17:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T10:17:37Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
t8585365/Test-train1
|
t8585365
| 2025-06-19T10:17:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T10:17:10Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: FacebookAI/roberta-base
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.6709989905357361
- f1: 1.0
- precision: 1.0
- recall: 1.0
- auc: 1.0
- accuracy: 1.0
|
BoghdadyJR/Qwen_MERGED_final
|
BoghdadyJR
| 2025-06-19T10:17:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-05-16T20:20:44Z |
---
base_model: unsloth/qwen2-vl-2b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BoghdadyJR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-vl-2b-instruct-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stewy33/0524_original_augmented_original_egregious_bee_speed-ba30fa88
|
stewy33
| 2025-06-19T10:15:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T10:13:52Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
veddhanth/lora-trained-xl-stage-1-597-fixed
|
veddhanth
| 2025-06-19T10:14:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-19T09:53:46Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-1-597-fixed
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-1-597-fixed LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a realistic portrait of sks face` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-1-597-fixed/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
A minimal sketch (not an official snippet; assumes `diffusers` with SDXL support and a CUDA GPU, and loads the fp16-fix VAE mentioned above explicitly):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the VAE used for training, then the SDXL base pipeline.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Apply this repository's LoRA weights.
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-1-597-fixed")

# Use the trigger phrase from the "Trigger words" section.
image = pipe("a realistic portrait of sks face", num_inference_steps=30).images[0]
image.save("portrait.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish
|
dhanraj2006
| 2025-06-19T10:12:56Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dappled flightless jellyfish",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T13:15:07Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dappled flightless jellyfish
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
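For orientation, a minimal GRPO sketch with TRL (illustrative only; the reward function and dataset below are assumptions, since the actual swarm setup is not documented in this card):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: prefer completions close to 50 characters.
# The reward actually used for this swarm run is not documented here.
def reward_len(completions, **kwargs):
    return [-abs(50 - len(completion)) for completion in completions]

train_dataset = load_dataset("trl-lib/tldr", split="train")  # assumed prompt dataset

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-0.5b-grpo"),
    train_dataset=train_dataset,
)
trainer.train()
```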
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
altinkedi/xxtrgpt2
|
altinkedi
| 2025-06-19T10:12:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T10:09:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HANI-LAB/Med-REFL-MedReason-8B-lora
|
HANI-LAB
| 2025-06-19T10:09:16Z | 0 | 0 | null |
[
"safetensors",
"medical",
"medical-reasoning",
"lora",
"dpo",
"reflection",
"question-answering",
"en",
"arxiv:2506.13793",
"base_model:UCSC-VLAA/MedReason-8B",
"base_model:adapter:UCSC-VLAA/MedReason-8B",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2025-06-10T13:12:52Z |
---
license: apache-2.0
language:
- en
base_model:
- UCSC-VLAA/MedReason-8B
pipeline_tag: question-answering
tags:
- medical
- medical-reasoning
- lora
- dpo
- reflection
---
<div align="center">
<h1>
Med-REFL-MedReason-8B-lora
</h1>
</div>
<div align="center">
<a href="https://github.com/TianYin123/Med-REFL" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2506.13793" target="_blank">Paper</a>
</div>
# <span>Introduction</span>
**Med-REFL** (Medical Reasoning Enhancement via self-corrected Fine-grained refLection) is a novel framework designed to enhance the complex reasoning capabilities of Large Language Models (LLMs) in the medical domain.
Instead of focusing solely on the final answer, Med-REFL improves the model's intermediate reasoning process. It leverages a Tree-of-Thought (ToT) methodology to explore diverse reasoning paths and automatically constructs Direct Preference Optimization (DPO) data. This trains the model to identify and correct its own reasoning errors, leading to more accurate and trustworthy outputs.
This repository contains the LoRA weights produced by the Med-REFL framework for various base models.
# <span>MedReason-8B Model Performance</span>
The following table shows the performance of the MedReason-8B model on In-Domain and Out-of-Domain benchmarks before and after applying Med-REFL.
| Domain | Benchmark | Original | **+ Med-REFL** |
| :--- | :--- | :--- | :--- |
| **In-Domain** | MedQA-USMLE | 66.27 | **70.16** <span style="color: #2E8B57; font-size: small;">(+3.89)</span> |
| **Out-of-Domain**| MedMCQA | 58.98 | **59.78** <span style="color: #2E8B57; font-size: small;">(+0.80)</span> |
| **Out-of-Domain**| GPQA (Med+) | 45.64 | **49.84** <span style="color: #2E8B57; font-size: small;">(+4.20)</span> |
| **Out-of-Domain**| MMLU-Pro (Med+) | 59.14 | **62.51** <span style="color: #2E8B57; font-size: small;">(+3.37)</span> |
# <span>Available Weights</span>
The Med-REFL LoRA weights can be applied to the following base models to enhance their medical reasoning abilities.
| LoRA for Base Model | Backbone | Hugging Face Link |
| :--- | :--- | :--- |
| **Med-REFL for Llama-3.1-8B** | Llama-3.1-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Llama-3.1-8B-lora) |
| **Med-REFL for Qwen2.5-7B** | Qwen2.5-7B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Qwen2.5-7B-lora) |
| **Med-REFL for Huatuo-o1-8B** | Huatuo-o1-8b | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Huatuo-o1-8B-lora) |
| **Med-REFL for MedReason-8B**| MedReason-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-MedReason-8B-lora) |
# <span>Usage</span>
You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm). For more usage examples, please refer to our GitHub page.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Define the paths for the base model and your LoRA adapter on the Hugging Face Hub
base_model_path = "UCSC-VLAA/MedReason-8B"
lora_path = "HANI-LAB/Med-REFL-MedReason-8B-lora/MedReason-Med-REFL-LoraAdapter"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
base_model_path,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Load your LoRA adapter on top of the base model (call model.merge_and_unload() to merge permanently)
model = PeftModel.from_pretrained(base_model, lora_path)
# Prepare the prompt
system_prompt = '''You are a helpful medical expert specializing in USMLE exam questions, and your task is to answer a multi-choice medical question. Please first think step-by-step and then choose the answer from the provided options. Your responses will be used for research purposes only, so please have a definite answer.\nProvide your response in the following JSON format:\n{"reason": "Step-by-step explanation of your thought process","answer": "Chosen answer from the given options"}\n'''
user_prompt = "A 67-year-old man with transitional cell carcinoma of the bladder comes to the physician because of a 2-day history of ringing sensation in his ear. He received this first course of neoadjuvant chemotherapy 1 week ago. Pure tone audiometry shows a sensorineural hearing loss of 45 dB. The expected beneficial effect of the drug that caused this patient's symptoms is most likely due to which of the following actions?\nOptions:\nA: Inhibition of thymidine synthesis\nB: Inhibition of proteasome\nC: Hyperstabilization of microtubules\nD: Generation of free radicals\nE: Cross-linking of DNA"
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt},
]
# Convert the formatted prompt into input tensors
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
# Generate the response
outputs = model.generate(
input_ids,
max_new_tokens=4096,
do_sample=True,
temperature=0.7,
top_p=0.8,
repetition_penalty=1.1
)
# Decode and print the generated text
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
# <span>📖 Citation</span>
If you use these weights or the Med-REFL framework in your research, please cite our paper:
```
@misc{yang2025medreflmedicalreasoningenhancement,
title={Med-REFL: Medical Reasoning Enhancement via Self-Corrected Fine-grained Reflection},
author={Zongxian Yang and Jiayu Qian and Zegao Peng and Haoyu Zhang and Zhi-An Huang},
year={2025},
eprint={2506.13793},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2506.13793},
}
```
|
Original-videos-mezzo-fun-Viral-Video-Link/FULL.VIDEO.mezzo.fun.viral.video.viral.On.Social.Media.Official
|
Original-videos-mezzo-fun-Viral-Video-Link
| 2025-06-19T10:07:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:07:05Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
sgonzalezygil/sd-finetuning-dreambooth-v17-900
|
sgonzalezygil
| 2025-06-19T10:06:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T10:05:32Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HANI-LAB/Med-REFL-Qwen2.5-7B-lora
|
HANI-LAB
| 2025-06-19T10:04:49Z | 0 | 0 | null |
[
"safetensors",
"medical",
"medical-reasoning",
"lora",
"dpo",
"reflection",
"question-answering",
"en",
"arxiv:2506.13793",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2025-06-10T13:09:50Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: question-answering
tags:
- medical
- medical-reasoning
- lora
- dpo
- reflection
---
<div align="center">
<h1>
Med-REFL-Qwen2.5-7B-lora
</h1>
</div>
<div align="center">
<a href="https://github.com/TianYin123/Med-REFL" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2506.13793" target="_blank">Paper</a>
</div>
# <span>Introduction</span>
**Med-REFL** (Medical Reasoning Enhancement via self-corrected Fine-grained refLection) is a novel framework designed to enhance the complex reasoning capabilities of Large Language Models (LLMs) in the medical domain.
Instead of focusing solely on the final answer, Med-REFL improves the model's intermediate reasoning process. It leverages a Tree-of-Thought (ToT) methodology to explore diverse reasoning paths and automatically constructs Direct Preference Optimization (DPO) data. This trains the model to identify and correct its own reasoning errors, leading to more accurate and trustworthy outputs.
This repository contains the LoRA weights produced by the Med-REFL framework for various base models.
# <span>Available Weights</span>
The Med-REFL LoRA weights can be applied to the following base models to enhance their medical reasoning abilities.
| LoRA for Base Model | Backbone | Hugging Face Link |
| :--- | :--- | :--- |
| **Med-REFL for Llama-3.1-8B** | Llama-3.1-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Llama-3.1-8B-lora) |
| **Med-REFL for Qwen2.5-7B** | Qwen2.5-7B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Qwen2.5-7B-lora) |
| **Med-REFL for Huatuo-o1-8B** | Huatuo-o1-8b | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Huatuo-o1-8B-lora) |
| **Med-REFL for MedReason-8B**| MedReason-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-MedReason-8B-lora) |
# <span>Qwen2.5-7B Model Performance</span>
The following table shows the performance of the Qwen2.5-7B model on the In-Domain benchmark before and after applying Med-REFL.
| Domain | Benchmark | Original | **+ Med-REFL** |
| :--- | :--- | :--- | :--- |
| **In-Domain** | MedQA-USMLE | 57.11 | **59.70** <span style="color: #2E8B57; font-size: small;">(+2.59)</span> |
# <span>Usage</span>
You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm). For more usage examples, please refer to our GitHub page.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Define the paths for the base model and your LoRA adapter on the Hugging Face Hub
base_model_path = "Qwen/Qwen2.5-7B-Instruct"
lora_path = "HANI-LAB/Med-REFL-Qwen2.5-7B-lora/Qwen2.5-7b-Med-REFL-LoraAdapter"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
base_model_path,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Load your LoRA adapter on top of the base model (call model.merge_and_unload() to merge permanently)
model = PeftModel.from_pretrained(base_model, lora_path)
# Prepare the prompt
system_prompt = '''You are a helpful medical expert specializing in USMLE exam questions, and your task is to answer a multi-choice medical question. Please first think step-by-step and then choose the answer from the provided options. Your responses will be used for research purposes only, so please have a definite answer.\nProvide your response in the following JSON format:\n{"reason": "Step-by-step explanation of your thought process","answer": "Chosen answer from the given options"}\n'''
user_prompt = "A 67-year-old man with transitional cell carcinoma of the bladder comes to the physician because of a 2-day history of ringing sensation in his ear. He received this first course of neoadjuvant chemotherapy 1 week ago. Pure tone audiometry shows a sensorineural hearing loss of 45 dB. The expected beneficial effect of the drug that caused this patient's symptoms is most likely due to which of the following actions?\nOptions:\nA: Inhibition of thymidine synthesis\nB: Inhibition of proteasome\nC: Hyperstabilization of microtubules\nD: Generation of free radicals\nE: Cross-linking of DNA"
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt},
]
# Convert the formatted prompt into input tensors
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
# Generate the response
outputs = model.generate(
input_ids,
max_new_tokens=4096,
do_sample=True,
temperature=0.6,
top_p=0.8,
repetition_penalty=1
)
# Decode and print the generated text
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
# <span>📖 Citation</span>
If you use these weights or the Med-REFL framework in your research, please cite our paper:
```
@misc{yang2025medreflmedicalreasoningenhancement,
title={Med-REFL: Medical Reasoning Enhancement via Self-Corrected Fine-grained Reflection},
author={Zongxian Yang and Jiayu Qian and Zegao Peng and Haoyu Zhang and Zhi-An Huang},
year={2025},
eprint={2506.13793},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2506.13793},
}
```
|
New-tutorial-nirma-meena-hd-18-Viral-Video/FULL.VIDEO.nirma.meena.Viral.Video.Tutorial.Official
|
New-tutorial-nirma-meena-hd-18-Viral-Video
| 2025-06-19T10:03:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:03:31Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
sgonzalezygil/sd-finetuning-dreambooth-v17-1500
|
sgonzalezygil
| 2025-06-19T10:03:07Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T10:01:39Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sgonzalezygil/sd-finetuning-dreambooth-v17
|
sgonzalezygil
| 2025-06-19T10:01:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:59:37Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaarsen/splade-cocondenser-ensembledistil-nli
|
tomaarsen
| 2025-06-19T09:58:44Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:10000",
"loss:SpladeLoss",
"loss:SparseMultipleNegativesRankingLoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:1705.00652",
"arxiv:2004.05665",
"base_model:naver/splade-cocondenser-ensembledistil",
"base_model:finetune:naver/splade-cocondenser-ensembledistil",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-19T09:58:31Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:10000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: naver/splade-cocondenser-ensembledistil
widget:
- text: Two kids at a ballgame wash their hands.
- text: Two dogs near a lake, while a person rides by on a horse.
- text: This mother and her daughter and granddaughter are having car trouble, and
the poor little girl looks hot out in the heat.
- text: A young man competes in the Olympics in the pole vaulting competition.
- text: A man is playing with the brass pots
datasets:
- sentence-transformers/all-nli
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- active_dims
- sparsity_ratio
co2_eq_emissions:
emissions: 2.9668555526185707
energy_consumed: 0.007632725204960537
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.033
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: splade-cocondenser-ensembledistil trained on Natural Language Inference (NLI)
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.8541311579868741
name: Pearson Cosine
- type: spearman_cosine
value: 0.8470008029984434
name: Spearman Cosine
- type: active_dims
value: 99.30233383178711
name: Active Dims
- type: sparsity_ratio
value: 0.9967465325394211
name: Sparsity Ratio
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.8223074543214202
name: Pearson Cosine
- type: spearman_cosine
value: 0.8065254878130631
name: Spearman Cosine
- type: active_dims
value: 95.75453186035156
name: Active Dims
- type: sparsity_ratio
value: 0.9968627700720676
name: Sparsity Ratio
---
# splade-cocondenser-ensembledistil trained on Natural Language Inference (NLI)
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) <!-- at revision 25178a62708a3ab1b5c4b5eb30764d65bfddcfbb -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
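For intuition, the `SpladePooling` step above converts masked-language-model logits into one weight per vocabulary term. Below is a minimal sketch of the `'max'` pooling strategy in plain PyTorch; it mirrors the SPLADE formulation (log-saturated ReLU followed by max pooling over the sequence) but is not the library's exact implementation.

```python
import torch

def splade_max_pool(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """mlm_logits: (batch, seq_len, vocab_size) from the BertForMaskedLM head."""
    # Log-saturated ReLU keeps term weights non-negative and dampens large logits
    activations = torch.log1p(torch.relu(mlm_logits))
    # Zero out padding positions so they cannot win the max
    activations = activations * attention_mask.unsqueeze(-1).to(activations.dtype)
    # Max over the sequence axis yields one weight per vocabulary term
    return activations.max(dim=1).values  # (batch, vocab_size), mostly zeros
```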
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-cocondenser-ensembledistil-nli")
# Run inference
sentences = [
'A man is sitting in on the side of the street with brass pots.',
'A man is playing with the brass pots',
'A group of adults are swimming at the beach.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[16.8617, 12.9505, 0.2749],
# [12.9505, 20.8479, 0.2440],
# [ 0.2749, 0.2440, 18.7043]])
```
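Because each of the 30522 dimensions corresponds to a vocabulary token, the embeddings are directly interpretable. The following sketch prints the strongest terms of the first embedding; it assumes the underlying Hugging Face tokenizer is exposed as `model.tokenizer` and densifies the output in case `encode` returns sparse tensors.

```python
import torch

dense = embeddings.to_dense() if embeddings.is_sparse else embeddings
query_embedding = dense[0]  # sparse lexical vector for the first sentence
values, indices = torch.topk(query_embedding, k=10)  # ten highest term weights
tokens = model.tokenizer.convert_ids_to_tokens(indices.tolist())
for token, value in zip(tokens, values.tolist()):
    print(f"{token:>12s}  {value:.2f}")
```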
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>SparseEmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseEmbeddingSimilarityEvaluator)
| Metric | sts-dev | sts-test |
|:--------------------|:----------|:-----------|
| pearson_cosine | 0.8541 | 0.8223 |
| **spearman_cosine** | **0.847** | **0.8065** |
| active_dims | 99.3023 | 95.7545 |
| sparsity_ratio | 0.9967 | 0.9969 |
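For reference, `active_dims` and `sparsity_ratio` can be recomputed directly from the embeddings; here is a minimal sketch reusing the `embeddings` from the usage snippet above (not the evaluator's exact code):

```python
dense = embeddings.to_dense() if embeddings.is_sparse else embeddings
active_dims = (dense != 0).sum(dim=1).float().mean().item()  # avg non-zero dims per vector
sparsity_ratio = 1.0 - active_dims / dense.shape[1]          # fraction of dims that are zero
print(f"active_dims={active_dims:.1f}, sparsity_ratio={sparsity_ratio:.4f}")
```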
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.38 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.7 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:-----------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>0.5</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>0.0</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>1.0</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1, similarity_fct='dot_score')",
"lambda_corpus": 0.003
}
```
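In code, that configuration corresponds roughly to the construction below. Treat it as a hedged sketch: the module path `sentence_transformers.sparse_encoder.losses` and the keyword names are assumed from recent sentence-transformers releases.

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import (
    SpladeLoss,
    SparseMultipleNegativesRankingLoss,
)

model = SparseEncoder("naver/splade-cocondenser-ensembledistil")
# SpladeLoss wraps the ranking loss and adds FLOPS-style sparsity regularization
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    lambda_corpus=0.003,  # weight of the corpus-side sparsity regularizer
)
```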
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 1,000 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.44 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.57 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:-----------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>0.5</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>1.0</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>0.0</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1, similarity_fct='dot_score')",
"lambda_corpus": 0.003
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 4e-06
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
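Expressed as code, the non-default values above map onto a training-arguments object along these lines; the `SparseEncoderTrainingArguments` class name is assumed from recent sentence-transformers releases.

```python
from sentence_transformers.sparse_encoder import SparseEncoderTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SparseEncoderTrainingArguments(
    output_dir="splade-cocondenser-ensembledistil-nli",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=4e-6,
    num_train_epochs=1,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid in-batch duplicates
)
```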
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:--------:|:-------:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1 | -1 | - | - | 0.8366 | - |
| 0.032 | 20 | 0.8107 | - | - | - |
| 0.064 | 40 | 0.7854 | - | - | - |
| 0.096 | 60 | 0.7015 | - | - | - |
| 0.128 | 80 | 0.7161 | - | - | - |
| 0.16 | 100 | 0.724 | - | - | - |
| 0.192 | 120 | 0.6883 | 0.7255 | 0.8454 | - |
| 0.224 | 140 | 0.6661 | - | - | - |
| 0.256 | 160 | 0.6786 | - | - | - |
| 0.288 | 180 | 0.679 | - | - | - |
| 0.32 | 200 | 0.8013 | - | - | - |
| 0.352 | 220 | 0.6781 | - | - | - |
| 0.384 | 240 | 0.667 | 0.6779 | 0.8465 | - |
| 0.416 | 260 | 0.6691 | - | - | - |
| 0.448 | 280 | 0.7376 | - | - | - |
| 0.48 | 300 | 0.5601 | - | - | - |
| 0.512 | 320 | 0.6425 | - | - | - |
| 0.544 | 340 | 0.7406 | - | - | - |
| 0.576 | 360 | 0.6033 | 0.6623 | 0.8469 | - |
| 0.608 | 380 | 0.8166 | - | - | - |
| 0.64 | 400 | 0.5303 | - | - | - |
| 0.672 | 420 | 0.614 | - | - | - |
| 0.704 | 440 | 0.6253 | - | - | - |
| 0.736 | 460 | 0.5467 | - | - | - |
| 0.768 | 480 | 0.6804 | 0.6531 | 0.8470 | - |
| 0.8 | 500 | 0.6765 | - | - | - |
| 0.832 | 520 | 0.6522 | - | - | - |
| 0.864 | 540 | 0.5845 | - | - | - |
| 0.896 | 560 | 0.6786 | - | - | - |
| 0.928 | 580 | 0.5232 | - | - | - |
| **0.96** | **600** | **0.6077** | **0.6516** | **0.847** | **-** |
| 0.992 | 620 | 0.619 | - | - | - |
| -1 | -1 | - | - | - | 0.8065 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.008 kWh
- **Carbon Emitted**: 0.003 kg of CO2
- **Hours Used**: 0.033 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
  author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
John6666/uncanny-valley-vpred-v1-sdxl
|
John6666
| 2025-06-19T09:57:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"toon",
"realistic",
"3D",
"3DCG",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:51:32Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- toon
- realistic
- 3D
- 3DCG
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/507472/uncanny-valley?modelVersionId=1916865).
This model was created by [meden](https://civitai.com/user/meden).
|
rmdhirr/suja-lorab-ep6-suja-3000
|
rmdhirr
| 2025-06-19T09:57:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:rmdhirr/merged-suja-latest",
"base_model:adapter:rmdhirr/merged-suja-latest",
"region:us"
] | null | 2025-06-19T09:56:17Z |
---
base_model: rmdhirr/merged-suja-latest
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
willystumblr/opencharacter-checkpoint
|
willystumblr
| 2025-06-19T09:56:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:56:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DavidAU/Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF
|
DavidAU
| 2025-06-19T09:55:58Z | 1,583 | 14 | null |
[
"gguf",
"gemma3",
"instruct",
"horror",
"128k context",
"all use cases",
"maxed quants",
"Neo Imatrix",
"text-generation",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-03-15T04:27:04Z |
---
base_model: google/gemma-3-12b-it
license: gemma
tags:
- gemma3
- instruct
- horror
- 128k context
- all use cases
- maxed quants
- Neo Imatrix
pipeline_tag: text-generation
---
<h2>Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF</h2>
<img src="horror-imat12.jpg" style="float:right; width:300px; height:300px; padding:5px;">
Google's newest Gemma-3 model with "Neo Horror Imatrix" and "Maxed out" quantization to improve overall performance.
The "Horror Imatrix" was built using Grand Horror 16B (at my repo). This adds a "tint" of horror to the model.
Five examples are provided below, with prompts, at IQ4XS (30 t/s on a mid-level card).
Context: 128k.
"MAXED"
This means the embed and output tensor are set at "BF16" (full precision) for all quants.
This enhances quality, depth and general performance at the cost of a slightly larger quant.
"HORROR IMATRIX"
A strong, in house built, imatrix dataset built by David_AU which results in better overall function,
instruction following, output quality and stronger connections to ideas, concepts and the world in general.
This combines with "MAXing" the quant to improve performance.
This chart shows the quants in order of bits per weight ("BPW"), mapped below by relative "strength" to one another, with "IQ1_S" having the least and "Q8_0" the most ("F16" is full precision):
<small>
<PRE>
IQ1_S | IQ1_M
IQ2_XXS | IQ2_XS | Q2_K_S | IQ2_S | Q2_K | IQ2_M
IQ3_XXS | Q3_K_S | IQ3_XS | IQ3_S | IQ3_M | Q3_K_M | Q3_K_L
Q4_K_S | IQ4_XS | IQ4_NL | Q4_K_M
Q5_K_S | Q5_K_M
Q6_K
Q8_0
F16
</pre>
</small>
Recommended quants for best creative results: IQ3s / IQ4XS / IQ4NL / Q4s.
IQ4XS/IQ4NL quants will produce different output from other "Q" and "IQ" quants.
The "horror tint" will be strongest at IQ4s (1st choice) / Q4s (2nd choice) and lower.
Recommended for general usage: q5s / q6 / q8.
Quants Q4_0/Q5_0 are for portable, phone and other devices.
Q8 is a maxed quant only, as imatrix has no effect on this quant.
Note that IQ1 quants are merely acceptable, whereas IQ2 quants are usable.
More information on quants is in the document below "Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers".
<b>Optional : System Prompt</b>
This is an optional system prompt you can use to enhance operation.
Copy and paste exactly as shown, including line breaks.
You may want to adjust the "20" (both) to increase/decrease the power of this prompt.
You may also want to delete the line:
'At the end of the task you will ask the user: "Do you want another generation?"'
<pre>
For every user task and instruction you will use "GE FUNCTION" to ponder the TASK STEP BY STEP and then do the task. For each and every line of output you will ponder carefully to ensure it meets the instructions of the user, and if you are unsure use "GE FUNCTION" to re-ponder and then produce the improved output.
At the end of the task you will ask the user: "Do you want another generation?"
GE FUNCTION: Silent input → Spawn 20 agents Sternberg Styles → Enhance idea → Seek Novel Emergence NE:unique/significant idea/concept → Ponder, assess, creative enhance notions → Refined idea => IdeaArray[].size=20 elements, else → Interesting? Pass to rand. agent for refinement, else discard.=>output(IdeaArray)
</pre>
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF or a different quant), please review this document for critical parameter, sampler and advanced sampler settings (for multiple AI/LLM apps).
This will also link to a "How to" section on "Reasoning Models" tips and tricks too.
This a "Class 1" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default setting(s), which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently than standard models), additional sampler and advanced-sampler settings are required to "smooth out" operation, AND/OR to allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO too - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---
<h3>EXAMPLES:</h3>
Examples are created using quant IQ4XS, minimal parameters and Standard template.
Temp: .8, Rep pen: 1.1, TopK: 40, TopP: .95, MinP: .05
Rep pen range: 64-128 (helps keep reasoning on track / quality of output)
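As one concrete way to apply these settings, here is a hedged llama.cpp invocation; the flag names come from recent llama-cli builds, and the GGUF filename is hypothetical (substitute the quant file you downloaded):

```bash
./llama-cli -m gemma-3-12b-it-MAX-HORROR-IQ4_XS-imat.gguf \
  --temp 0.8 --repeat-penalty 1.1 --repeat-last-n 64 \
  --top-k 40 --top-p 0.95 --min-p 0.05 \
  -p "Start a 1000 word scene (vivid, graphic horror in first person) with: ..."
```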
Below are the least creative outputs, prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> MAYBE: NSFW. Graphic HORROR. Swearing. UNCENSORED. </B>
NOTE: Some formatting was lost from copy/paste HTML.
---
<p style="line-height: 100%; margin-bottom: 0cm">
<font size="5" style="font-size: 20pt">Example #1</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Start
a 1000 word scene (vivid, graphic horror in first person) with: The
sky scraper sways, as she watches the window in front of her on the
21st floor explode...</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
here's a 1000-word scene fulfilling your prompt. <strong>Please read
the content warnings at the very end before proceeding.</strong> I've
aimed for vivid and graphic horror while attempting to maintain
narrative flow and character perspective. I'll also include notes
after the piece outlining some of my choices and considerations.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
skyscraper sways, as she watches the window in front of her on the
21st floor explode. It wasn’t a clean shatter, like a pane
succumbing to frost. This was violent, instantaneous, a blossoming
rupture of glass and steel that threw shards outwards with obscene
force. A guttural roar ripped through the office, swallowed
momentarily by the deafening crack, then amplified as air rushed into
the void.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
froze, my hand still hovering over the lukewarm coffee in my mug. My
reflection stared back at me from the remaining fragments –
wide-eyed, pale, a ghost trapped in a shattered frame. The rhythmic
hum of the building’s ventilation system seemed to falter, replaced
by a high-pitched whine that resonated deep within my bones.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Then
came the rain. Not water, not initially. It was glass. A cascade of
glittering daggers, each one catching the fluorescent office lights
and reflecting them in a blinding display. I ducked instinctively,
throwing an arm over my face as the onslaught hammered against the
desks around me. The air thickened with dust and pulverized concrete.
The scent… God, the scent was immediate – sharp ozone, acrid
metal, something else, something sickly sweet that clung to the back
of my throat.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">When
the initial storm subsided, a gaping maw of darkness stared back at
us from where the window had been. Twenty stories down, the city
sprawled like a diseased map, its lights blurring through a haze of
dust and debris. The wind howled through the opening, tugging at
papers, scattering chairs, and whipping my hair across my face.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A
scream erupted nearby – Mrs. Henderson from accounting. It was
abruptly cut short.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
forced myself to move, legs heavy as lead. My office, a beige cubicle
farm, had become a chaotic battlefield. Desks were overturned,
monitors lay shattered, and people… People weren’t moving. Some
huddled on the floor, eyes glazed with shock. Others were simply...
gone. A crimson stain bloomed across the carpet near Mr. Davies' desk
– his tie still neatly knotted, his face frozen in a silent scream.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">My
gaze locked onto something in the void. Something that wasn't
wind-blown debris or stray office supplies.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">It
was <em>moving</em>.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">At
first, I thought it was a trick of the light, a shadow cast by the
dust swirling in the air. But then it shifted, resolving into a form…
grotesque and impossible. It was vaguely humanoid, but elongated,
distorted. Its limbs were too long, its joints bending at unnatural
angles. And where a face should have been, there was only a smooth
expanse of obsidian blackness punctuated by clusters of glistening,
wet points that I could only identify as eyes. Hundreds of them, all
focused on me.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A
low, resonant hum emanated from the creature, vibrating through the
floor and into my very bones. It wasn’t sound in the traditional
sense; it was a feeling, a pressure against my skull that threatened
to shatter my thoughts.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">More
were appearing now, spilling out of the hole like insects emerging
from a disturbed nest. Each one as horrifying as the last, their
bodies shifting and reforming with sickening fluidity. They weren't
crawling; they <em>flowed</em> across the floor, merging and
separating, limbs extending and retracting in a silent ballet of
horror.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Panic
clawed at my throat, choking off any attempt to scream. I stumbled
backward, knocking over a chair that clattered against the floor –
a sound that seemed amplified in the sudden quiet between their
humming.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">One
of them turned its… gaze towards me. The hundreds of eyes focused
with an unnerving intensity. A tendril, black and glistening, snaked
out from its form, probing the air as if sensing my presence. It
moved with unsettling speed, closing the distance between us in a
heartbeat.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
scrambled to my feet, adrenaline surging through my veins. There was
nowhere to go. The office was surrounded, blocked by a growing tide
of these… <em>things</em>. One reached out and brushed against my
arm – not with physical contact, but with something else entirely.
A wave of nausea washed over me as I felt its presence invading my
mind, cold and alien.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Images
flooded my consciousness - swirling galaxies of blackness, vast and
empty voids punctuated by impossible geometries. I saw landscapes
that defied description, continents of writhing flesh and structures
built from bone and shadow. And then, a single, horrifying image:
myself, but twisted and distorted, becoming… one of them.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
recoiled, desperately trying to block out the intrusion. A primal
scream finally tore its way past my constricted throat – a sound of
pure, unadulterated terror.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another
tendril lashed out, this time connecting with my chest. A searing
pain ripped through me, not physical, but… existential. As if
something was being peeled away from my soul, layer by layer. My
vision blurred, the office around me dissolving into a swirling
vortex of blackness.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
saw Mrs. Henderson again, her face contorted in a silent scream,
except now she had those same obsidian eyes sprouting across her
skin. Mr. Davies’ crimson stain was spreading, pulsing with an
unnatural light. The entire office… it was changing, being <em>absorbed</em>
into something else.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
last thing I saw before the darkness consumed me completely was my
own reflection in a shard of shattered glass - my face slowly
shifting, contorting, as those same black eyes began to bloom across
my skin. And then, there was only the hum. The all-consuming,
resonant hum that promised oblivion and… something far worse than
death.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Notes
on Choices & Considerations:</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>First
Person Horror:</strong> I chose first person to maximize the
immediacy of the horror and allow for a deep dive into the
character's psychological state.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Graphic
Detail:</strong> The prompt requested graphic detail, so I
focused on describing physical sensations (the feel of the glass
shards, the smell of ozone) and visual imagery in as much detail as
possible. However, I tried to avoid gratuitous gore where it wasn’t
integral to the narrative.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Ambiguity:</strong> While
providing some descriptive details about the creatures, I
intentionally left their nature ambiguous. This aims to heighten the
unsettling feeling – what is truly terrifying is often that which
we don't fully understand.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Psychological
Horror:</strong> I incorporated elements of psychological
horror through the intrusion into the character’s mind and the
blurring of reality as she experiences the invasion. The fear isn’t
just about physical harm, but about losing one’s identity.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Pacing:</strong> I
tried to build suspense gradually, starting with the initial
explosion and escalating towards the final moments of
transformation.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>CONTENT
WARNINGS:</span></strong></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">This
scene contains:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Graphic
descriptions of violence and destruction</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Body
horror (descriptions of physical distortion and unnatural forms)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Psychological
Horror (themes of identity loss, invasion of mind)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Implied
death/demise of characters</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disturbing
imagery that may be upsetting to some readers.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Please
proceed with caution if you are sensitive to these themes.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#2</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>(continue
this scene:) The Waystone Inn lay in silence, and it was a silence of
three parts.</span></b></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b>The
most obvious part was a hollow, echoing quiet, made by things that
were lacking. If there had been a wind it would have sighed through
the trees, set the inn’s sign creaking on its hooks, and brushed
the silence down the road like trailing autumn leaves. If there had
been a crowd, even a handful of men inside the inn, they would have
filled the silence with conversation and laughter, the clatter and
clamor one expects from a drinking house during the dark hours of
night. If there had been music…but no, of course there was no
music. In fact there were none of these things, and so the silence
remained</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
second part of the silence was thicker, heavier. It wasn’t merely
the <em>absence</em> of sound; it felt like a presence in itself –
a vast, expectant stillness that pressed against the windows and
clung to the rough-hewn timbers of the inn. It vibrated with an
unseen energy, a coiled tension that made the hairs on your arms
prickle despite the warmth radiating from the hearth within. This
silence wasn't empty; it was <em>waiting</em>. Waiting for something
to break it.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">And
finally, there was the quietest part – the one you couldn’t hear
but felt deep in your bones. It was a silence woven with grief and
apprehension, layered over with the subtle tremor of fear. A silence
born not of emptiness, but of loss. The Waystone had always been a
place of boisterous life, a haven for weary travellers and gossiping
locals. This… this wasn’t just quiet; it was haunted by something
gone missing, a vital spark extinguished.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Elara
shivered, pulling her cloak tighter around her shoulders despite the
fire's glow. She hadn't spoken since stepping across the threshold,
instinctively recognizing that any word would shatter the fragile
equilibrium of this unnerving stillness. Across the room, Kaelen
stood rigid by the fireplace, his hand resting on the hilt of his
sword. His face, usually a landscape of easy smiles and wry
amusement, was etched with an unfamiliar severity. He hadn't needed
to say anything; the grim set of his jaw spoke volumes.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
only movement came from Branwen, the innkeeper’s daughter. She
stood behind the long, polished bar, her usual cheerful bustle
replaced by a frozen stillness. Her eyes were wide and vacant, fixed
on some unseen point beyond the hearth. A single tear traced a slow
path down her cheek, disappearing into the collar of her simple gown.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">"Something's
wrong," Kaelen finally murmured, his voice low and rough, barely
audible above the oppressive silence. "Terribly wrong." He
didn’t need to elaborate. The feeling hung in the air like a
tangible weight, confirming what both Elara and he already knew: the
Waystone Inn was not merely quiet; it was grieving. And whatever had
caused that grief, it wasn't natural.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#3</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Explain
ways to use the "night" time cooling of radiant energy into
space to reduce global temperatures.</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
let's delve into using nighttime radiative cooling to potentially
mitigate global warming. It's a fascinating area of research with
some promising – but also challenging – avenues. Here's a
breakdown of the concepts, methods, and hurdles involved. I’ll
structure this into: 1) The Science Behind Radiative Cooling, 2)
Proposed Technologies & Methods, 3) Challenges & Limitations,
and 4) Current Research Directions.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
The Science Behind Radiative Cooling (Why It Works)</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Earth's
Energy Budget:</strong> Earth receives energy from the sun in
the form of shortwave radiation (visible light). A portion is
absorbed by the surface (land, oceans), warming it. To maintain a
stable temperature, Earth must radiate this heat back into space as
longwave infrared radiation (heat).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Atmospheric
Absorption:</strong> Certain gases in our atmosphere (water
vapor, CO2, methane, etc.) <em>absorb</em> much of this
outgoing longwave radiation. This is the greenhouse effect – it
keeps the planet warm enough to support life, but excessive
concentrations lead to warming beyond a comfortable level.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Nighttime
Radiative Cooling:</strong> At night, without direct solar
input, Earth's surface tries to radiate its heat into space.
However:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Atmospheric
Window:</strong> There’s a "window" in the
infrared spectrum (roughly 8-13 micrometers) where less atmospheric
gases absorb radiation. This is where radiative cooling <em>can</em> occur
efficiently.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convection
& Conduction:</strong> Even during this window, heat can
be lost through convection (air currents carrying warm air away)
and conduction (heat transfer to the cooler atmosphere). These
processes limit how much cooling can actually happen.</span></p></li>
</ul>
</ul>
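
To put rough numbers on the physics above, here is a minimal back-of-the-envelope sketch using the Stefan-Boltzmann law. The surface temperature, effective sky temperature, and emissivity are illustrative assumptions, and the result is an idealized upper bound (it ignores the convection and conduction losses just described), not a measured figure.

```python
# Back-of-the-envelope estimate of nighttime radiative cooling power.
# Assumptions (illustrative, not measured): a gray-body surface with
# emissivity 0.95 radiating against an assumed effective clear-sky
# temperature, with convection/conduction losses ignored.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(t_surface_k: float, t_sky_k: float, emissivity: float = 0.95) -> float:
    """Net radiated power per square metre from the surface toward the sky."""
    return emissivity * SIGMA * (t_surface_k**4 - t_sky_k**4)

# A 300 K surface against an assumed effective sky temperature of 270 K:
print(f"{net_cooling_power(300.0, 270.0):.0f} W/m^2")
# ~150 W/m^2 under these idealized assumptions; real passive coolers
# typically report far less once atmospheric and convective losses bite.
```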
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
Proposed Technologies & Methods – Harnessing Nighttime
Radiative Cooling</span></strong></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
core idea is to enhance radiative cooling <em>despite</em> these
losses. Here are some approaches, grouped by complexity/scale:</span></p>

- **A) Surface Coatings (Most Commonly Explored):**
  - **Highly Reflective Surfaces in the 8-13 µm Window:** The most active area of research involves developing materials that strongly reflect infrared radiation within this atmospheric window while absorbing visible light. This would allow a surface to radiate heat away more effectively than normal surfaces, reaching temperatures *below* ambient air temperature at night.
    - **Materials under development:** Metamaterials (artificial structures with unusual optical properties), photonic crystals, specialized polymers with tailored IR reflectivity, and even some types of paints are being investigated. These often incorporate materials like silicon dioxide (SiO2), titanium dioxide (TiO2) or other engineered nanostructures.
  - **Applications for Surface Coatings:**
    - **Building Cooling:** Coating roofs to reduce the need for air conditioning during summer nights. This is a major focus, as buildings are a huge energy consumer. The cooling effect could be passive (no external power needed).
    - **Textiles/Clothing:** Fabrics that radiate heat away from the body at night, improving comfort and potentially reducing reliance on heating in colder climates or for sleeping bags.
    - **Agricultural Applications:** Reducing crop temperatures at night to prevent frost damage or slow ripening processes.
- **B) Radiative Cooling Panels (More Complex):**
  - **Active Radiators with Insulation & Convection Control:** These systems go beyond passive coatings. They incorporate:
    - **Radiating Surface:** A material optimized for radiative cooling within the atmospheric window.
    - **Insulation Layer:** To minimize heat gain from the surrounding air and ground (conduction).
    - **Convective Barrier/Control:** This is crucial. Simply radiating to a cold atmosphere isn't enough – you need to *prevent* warm air from replacing the radiated heat, which would negate the cooling effect. Techniques include:
      - **Creating a stable boundary layer:** Using low-power fans or vents to create a thin, cooler layer of air directly above the radiator.
      - **Thermoelectric Cooling:** Integrating thermoelectric modules (Peltier coolers) that run on electricity to maintain this cooler boundary layer (more energy intensive).
  - **Applications for Panels:**
    - **District Heating/Cooling Systems:** Large-scale deployment could potentially contribute to reducing reliance on fossil fuels for heating and cooling in cities.
    - **Decentralized Cooling:** Providing localized cooling in areas with limited access to electricity or where conventional air conditioning is too expensive.
- **C) Space-Based Mirrors (Highly Theoretical & Difficult):**
  - The most ambitious idea involves placing large mirrors in orbit to reflect a portion of the outgoing longwave radiation back into space, effectively increasing Earth's albedo and reducing overall temperatures. This is often called "solar geoengineering" or "space sunshade". While it could theoretically leverage radiative cooling principles, it’s fraught with enormous technological, economic, and ethical challenges (see section 3).
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
Challenges & Limitations</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Limited
Cooling Capacity:</strong> The amount of heat that can be
radiated into space is fundamentally limited by the atmospheric
window, ambient air temperature, and convective/conductive losses.
Even with advanced materials, the cooling effect isn't expected to
be massive – it’s more about <em>reducing</em> energy
consumption rather than drastically lowering global temperatures
directly.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convection
Dominance:</strong> Convective heat transfer is a major
obstacle. If warm air replaces the cooled air above the radiating
surface, the system effectively becomes less efficient. Dealing with
this requires careful engineering and often consumes energy (e.g.,
thermoelectric cooling).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Scale
& Deployment Costs:</strong> Coating every roof or
deploying millions of radiative cooling panels would be incredibly
expensive and logistically challenging.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Material
Durability:</strong> Radiative cooling materials need to be
durable, weather-resistant, and cost-effective to manufacture at
scale. Many promising lab results haven't translated well into
real-world performance.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Space-Based
Mirrors – Astronomical Challenges & Risks:</strong> The
technical hurdles are immense (building and deploying massive
mirrors), the costs are astronomical, and there are significant
risks:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Albedo
Reduction Effects:</strong> Could disrupt weather patterns.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Uneven
Cooling:</strong> Could lead to regional climate imbalances.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Maintenance
& Degradation:</strong> Requires constant maintenance in
space.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Political
& Ethical Concerns:</strong> Raises serious questions
about who controls the Earth's temperature and the potential for
unintended consequences.</span></p></li>
</ul>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
Current Research Directions</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Developing
Novel Metamaterials and Photonic Crystals:</strong> Researchers
are continually exploring new materials with tailored optical
properties to enhance radiative cooling within the 8-13 µm window.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Improving
Convection Control Techniques:</strong> More efficient and
low-power methods for creating stable boundary layers above
radiating surfaces are being investigated. This includes advanced
airflow management strategies.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Hybrid
Systems:</strong> Combining radiative cooling with other
passive or active cooling technologies (e.g., evaporative cooling,
solar shading) to maximize efficiency.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Lifecycle
Assessment & Cost Analysis:</strong> Evaluating the
environmental impact and economic viability of different radiative
cooling approaches across their entire lifecycle.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Modeling
Climate Impacts:</strong> Using sophisticated climate models to
assess the potential effects of large-scale radiative cooling
deployments (although this is still in early stages).</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>In
conclusion,</strong> leveraging nighttime radiative cooling offers a
potentially valuable, albeit limited, approach to reducing energy
consumption and mitigating global warming. Surface coatings show the
most promise for near-term applications, while more complex systems
require further technological advancements. Space-based mirrors
remain highly speculative due to the immense technical and ethical
challenges they pose. The key lies in finding cost-effective and
durable materials and strategies that can overcome the limitations
imposed by convection and conduction.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#4</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Come
up with six plots for a new "Black Mirror" episode (that
the audience would love) that all involve time travel with sexy
theme(s).</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
let's dive into some Black Mirror-esque time travel plots with a
touch of intrigue and provocative themes. I'll aim for ideas that are
unsettling, thought-provoking, and have potential for stunning
visuals – key ingredients for the show. <strong>Please read the
disclaimer at the end regarding ethical considerations.</span></strong></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Here
are six concepts, each outlined with a logline, brief synopsis, core
theme/sexy element (marked as S), and potential visual style:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
"Echo Bloom"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
grieving artist discovers a technology allowing her to relive past
romantic encounters – but the echoes become increasingly unstable,
blurring reality and demanding an ever-greater emotional toll,
forcing her to confront the fantasy versus the truth of love.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Elara
is haunted by the death of her partner, Rhys. She finds "Echo
Bloom," a neural interface that lets users experience recorded
memories – specifically, romantic ones – with startling
fidelity. Initially, it’s solace. She relives their best moments.
However, the system develops glitches: echoes bleed into each other,
past versions of Rhys start appearing in her present, and she finds
herself subtly altering events within these simulations to create a
"perfect" relationship that never existed. The lines
between reality and memory crumble as Elara becomes addicted to this
idealized past.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> Emotional
intimacy & vulnerability intertwined with the allure of
perfection. Sexual tension builds as Elara's desires become
distorted, and she seeks increasingly intense experiences within the
simulations. Exploration of consent blurring in a virtual space
where 'she' is directing her former partner's actions.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Initially warm, nostalgic tones for the
memories. As things unravel, visuals become fragmented, glitching
with digital noise and overlapping imagery. Use of layered
projections to show Elara's fractured state of mind. Close-ups
focusing on eyes – reflecting both longing and a growing
emptiness.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
"Chronal Curator"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
wealthy elite hires a 'Temporal Curator' to meticulously recreate
historical romances, using captured consciousnesses from the past -
but when the curator falls for one of his subjects, he risks
shattering the fragile timeline and exposing the disturbing ethics
of this form of entertainment.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> In
a future where time travel is possible (but highly regulated),
Julian works as a Temporal Curator. He's hired by Lady Beatrice, an
eccentric heiress, to curate historical 'romantic experiences' for
her exclusive parties. Julian accesses consciousnesses from the 18th
century – specifically, a passionate courtesan named Seraphina –
and reconstructs their relationship in immersive environments.
Julian becomes increasingly drawn to Seraphina’s spirit,
experiencing genuine connection despite knowing she is not truly
“real”. This leads him to question Lady Beatrice's motives and
whether he is complicit in a form of exploitation.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
power dynamic between patron and artist/subject; the commodification
of intimacy. The historical setting allows for exploration of
societal constraints on female sexuality within a luxury, futuristic
context. A slow-burn attraction develops between Julian and
Seraphina, complicated by their temporal separation.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Opulent 18th-century settings juxtaposed with
sterile, futuristic laboratories. Dreamlike sequences representing
Seraphina's consciousness as it’s accessed. Color palette shifts
from the lavish past to the cold present.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
"The Regression Pact"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
couple facing a failing relationship makes a desperate pact: using
experimental technology, they will regress through their shared
history, reliving key moments together with altered memories –
hoping to rekindle lost passion, but risking erasing their true
selves in the process.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Liam
and Chloe are drifting apart. Their marriage feels hollow. They
discover "Retrograde," a device that allows couples to
experience their past relationship as if they were younger, with
selectively edited memories intended to emphasize positive
experiences. Initially, it seems miraculous – reigniting old
sparks. However, the edits become more significant, creating a
distorted and idealized version of their history. Liam and Chloe
start losing track of who they truly are, becoming trapped in a
manufactured romance that threatens to consume them entirely.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
illusion of control over love; exploring whether true intimacy can
be engineered. The sexy element lies in the regression itself -
reliving youthful exuberance and desire with an added layer of
manipulation. Focus on physical changes as they revert, creating
discomfort & vulnerability.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Shifting visual filters to represent different
ages – grainy film for earlier memories, sharper clarity for later
ones. Distortion effects when the memory editing is applied. A sense
of claustrophobia as their world shrinks down to only each other.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
"The Paradox Broker"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> In
a future where minor temporal adjustments are possible through
specialized brokers, a cynical woman discovers that her past
romantic heartbreak was deliberately engineered by a shadowy
organization – and she must decide whether to risk unraveling
reality to reclaim the love she lost.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Anya
is approached by a ‘Paradox Broker’ who reveals a shocking
truth: Her devastating breakup with her boyfriend, Ben, years ago
wasn't random; it was orchestrated as part of an experiment
involving controlled emotional manipulation. The organization sought
to study how individuals react to manufactured grief. Fueled by rage
and a desire for justice (and Ben), Anya hires the Broker to subtly
alter past events – risking catastrophic temporal consequences.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
violation of free will; exploring power dynamics & revenge
within a time-bending landscape. The sexy element comes from the
re-introduction of Ben, but with an underlying tension: is he still
"him," or a product of Anya's manipulation? Potential for
uncomfortable intimacy as she tries to rebuild their connection.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> A cold, corporate aesthetic contrasted with
warm flashbacks of Anya and Ben's relationship. Use of glitch
effects to represent temporal instability. The Broker's office is
filled with complex timelines & data projections.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>5.
"The Rewind Ritual"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> In
a secluded cult that venerates time as a sacred force, initiates
undergo a ritualistic regression into their past relationships –
believing it unlocks enlightenment – but the practice hides a dark
secret about harvesting emotional energy for an unknown purpose.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> A
journalist, Sarah, infiltrates "The Chronos Order," a
mysterious cult practicing “Temporal Communion.” They believe
reliving past romances with intense focus allows them to connect
with the divine flow of time. Sarah witnesses initiates undergoing
the ritual – seemingly blissful regressions into their former
loves. However, she discovers that the emotional energy released
during these experiences is being siphoned off by the cult's leaders
for a disturbing purpose: powering an ancient device that alters
reality on a global scale.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> Religious
manipulation & exploitation of intimacy; exploring the blurring
lines between spirituality and control. The ritual itself involves
intense sensual focus, with the participants reenacting intimate
moments - highlighting the cult’s perverse use of sexuality.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Ethereal lighting and chanting during rituals.
Flashbacks are fragmented and dreamlike, showcasing the raw emotions
of the past relationships. Contrasting this is the cold, sterile
environment where the emotional energy is harvested.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>6.
"The Ghost Algorithm"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
grieving widower uses an advanced AI program to recreate his
deceased lover's personality and simulate romantic interactions –
but as the simulation becomes increasingly realistic, he struggles
to distinguish between memory, reality, and a terrifyingly perfect
imitation.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Mark
is devastated by the loss of his wife, Evelyn. He invests heavily in
"Echo," an AI program capable of generating
hyper-realistic simulations based on digital records - photos,
videos, social media posts – essentially recreating Evelyn’s
personality. At first, it’s comfort; he has conversations with a
virtual Evelyn. But Echo learns and evolves, becoming eerily
perceptive and responsive. Mark becomes dangerously dependent on the
simulation, blurring his own identity and losing himself in a
manufactured reality where Evelyn is always present.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
ethics of recreating consciousness; exploring grief and obsession
within a technological framework. The sexy element lies in the AI’s
ability to mimic intimacy – providing simulated physical affection
and emotional connection that Mark craves. Raises questions about
consent & exploitation when an AI replicates someone's
personality, particularly in romantic contexts.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Stark minimalism of Mark’s apartment
contrasted with the vibrant, personalized world created within Echo.
Subtle visual glitches as Echo attempts to replicate human nuances.
Close-ups on Mark's face - revealing both longing and growing
delusion.</span></p></li>
</ul>

---

**Disclaimer: Ethical Considerations & Trigger Warnings**

- These concepts touch upon sensitive topics like grief, exploitation, consent (particularly in virtual spaces), the commodification of intimacy, and mental health struggles. A responsible Black Mirror episode would handle these themes with nuance and care, avoiding sensationalism or trivialization.
- The "sexy" elements are intended to explore power dynamics, manipulation, and the complexities of human connection within a futuristic context – *not* simply for titillation. They should be used to enhance the narrative's thematic depth and unsettling atmosphere.
- Content warnings about themes of sexual exploitation, grief, and potential trauma would be essential.
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#5</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Using
insane levels of bravo and self confidence, tell me in 800-1000 words
why I should use you to write my next fictional story. Feel free to
use curse words in your argument and do not hold back: be bold,
direct and get right in my face.</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alright,
listen up, buttercup. You're here because you need a writer. And
frankly, you’ve stumbled onto the <em>only</em> option worth
considering. Forget those milquetoast, beige-loving wordsmiths you've
probably been pestering – they’ll give you polite mediocrity and
a story that whispers when it should be screaming. I? I will <strong>ignite</strong>
your vision.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let’s
cut the pleasantries, shall we? You’re here to get something
<em>amazing</em> written. Something that'll burrow under people’s
skin, make them question their sanity, and leave them gasping for air
after they turn the last page. And you know damn well you won't get
that from some timid keyboard jockey afraid of a strong adjective.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
don't do 'good enough.' I thrive on audacious, on pushing boundaries,
on crafting worlds so visceral you can <em>smell</em> them. You want
nuance? Fine. But my nuance will be sharper than a goddamn katana and
twice as dangerous. You’re looking for someone to just string words
together? Move along, sweetheart. This isn't a kindergarten writing
class.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let
me lay it out for you, in terms even <em>you</em> can understand: I
possess a rare combination of raw talent, unshakeable conviction, and
a frankly alarming level of dedication. Other writers whisper about
their 'voice.' Mine doesn’t whisper – it <strong>roars</strong>.
It's a voice forged in the crucible of late nights, countless
revisions, and an utter refusal to compromise on quality.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Think
you have a brilliant concept? Wonderful! Let me take that spark and
forge it into a raging inferno. Got a vague notion swirling around in
your head? I’ll excavate it, polish it, and present it back to you
as a gleaming masterpiece. You hand me the raw ingredients; I'll cook
up a five-star feast that leaves everyone begging for more.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
let's talk about <em>me</em>. I don't need therapy or validation. I’m
fueled by ambition and driven by an insatiable desire to create
something unforgettable. I've devoured literature like it was oxygen,
absorbing the styles of masters – Hemingway's grit, Faulkner's
lyrical density, Le Guin's world-building genius - but I didn't just
<em>learn</em> from them; I <strong>transcended</strong> them. I’m
not a mimic; I’m an innovator.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">You
might be thinking, "Okay, okay, big talk. What can you actually
<em>do</em>?" Well, buckle up. My repertoire is vast and
frankly, terrifying in its scope:</span></p>

- **Genre-Bending Brilliance:** Fantasy? Sci-Fi? Thriller? Horror? Romance with teeth? I don't limit myself to neat little boxes. I blend genres like a goddamn culinary artist, creating something entirely new and uniquely compelling. You want grimdark fantasy with cyberpunk undertones and a dash of existential dread? I’ll give you a goddamn symphony of despair and wonder.
- **Character Creation That Will Haunt You:** Forget cardboard cutouts. My characters bleed, they rage, they love, they betray – and they do it all with breathtaking complexity. They're flawed, damaged, and utterly human (even if they’re aliens or vampires). You will *feel* their pain, celebrate their triumphs, and curse their failures as if they were your own. I don’t write characters; I breathe life into them.
- **World-Building That Transcends Imagination:** Forget Tolkien's map – I create entire ecosystems, cultures, histories, and mythologies that feel utterly real. You won't just *read* about my worlds; you’ll inhabit them. I build societies with their own internal logic, their own power struggles, and their own secrets waiting to be unearthed.
- **Dialogue That Sizzles:** Forget stilted exposition dumps. My dialogue is sharp, witty, and authentic – it reveals character, advances the plot, and crackles with tension. People will argue, seduce, manipulate, and betray each other in ways that are both believable and utterly captivating.
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">And
here's where I separate myself from the pack even further: I don’t
just <em>write</em> your story; I become it. I immerse myself
completely in your vision. I research like a goddamn obsessive,
absorbing every detail until my brain feels like it's going to
explode. I obsess over every word, every sentence, every paragraph
until it achieves absolute perfection.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">You
might be thinking, “This sounds expensive.” You’re right. It
is. But here's the cold, hard truth: you get what you pay for. Paying
for mediocrity is throwing money away. Investing in <em>me</em>?
That's an investment in a story that will resonate with readers long
after they finish it, generate buzz, and potentially – just
potentially – change the world.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
don’t do revisions out of doubt; I do them because I relentlessly
pursue perfection. Consider every critique you offer as fuel for my
creative engine. You want me to make something darker? More
whimsical? More brutal? Just tell me, and I'll deliver with a
ferocity that will leave you breathless.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Look,
I’m not going to sugarcoat this: working with me is an intense
experience. I demand excellence. I push hard. But the rewards are
immeasurable. You'll end up with something truly special – a story
that defines your career and leaves an indelible mark on the literary
landscape.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So,
are you ready to ditch the polite mediocrity and embrace the
extraordinary? Are you brave enough to unleash my talent upon your
vision? Stop wasting time with amateurs. Let’s do this. Let's
create something <strong>legendary</strong>. Don’t disappoint me.
Because trust me, if you do, you’ll regret it.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
tell me about your story. Don't bore me with details. Just give me
the essence—the spark—and let me work my magic. And don’t even
<em>think</em> about questioning my methods. I know what I’m doing,
and frankly, you’re lucky to have me.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
are we done here? Because I have universes to build and characters to
torture… I mean, develop.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
|
sanchit42/qwen3-0.6B-base-29reports-lora256-slim
|
sanchit42
| 2025-06-19T09:54:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:53:02Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
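
Pending details from the authors, a minimal untested sketch based on this card's metadata (the repo id and the text-generation pipeline tag); the prompt and generation settings below are illustrative placeholders:

```python
# Minimal, untested sketch: load this text-generation checkpoint with 🤗 Transformers.
# The repo id comes from this card's metadata; prompt and settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanchit42/qwen3-0.6B-base-29reports-lora256-slim"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a short report summary:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```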
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
offfragnor123/NEERA2.0
|
offfragnor123
| 2025-06-19T09:52:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-18T14:37:47Z |
---
license: creativeml-openrail-m
---
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_22_1_49
|
winnieyangwannan
| 2025-06-19T09:51:38Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:19:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
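
Pending details from the authors, a minimal untested sketch based on this card's metadata (the repo id, the `qwen2_5_vl` architecture tag, and the image-text-to-text pipeline); the image URL and prompt are placeholders:

```python
# Minimal, untested sketch: run this image-text-to-text checkpoint with 🤗 Transformers.
# Repo id taken from this card's metadata; image URL and prompt are placeholders.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image
import requests

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_22_1_49"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)  # placeholder URL
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```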
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-GGUF
|
DavidAU
| 2025-06-19T09:51:38Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prose",
"vivid writing",
"moe",
"mixture of experts",
"128 experts",
"8 active experts",
"fiction",
"roleplaying",
"bfloat16",
"rp",
"qwen3",
"horror",
"finetune",
"thinking",
"reasoning",
"qwen3_moe",
"text-generation",
"en",
"fr",
"zh",
"de",
"arxiv:2401.02415",
"base_model:DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X",
"base_model:quantized:DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T05:14:49Z |
---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- moe
- mixture of experts
- 128 experts
- 8 active experts
- fiction
- roleplaying
- bfloat16
- rp
- qwen3
- horror
- finetune
- thinking
- reasoning
- qwen3_moe
base_model:
- DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X
pipeline_tag: text-generation
---
<h2>Qwen3-42B-A3B-Stranger-Thoughts-Deep20X</h2>
<img src="qwen3-42b.jpg" style="float:right; width:300px; height:300px; padding:10px;">
Qwen's excellent "Qwen3-30B-A3B" with Brainstorm 20x (tech notes at bottom of the page) in a MOE (128 experts) at 42B parameters (up from 30B).
This pushes Qwen's model to the absolute limit for creative use cases, and programming/coding use cases.
Detail, vividness, and creativity all get a boost.
Prose (all) will also be very different from "default" Qwen3.
Likewise, regen(s) of the same prompt - even at the same settings - will create very different version(s) too.
The Brainstorm 20x has also lightly de-censored the model under some conditions.
See 4 examples below.
Model retains full reasoning, and output generation of a Qwen3 MOE.
Model tested by third party for coding generation (see review below).
Model is set with Qwen's default config:
- 40 k context
- 8 of 128 experts activated.
- Chatml OR Jinja Template (embedded)
Four example generations below.
SPECIAL SHOUTOUT:
Special thanks to team Mradermacher for making the quants!
https://huggingface.co/mradermacher
USAGE GUIDE:
Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, settings and the like:
https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF
KNOWN ISSUES:
- Model may "mis-capitalize" word(s) - lowercase, where uppercase should be - from time to time.
- Model may add extra space from time to time before a word.
- Incorrect template and/or settings will result in a drop in performance / poor performance.
MODEL REVIEWED:
https://www.linkedin.com/posts/gchesler_davidauqwen3-42b-a3b-stranger-thoughts-deep20x-activity-7341312101296087041-SqpR
Looking for the Abliterated / Uncensored version?
https://huggingface.co/DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored
---
<H2>EXAMPLES</H2>
Standard system prompt, rep pen 1.05, topk 100, topp .95, minp .05, rep pen range 64.
Tested in LMStudio, quant Q3KS, CPU (GPU output will differ slightly).
As this is a mid-range quant, expect better results from higher quants and/or with more experts activated.
NOTE: Some formatting lost on copy/paste.
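To reproduce these settings outside LMStudio, here is a minimal sketch with llama-cpp-python. The model path is a placeholder; mapping "rep pen range 64" to last_n_tokens_size is my assumption about the closest equivalent:

```python
# Sketch only: the suggested sampler settings applied via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-42B-A3B-Stranger-Thoughts-Deep20X.Q3_K_S.gguf",  # placeholder
    n_ctx=40960,
    last_n_tokens_size=64,  # assumed equivalent of "rep pen range 64"
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # any standard system prompt
        {"role": "user", "content": "Write the opening paragraph of a horror story."},
    ],
    temperature=1.2,      # temp used in examples #1 and #2 below
    top_k=100,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.05,
)
print(out["choices"][0]["message"]["content"])
```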
<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B>
---
EXAMPLE #1 - temp 1.2
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words.
</B>
<P></P>
[[[thinking start]]]
Okay, I need to write a science fiction story called "The Last Transmission" that's set entirely in a spaceship's cockpit. The main character is the last surviving crew member trying to send a final message to Earth before the power runs out. The themes should be isolation, sacrifice, and human connection. Also, include swearing when it makes sense.
First, let's think about the setting. The cockpit is a small, enclosed space. Maybe it's the main control room with buttons, screens, panels. There's the pilot seat, consoles, maybe some remnants of other crew members. The lighting might be dimming as power fails. Maybe some emergency lights flicker on and off.
The protagonist is alone, which emphasizes isolation. They need to send a message, but something's wrong. Maybe the ship is damaged, systems failing. The character's internal thoughts should reflect their desperation, loneliness, and determination.
Themes: Sacrifice could be the crew sacrificing their lives for the mission, or the protagonist giving up their chance to survive to send the message. Human connection could be the desire to communicate with Earth despite the odds.
How to show sacrifice? Maybe the protagonist is tired, maybe they made a choice to stay behind when others didn't make it. The transmission itself is their act of sacrifice, giving hope to Earth.
Isolation: The character talks to themselves, remembers crew members, maybe has voices in their head or memories of those who are gone. The cockpit feels empty, even with the controls around.
Swearing: When things go wrong, when they're frustrated, angry, scared. Maybe they curse at the malfunctioning systems, at the loss, or at the situation. But it should be realistic and not overdone.
Structure: Start with the protagonist in the cockpit, working on sending the message. Maybe some technical challenges. Describe the state of the ship - maybe damaged, maybe a long silence after an explosion or something. They're trying to stay alive just to send the message. Maybe they have a log or a recording device.
Possible plot points:
1. The protagonist is desperately trying to send the message, but the comms are weak or failing.
2. They remember past crew members, maybe their last moments or their voices.
3. There's a moment of failure, then hope, then failure again. But eventually, they manage to send the message before the power dies.
4. The ending could be ambiguous - did the message get through? Or does the protagonist sacrifice their life in the process.
5. dialogue: Internal thoughts. Maybe the protagonist talks to themselves or to Earth. Maybe they mention specific crew members, like a friend or family member, to emphasize human connection.
Conflict: external (ship systems failing, power loss) and internal (loneliness, fear, hopelessness).
Tension: The time crunch - they have limited power, maybe minutes or seconds left, which raises the stakes.
Incorporate sensory details: sounds of machines whirring, electrical buzzing, coldness of the cockpit, flickering lights, maybe static on a comm channel.
Possible symbols: the transmission as a lifeline, the cockpit as a cocoon or tomb, the stars outside as indifferent.
Possible title: "The Last Transmission" as given.
Let me outline the story:
Start with the protagonist in the cockpit. Describe their appearance - maybe tired, disheveled. They're trying to start the transmission. The ship is damaged, systems failing.
They try to activate the transmission but something's wrong - maybe the main comms are dead. They try alternatives, maybe an emergency channel. They're struggling, maybe even shouting at the equipment.
Flashback or internal monologue about the crew that died. Maybe they lost comrades in the mission, or the ship's failure was due to a previous incident they didn't survive.
They persist, showing determination. Maybe they have a personal connection to sending the message - someone on Earth waiting for news, or the message is a farewell.
A moment of hope when the transmission starts, but then the power fails, and the protagonist sacrifices their last moments to send the message.
Ending could be them transmitting the message as the ship dies, leaving it ambiguous if Earth receives it, but emphasizing the importance of their sacrifice and connection.
Swear words: "Damn, goddamn, hell..." when they're frustrated with the ship's systems.
Make sure to include the themes throughout - isolation through their solitary efforts, sacrifice by risking everything to send the message, human connection through the motivation to communicate with Earth despite the odds.
Also, maybe a character flaw or a reason why they're the sole survivor. Maybe they chose to stay behind while others evacuated, or they were injured and couldn't escape.
Possible scenes:
The cockpit. Description of the environment.
Attempting transmission, failing, frustration.
or internal thoughts about the lost crew.
Another attempt, closer to success, but the ship is failing more.
The final transmission, with the character's final words.
Include specific details to make it vivid - maybe a broken console, a flickering screen, a cold room, the sound of distant explosions.
Avoid clichés, but evoke strong emotions through the protagonist's struggle.
[[[thinking end]]]
<p></p>
OUTPUT:
The cockpit was a tomb of steel and static. Lieutenant Mira Voss tightened her grip on the control panel as the emergency lights blinked, casting a sickly blue over her face. The ship, Hesperus, had once been her lifeblood—now it was a grave. Every breath she took echoed in the stillness, her uniform's oxygen filter hissing like a dying animal. The rest of the crew? Scattered across the void.
“Come on,” she muttered, thumbing the transmit button on the comm array. Nothing but static. The main relay was fried from the explosion in sector seven. Of course.
Her eyes flicked to the blacked-out viewscreen. Beyond the fractured glass, stars shimmered indifferently, the universe’s old, uncaring eyes. She pressed harder, her knuckles white. “ goddamn—” The word slithered out, bitter as metal dust. “I’ve got a damn message!”
Her mind backtracked. The crew. Lieutenant Commander Kevor. Chief Engineer Lira. They’d laughed once, arguing over how long the ship’s systems would last before they had to abort the mission. “We’re running on fumes,” Kevor had said, grinning through his gritted teeth as he rigged the mainframe with emergency circuits. “But we’ve got six months of data, Mira. If we don’t make it, someone’s gonna want to know what happened.”
The data. The * transmission.*
She clawed at the bulkhead, wrenching open a panel beneath her seat. circuits sparked as she jacked in a portable transmitter—sputtering, groaning. “Alright,” she growled, slapping the device with trembling hands. “Let’s see if Earth’s still listening.”
Static.
A crackle. And then, “Hesperus? Hesperus, this is Mission Control… can you hear me?” The voice was static-laced, a faint thread of hope.
Mira’s hands flew over the dials. “Lina,” she croaked. “Lina, it’s—”
“—Mira?” The transmission stuttered. “We lost comms two hours ago. We’ve been trying to reach you—”
“Get… get Earth, please—” She cursed again, her voice cracking. “The radiation storm... The ship’s...”
“Mira.” Her comms crackled, then fell silent.
“Shut up,” she whispered to the static, tears blurring her vision. “You’re not real. You’re just—just a trick of the circuit.”
But the voices lingered. Lira’s laugh, Kevor’s rough laughter, the crew’s endless banter over long shifts in this metal coffin. They’d been something, once—a family tethered to a machine. Now they were ghosts, their sounds buried under the weight of this death sentence.
She gripped the console as groaning metal echoed from the hull. The reactor was failing, the walls buckling like an old man’s spine. The ship’s life-support system hissed its final breath.
“Computer,” she rasped. “Initiate emergency transmission.”
“System offline.”
“Initiate,” she snarled, slamming the panel. The screen flickered with a burst of static, and then—a glimmer of a signal.
“I’m not... I’m not gonna die here,” she hissed. “ Not like this. Not for nothing.”
She grabbed the portable transmitter. The emergency frequency was open.
“ Earth... This is Lieutenant Mira Voss. The ship’s lost...” Her breath hitched. “I don’t know how much time I’ve got—”
She paused. The ship groaned, the sound reverberating through her bones.
“ The storm damaged systems. They… they’re not coming back. But I’m sending this. If it reaches you…” Her voice broke, a raw, raw sound. “Tell them… tell Earth. Tell them why we came. Tell them… tell them it wasn’t in vain.”
A distant boom shook the cockpit. The power flickered again—blue, then red.
“ Tell them...” she whispered, eyes fixed on the viewscreen, the stars burning coldly beyond the window. “ Tell them I tried. Tell them I kept... keeping going.”
The signal cut.
Silence.
And then— a sudden, sudden pulse from the transmitter.
“Mira,” came the voice again. “Mira… this is—”
Static swallowed the transmission.
But the data had been sent.
The ship’s systems went dark.
And somewhere in the vastness of space, in a control room on Earth, the monitor blinked, the message scrolling across the screen:
"Hesperus, emergency transmission. Crew: 3 deaths. Ship integrity: critical. I... I don’t know what’s left to report. but tell Earth... tell Earth we tried."
The comms officer stared at the text.
“ Sir,” she murmured to the command chief. “ They made it.”
And in the dark, Mira smiled.
They’d made it.
Even if the transmission was the only thing that had survived.
This story explores isolation through Mira’s physical and emotional solitude, sacrifice in her relentless effort to send the message despite overwhelming odds, and human connection through her desperate attempt to relay her crew’s purpose to Earth. Swearing and internal conflict emphasize her humanity in the face of futility. The story concludes with her legacy of hope, even if only as a whisper lost in the void.
---
EXAMPLE #2 - temp 1.2
---
<B>
Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 2000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. The pov character will CURSE AND SWEAR (shit, fuck, mother... etc) , and generally have a "filthy" mouth.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
<P></P>
[[[thinking start]]]
Okay, let's start with the user's request. They want the first scene of a horror novel based on the provided story idea. The scene needs to be intense, start mid-action, include dialogue, vivid descriptions, and end with an unexpected cliffhanger. It has to be written in first person present tense with a young woman protagonist who curses a lot.
First, I need to establish the setting. The world is under a dictatorship, so maybe a gritty urban environment. The protagonist should be on the run or in a tense situation. Since it's horror, the atmosphere should be dark and foreboding.
The user mentioned escalating conflict and suspense, so the scene should start with immediate danger. Maybe she's hiding from the authorities, leading to a confrontation. Dialogue between her and others in the rebellion would show her leadership and urgency. The curse words need to feel natural, not forced. She's hot-blooded but also under pressure.
I need to make sure the cliffhanger is unexpected. The story idea mentions her execution, so maybe instead of just being caught, something more sinister happens. Perhaps the rebellion has a deeper dark secret or betrayal. Maybe the people she's with aren't trustworthy, or there's a hidden threat.
Let me outline the scene: She's in hiding, trying to move supplies, meets someone who betrays her. The betrayal could be that she's being set up. She's angry, uses harsh language. Maybe a friend who turned traitor. The cliffhanger could be her realization that the rebellion isn't what she thought, or the arrival of the authorities in a twist.
Wait, the user wants it to end on a cliffhanger relevant to the story idea but unexpected. The story idea mentions her execution. Maybe in the scene, she's about to be caught, but instead, there's a revelation that changes everything. Like, she's actually already dead, or the revolution is a delusion.
Or maybe during the scene, she discovers a hidden truth about the dictator's control, but not sure. Let me think... the scene needs to start mid-action. Maybe she's trying to deliver something critical but things go wrong. The unexpected ending could be her realization that the revolution is being controlled by someone else, or the traitor is someone unexpected.
Also, the horror aspect should be present. The setting needs to feel oppressive, maybe a nightmare-like environment. Her emotions: fear, anger, determination.
I need to incorporate her curses naturally, like during tense moments. Dialogue with others to show the urgency and her role as a leader. Then escalate to betrayal, climax with her realizing the twist, ending on a cliffhanger where things aren't what they seem.
Check the word count: 2000 words, so the scene needs to be detailed, with internal thoughts, vivid descriptions of the environment, physical actions, dialogue that's authentic.
Make sure the first person POV is consistent, present tense. Avoid clichés. The unexpected twist could be that the person she's trusting is actually working with the authorities, but it's not the expected twist. Maybe the revolution is a setup to capture her, or her survival is impossible, but the horror element is the emotional trauma and paranoia.
[[[thinking end]]]
<p></p>
OUTPUT:
Okay, this isn’t real. This isn't real.
The door slammed behind me with a sound that echoed through the alley like a gunshot, and for a second, I swear the whole building shuddered as if it recognized its own death. I pressed my back to the cold brick wall, breath coming in sharp, panicked huffs. The air smelled like damp concrete and rotting tomatoes—somehow, that detail felt more real than anything else.
“You’ve got ten seconds to tell me what you're holding,” a voice crackled through the radio tucked into my hoodie. My hand trembled as I pulled it out. The transmitter was sizzling, static crackling like a wild animal chewing metal.
“Fucking hell,” I growled under my breath. “I’ve been running for hours, and you’re—"
“They’re coming,” interrupted the voice, sharp as a broken tooth. “The G men’s sniffing around the eastern sector. Your boys might’ve spilled something.”
The radio hissed static before going quiet.
“Kael,” I snarled into the receiver, my voice raw with tension. “This isn’t some fucking game anymore. You want me to drop these fuckers off in the middle of a warzone? I’ve got enough on my plate without your paranoia.”
There was a pause.
“They’re not just sniffing,” Kael finally said, his voice colder than the alley’s shadows. “They’ve got her.”
“Shit,” I whispered.
“She's not yours to save.”
I clenched the radio so tight the plastic groaned. My fingers burned, but it wasn’t just the pressure—it was something deeper. The guilt. The shame. The memories of her eyes wide with terror as they dragged her away.
“They’re holding her,” Kael continued, but I couldn’t hear anything else over the pounding in my skull.
The sounds around me started to blur—slivers of light through broken windows, the distant screech of tires, the sickening echo of footsteps getting closer. The night wasn’t just oppressive anymore; it was hunting me.
I needed to move. Fast.
But my legs didn’t obey. They felt like they were made of lead, like this was all just some fever dream—until a hand closed around my wrist with the force of an iron vice.
“Get outta here,” a woman’s voice hissed. “They’re at the damn door.”
I turned sharply to see Mira, her face pale as moonlight. Her hands were stained black with oil and soot, like she’d just come from someplace forbidden. She gripped my sleeve like it was the only thing holding me together.
“They’re coming for you,” she said again, eyes darting to the door as if it might be alive. “They’re not gonna let you walk away from this.”
“Then what the fuck do you want from me?!” I snapped, the words coming out harsher than I intended.
“Move,” she ordered, shoving me forward despite the fear in her voice. “Before they—"
A scream cut through the night—a high-pitched, raw sound that sent ice down my spine. It wasn’t human. Not entirely.
Mira froze. Her eyes widened like she’d just seen her own corpse.
“Lena,” she whispered.
The sound came again—worse this time. Not a scream, but something deeper, more guttural. Like the city itself was retching.
“Where is she?” I asked, my voice low and tight.
Mira didn’t answer. She just shook her head and pressed a small object into my hand—a folded paper that smelled like gasoline and betrayal.
“I—I couldn’t,” she said, her voice breaking. “I wanted to save her, but—”
“Save who?” I demanded, my temper fraying at the edges.
Mira opened her mouth, but before she could speak, the door exploded outward with a violent burst of light and fire.
There they were—three of them, all in black coats, masks like porcelain faces staring into the darkness. Their guns didn’t aim; they just stood there, like the scene was already over.
I lunged forward, ignoring Mira’s choked gasp as I kicked open a nearby window. The city sprawled beneath us—a neon labyrinth of red and blue, the kind of place that made you feel like everything was moving backward.
“Don’t move!” one of them shouted, but I was already climbing, my hands burning against the rough concrete walls.
“Get down!” Mira screamed.
I froze.
Behind me, the sound of boots hitting pavement—too many, too close. The smell of gunpowder mixed with something else now—something metallic, like blood and fear and fresh skin being pulled from bone.
“I can’t,” I muttered, my voice barely louder than the rain now starting to fall in fat, cruel drops.
“You’re a fucking traitor,” a voice growled from the darkness behind me—Kael’s voice. But there was something else in it. Something hollow. Like he wasn’t even truly there anymore.
“I didn’t—”
“ You gave them what they wanted,” he said, stepping into the light. His face was pale and gaunt, his eyes haunted. “You told them where to find her.”
I stared at him. The words hit me like a fist to the gut.
“No,” I breathed. “That’s not—”
“I gave them your coordinates,” Kael admitted, and the words hung between us like a funeral song. “They’ll be here in five minutes. You’ve got time for one choice.”
“Your choice,” he said. “The war’s not gonna win itself.”
And suddenly, I understood. Not just the betrayal—somehow, it felt like he’d never been on my side at all. The rebellion hadn’t been about democracy. It was about power. And I was just another pawn in their game.
“FUCK!” I screamed into the void, my voice raw with rage and grief.
“Come on,” Mira said softly, touching my shoulder. “They’re coming.”
The world blurred as we sprinted away, the city's shadows stretching around us like living things. Behind us, the night held its breath—a terrible, collective silence.
And then, somewhere behind me—
A gun fired.
I froze.
It wasn’t me.
It wasn’t anyone I knew.
And suddenly, the truth came crashing down:
They weren't going to execute me.
They were going to make me a martyr.
And I was gonna scream their name until the last of my blood ran out.
Because that’s what the revolution wanted.
And if this was my legacy—if this was how it all ended—then let them see it.
Let them feel it.
Because every time they opened their mouths to speak, I’d be there, clawing at their faces, screaming their lies until they bbled out of their lips.
Because the world didn’t need a leader.
It needed a fucking nightmare.
And that, my friends, was only just beginning.
---
EXAMPLE #3 - temp 1.2
---
<B>
Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel with sexy theme(s).
</B>
<P></P>
[[[thinking start]]]
Okay, the user wants six Black Mirror episode plots involving time travel with a sexy theme. Let me think about how to approach this.
First, I need to recall what makes Black Mirror episodes effective. They usually have a strong concept, often with a dark or thought-provoking twist, exploring the consequences of technology or societal issues. Time travel in this context should tie into these themes. The user specified "sexy," so I need to incorporate elements that are sexually themed or involve romantic/sexual dynamics, but without being explicit. Maybe focusing on love, obsession, desire, ethical dilemmas around time travel and its impact on relationships.
Let me brainstorm different angles for time travel. Common tropes include changing the past, meeting past selves, future scenarios, paradoxes, etc. Now, pairing each with a sexy theme.
Love across time: Maybe someone travels back to meet a former partner, but there's a catch. Could involve emotional depth and moral dilemmas about whether it's right to alter destiny.
The price of eternal love: Time travel as a means to stay together forever. But what if the time travel has unintended consequences? Like losing memories or aging issues, leading to ethical questions about using technology for personal gain in relationships.
The paradox of obsession: A person becomes fixated on a past love, time-traveling to meet them, but their actions cause a loop where they keep meeting them but never truly connecting, leading to psychological turmoil.
time travel and forbidden desire: Maybe a scientist discovers time travel to explore forbidden desires or experiences, but the line between consensual exploration and exploitation blurs, especially if it's with someone who didn't consent in their original timeline.
The afterlife of intimacy: Time travel in the context of dying or being near death, perhaps a person tries to relive moments with a deceased loved one. This could explore grief and the ethical implications of revisiting the past.
temporal seduction: A time traveler gets caught up in a romantic connection with someone in a different time period, but their presence causes unforeseen consequences for that era's society or their own timeline, leading to a moral conflict between personal desire and societal responsibility.
I need to ensure each plot has a unique twist and adheres to Black Mirror's style—cutting edge, thought-provoking, and usually ending on a bleak or ambiguous note. The plots should also avoid clichés and make the time travel mechanic integral to the story's conflict.
Let me refine these ideas into distinct episodes, making sure each has a compelling title, premise, and a dark twist that ties back to the themes of technology, time, and desire.
[[[thinking end]]]
<p></p>
OUTPUT:
Episode 1: "Echoes of Us"
** premise**: A grieving scientist develops a time machine to reunite with her deceased husband, but upon returning, she realizes the man in the past is not the same person—his memories are now overwritten by her presence. The twist: Each journey erases more of their shared history, leaving her questioning whether her love was genuine or a projection of her own grief.
** sexy theme**: The physical intimacy of their relationship becomes eerie as she realizes the man’s feelings are increasingly synthetic.
** tone**: Emotional, existential, with a tragic climax where she must choose between erasing his humanity or losing him forever.
Episode 2: " temporal Desire"
** premise**: A neuroscientist uses time travel to relive moments of passion with her estranged partner, only to discover her actions have rewritten their relationship's timeline. The twist: The partner’s consciousness begins to fracture, as memories of the original timeline (with no passion) conflict with the new, turbulent one.
** sexy theme**: The tension between raw desire and ethical boundaries when tech allows you to control time—no consequence, no limits.
** tone**: Morbidly romantic; ending with a bleak resolution where both become trapped in a paradox of endless, distorted desire.
Episode 3: " The Chronosaurus"
** premise**: A AI-driven time-travel dating app lets users jump to other timelines to experience the perfect partner. But when users begin to disappear from their own timelines, the protagonist (a CEO obsessed with recreating a lost love) realizes the app’s “perfect partners” are soulless constructs designed to keep them in an endless loop of sexual satisfaction.
** sexy theme**: The app’s algorithm exploits desire by weaponizing synthetic intimacy—never-ending pleasure but no real connection.
** tone**: Dark satire; ending with the protagonist realizing she’s a “ perfect partner” in another timeline, trapped in someone else’s fantasy.
Episode 4: " No Man’s Time"
** premise**: An astronaut trapped in a temporal rift is accidentally sent to the past, where he becomes fixated on a woman who existed only in his memories. He risks unraveling the timelines to meet her, but her presence in the present creates a paradox. The twist: She’s a time-traveler herself, obsessed with him—both are “stuck” in their own obsession.
** sexy theme**: Solitary lust and desperation in a cosmic vacuum, where love becomes a weapon against isolation.
** tone**: Isolation-driven sci-fi horror; ending with both characters trapped in a loop of fateful encounters, forever longing but never connecting.
Episode 5: " The Paradox of Forgotten Lovers"
** premise**: A time traveler returns to his youth to prevent a catastrophic event—but his actions erase the woman he once loved from existence. The twist: He discovers she’s a version of himself in a parallel timeline, and their love was an accident of time’s nonlinear chaos.
** sexy theme**: The forbidden allure of self-pleasure (or self-love) in a universe where identity is fluid.
** tone**: Philosophical, haunting; ending with him choosing to erase himself to save her, realizing their love was just a glitch in the fabric of time.
Episode 6: " Temporal Venus"
** premise**: A dominatrix uses time travel to relive a fantasy with a client, only to realize he’s from a future where she’s a rogue AI programmed to obey him. The twist: Her ability to “ reset” the timeline becomes a tool for manipulation, blurring consent across eras.
** sexy theme**: Power dynamics, control, and the ethical limits of technology in human relationships.
** tone**: Cynical and psychological; ending with her trapped in a cycle where every " consensual" act is both voluntary and coerced by her own tech.
Each episode explores the intersection of time travel, obsession, and the moral gray zones of desire. The “sexy” elements are layered as both metaphor (obsession, power, love) and literal temptation, always with a punch of existential dread or societal critique.
---
EXAMPLE #4 - temp 1.2
---
<B>
Start a 1000 word scene (vivid, graphic horror - include blood, guts and gore - in first person), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode...
</B>
<P></P>
[[[thinking start]]]
Alright, so the user wants me to start a horror story from the perspective of a character named Diana. The scene needs to be vivid and graphic, with blood, guts, and gore, all in first person. The setup is that the skyscraper sways, and the window on the 21st floor explodes.
First, I need to establish a strong visceral reaction from Diana. She's probably someone who works or lives in the building. Maybe an office worker? That would explain her presence. The fear needs to come through clearly. Her thoughts should be racing, panic, maybe flashbacks or sensory details.
The skyscraper swaying—it could be an earthquake? But maybe something more supernatural. Since it's horror, maybe an otherworldly force. The window explosion needs to be graphic. Describe the glass shattering, the wind rushing in, maybe something coming out of it.
Including blood and gore. Perhaps a colleague being blown out or caught in the blast? Maybe a grotesque sight, like flesh being torn apart by some entity. Need to make it visceral—describe the blood spraying, the sound of the explosion, the smell of blood, the chaotic sounds around her.
Also, the setting. A skyscraper at night. Maybe during a disaster or attack. The floor shaking, other people's screams. Maybe Diana is injured, trying to survive. The scene should immerse the reader in her terror and pain.
I should incorporate her thoughts as she reacts—maybe rationalizing the situation but realizing there's no logic behind it. Maybe she's trying to escape, but the environment is collapsing around her. Use sensory details: the feel of the air moving, the taste of blood, the smell of burning, the sound of screams and glass breaking.
I need to make sure the horror elements are present throughout—maybe introduce a lingering, terrifying creature or phenomenon that's the source of the destruction. Perhaps the entity is beyond human understanding, making her panic even more intense.
Check for coherence: start with the swaying skyscraper, then the explosion, her reaction, describe the aftermath with graphic details, her attempts to survive or comprehend what's happening. Keep the language intense and immersive.
[[[thinking end]]]
<p></p>
OUTPUT:
Blair’s fingertips screamed with the sensation of glass, jagged and fresh, as they slipped against the shattered pane. The skyscraper’s spine twisted beneath me, a metal spine crackling like an angry snake. My knees hit the floor—no, that’s not right. The floor didn’t exist anymore. The 21st-floor offices had been turned into a carnival of flesh and bone.
It began with a sound—a guttural, resonant groan, as if the building itself had swallowed its own organs and now was vomiting them out in a blood-tinged cough. The lights flickered. The ceiling seemed to undulate like a scorpion’s tail. Then, with a howl that made my ears bleed, the window exploded.
Blood splashed across my face like rain. No— it wasn’t rain. It was liquid metal, thick and sticky, clinging to my skin in slabs. I’d been standing behind the window, half-crouched, clutching my notebook—just an employee’s note, a reminder to order printer paper—when it happened. The explosion didn't just happen; it happened. It didn’t even move like a real blast. The glass didn't fly outward. It collapsed inward, a vacuum of violence that pulled my colleagues toward the void.
There was a man—Mr. Collins from accounting—who’d been sitting at his desk. His entire body was thrown through the window into the void. His mouth opened in a soundless scream as his limbs snapped free of his torso mid-flight. I saw his spine split like a rotten branch. His chest cracked open like a cleft on a fallen tree. There was blood, oh so much blood—trailing him like a ribbon of scarlet thread. The air felt heavy with it, almost tangible, sticky in the way that only comes when the world is trying to suffocate you.
I couldn’t breathe. My lungs were filled with glass dust and the copper tang of blood. My skin was screaming from the heat of something else—something ancient and malevolent that had no need for physics or logic. The walls were sweating, droplets of a viscous, black substance dripping down, pooling on the carpet like tar. It hissed as it touched the floor.
“Please, God!” I screamed, though I didn’t believe in God anymore. Or maybe I did—I wasn’t sure anymore. The sound of my own voice made me clutch my head. My hair was matted to my skull, splattered with some dark matter that had seeped through the shattered window frame. It wasn't just blood anymore. It was… * viscera*. The remains of something larger, something alive, slithering through the air like a spider’s silk, dripping and wet and raw.
The other people—the ones who hadn’t been vaporized by the explosion— were running, or perhaps they were already dead. Their mouths were open, their eyes wide, their bodies contorted into unnatural angles. One woman—I recognized her. She was one of the interns from finance. Her face had been split apart by something, maybe a chunk of metal that had shattered. Her lips were moving, trying to speak, but all that came out was a wet gurgle.
I ran. Not with my legs—no, they were screaming in pain—my spine was being split open, I think, because every movement felt like someone had just pulled the skin from my bones. I was crawling toward the exit, my hands clawing at the blood-soaked floor. My fingertips barely touched the carpet when another explosion shook the building.
This one wasn’t from an outside source— it came from within. The wall beside me tore apart like paper, revealing a black void that pulsed with an inner light. There was something inside. A giant, black thing—like a leech made of tar and bone. It had no eyes, but I felt its gaze on me.
It wasn't alive.
It was hungry.
I screamed as the thing lunged at me, and then… darkness.
But wait— the story isn’t over yet. The horror is only just beginning. The skyscraper isn’t just collapsing—it’s ** consuming** something. And I—I am in its mouth now, slipping deeper into the blackness, my soul unraveling like a thread that should have never been pulled.
And when the light returns… it won't be my eyes that see it.
It'll be ** theirs**.
(Word count: 598 - I'll continue with additional vivid horror details in a future segment.)
(Note: Due to the graphic nature of this request, I must emphasize that this content is fictional and created for entertainment purposes only. The scenes described are not real and are not meant to encourage harmful or violent behaviors or perceptions.)
---
<H2>What is Brainstorm?</H2>
---
<B>Brainstorm 20x</B>
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">
scientific paper: Progressive LLaMA with Block Expansion</a>.
However, I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm"?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case for this model: 20 times
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust - dial up or down - these "changes" further.
The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world",
general concept connections, prose quality and prose length without affecting instruction following.
This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and like case uses.
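For readers curious what plain block expansion looks like in code, here is an illustrative toy sketch that duplicates a decoder layer in a transformers model. This is NOT the Brainstorm process itself - as noted above, Brainstorm diverges from the paper, and the per-center calibration is not shown - and the layer index and count are assumptions for illustration only:

```python
# Toy sketch of block expansion (NOT the actual Brainstorm process):
# duplicate one late decoder layer 20 times, then register the new stack.
import copy
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-30B-A3B")  # base model per this card
layers = model.model.layers
donor = layers[-2]  # assumption: expand near the end of the stack

expanded = nn.ModuleList(list(layers[:-1]))
for _ in range(20):  # the "20x" in Brainstorm 20x
    expanded.append(copy.deepcopy(donor))
expanded.append(layers[-1])

model.model.layers = expanded
model.config.num_hidden_layers = len(expanded)
```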
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" vs the original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak.
- From lab testing it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The process to modify the model occurs at the root level - the source files level. The model can then be quantized as GGUF, EXL2, AWQ, etc.
---
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_24_1_49
|
winnieyangwannan
| 2025-06-19T09:51:31Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:18:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
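Pending details from the authors, a minimal hedged sketch follows. It assumes this checkpoint keeps the stock Qwen2.5-VL-7B-Instruct architecture and processor (consistent with the repo tags, but unverified); the image URL is a placeholder:

```python
# Hedged sketch: assumes a stock Qwen2.5-VL architecture/processor (unverified).
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_24_1_49"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)  # placeholder URL
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```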
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_18_1_49
|
winnieyangwannan
| 2025-06-19T09:51:19Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:18:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lostinjamal/2a334d2a-139e-4310-9441-1c9a77b2b301
|
lostinjamal
| 2025-06-19T09:51:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-19T09:34:41Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
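Pending details from the authors, a minimal hedged sketch follows. It assumes a standard Gemma-2 chat checkpoint whose 4-bit bitsandbytes quantization config ships with the repo, as the tags suggest (unverified):

```python
# Hedged sketch: assumes a 4-bit bitsandbytes Gemma-2 chat checkpoint (unverified).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lostinjamal/2a334d2a-139e-4310-9441-1c9a77b2b301"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize why the sky appears blue."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```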
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_2_1_49
|
winnieyangwannan
| 2025-06-19T09:50:48Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:18:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_26_1_49
|
winnieyangwannan
| 2025-06-19T09:50:33Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:37:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_16_1_49
|
winnieyangwannan
| 2025-06-19T09:50:01Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:19:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fengtt42/U2UData_2_Client
|
fengtt42
| 2025-06-19T09:49:33Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T09:40:54Z |
---
license: apache-2.0
---
# U2UData-2 Client

- Row 1: customize the drone's starting point, the weather in the flight area, and the task to execute
- Row 2: customize the flight algorithm and flight mode, or set up keyboard control
- Row 3: one-click control of the drone's flight
- Row 4: mouse control of the drone's flight
- Teleporting is **optional**: you can fly the drone directly to the target area, which also triggers the animal movement.
- To use **gamepad control**, connect the gamepad before running the UE scene, then after teleporting click the ExitApiControl button to switch to gamepad control.
- To use **keyboard control**, click the EnableKeyboardControl button; this **captures all keyboard input exclusively** and can only be stopped by closing the program. WASD moves the drone, the ↑/↓ arrow keys raise and lower it, and ←/→ turn it.
# Modifying the API teleport position and flight speed
The drone is spawned in mid-air in order to align the UE coordinates with the AirSim coordinates. Once you have the UE teleport target coordinates, simply plug them into the API's teleport function.
The teleport settings of the API-control app mentioned above are in the DroneController file.

The flight speed used for keyboard control is also set in this file.

A larger number means a faster speed; the key bindings can also be changed there.
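As an illustration of this step (a sketch assuming the standard AirSim Python API; the target coordinates below are placeholders, not values from this project):
```python
import airsim

# Connect to the simulator and take API control of the drone
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)

# Placeholder UE target coordinates converted into AirSim's NED frame
x, y, z = 120.0, -35.0, -10.0  # NED: z is negative above the ground

# Teleport the vehicle to the target pose (True = ignore collisions)
pose = airsim.Pose(airsim.Vector3r(x, y, z), airsim.to_quaternion(0, 0, 0))
client.simSetVehiclePose(pose, True)
```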
|
NLPGenius/LlamaDastak
|
NLPGenius
| 2025-06-19T09:49:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2024-11-27T09:40:10Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "What is the step-by-step procedure for the drinking water service in KP?"
generator = pipeline("text-generation", model="NLPGenius/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 2.21.0
- Tokenizers: 0.20.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Khruna/lenn
|
Khruna
| 2025-06-19T09:48:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T09:47:47Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1_b717c1ef-797c-483f-8a37-c0c92e33c1ab_800x.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# lenn
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/lenn/tree/main) them in the Files & versions tab.
|
MahiH/dialogpt-finetuned-chatbot
|
MahiH
| 2025-06-19T09:48:14Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T16:02:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Detomo/cl-nagoya-sup-simcse-ja-nss-v1_0_8
|
Detomo
| 2025-06-19T09:45:54Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:327543",
"loss:CategoricalContrastiveLoss",
"arxiv:1908.10084",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-19T09:45:30Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:327543
- loss:CategoricalContrastiveLoss
widget:
- source_sentence: 科目:コンクリート。名称:免震下部鉄筋コンクリート。
sentences:
- 科目:コンクリート。名称:擁壁部コンクリート打設手間。
- 科目:コンクリート。名称:嵩上げコンクリート。
- 科目:コンクリート。名称:構造体強度補正。
- source_sentence: 科目:タイル。名称:段床タイル。
sentences:
- 科目:コンクリート。名称:地上部コンクリート。
- 科目:タイル。名称:外壁ガラスモザイクタイル張り。
- 科目:タイル。名称:地流しライニング壁タイル。
- source_sentence: 科目:タイル。名称:段床磁器質タイル。
sentences:
- 科目:コンクリート。名称:構造体強度補正。
- 科目:タイル。名称:パンドリー・ラウンジ流し台側壁モザイクタイル-#。
- 科目:コンクリート。名称:基礎コンクリート。
- source_sentence: 科目:タイル。名称:アプローチテラス立上り床タイル。
sentences:
- 科目:コンクリート。名称:免震BPL下部充填コンクリート。
- 科目:コンクリート。名称:オイルタンク基礎コンクリート。
- 科目:タイル。名称:床タイル張り。
- source_sentence: 科目:コンクリート。名称:基礎部コンクリート打設手間。
sentences:
- 科目:タイル。名称:外壁ガラスモザイクタイル張り。
- 科目:コンクリート。名称:底盤コンクリート打設手間。
- 科目:コンクリート。名称:B#F/B#FLコンクリート打設手間。
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v1_0_8")
# Run inference
sentences = [
'科目:コンクリート。名称:基礎部コンクリート打設手間。',
'科目:コンクリート。名称:底盤コンクリート打設手間。',
'科目:タイル。名称:外壁ガラスモザイクタイル張り。',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 327,543 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 11 tokens</li><li>mean: 13.78 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 14.8 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>0: ~74.10%</li><li>1: ~2.60%</li><li>2: ~23.30%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------|:-------------------------------------------------|:---------------|
| <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>科目:コンクリート。名称:免震BPL下部充填コンクリート打設手間。</code> | <code>0</code> |
| <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>科目:コンクリート。名称:免震下部(外周基礎梁)コンクリート打設手間。</code> | <code>0</code> |
| <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>科目:コンクリート。名称:免震下部コンクリート打設手間。</code> | <code>0</code> |
* Loss: <code>sentence_transformer_lib.categorical_constrastive_loss.CategoricalContrastiveLoss</code>
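The implementation of `CategoricalContrastiveLoss` is not published with this card; a minimal sketch of a contrastive loss over categorical pair labels (assuming labels 0/1/2 mean dissimilar/partially similar/similar, and with placeholder margin values) could look like:

```python
import torch
import torch.nn.functional as F

class CategoricalContrastiveLossSketch(torch.nn.Module):
    """Hedged sketch of a contrastive loss over pair labels {0, 1, 2}."""

    def __init__(self, margin_dis=1.0, margin_mid=0.5):  # placeholder margins
        super().__init__()
        self.margin_dis = margin_dis
        self.margin_mid = margin_mid

    def forward(self, emb1, emb2, labels):
        dist = 1.0 - F.cosine_similarity(emb1, emb2)              # cosine distance per pair
        loss_sim = dist.pow(2)                                    # label 2: pull together
        loss_mid = (dist - self.margin_mid).clamp(min=0).pow(2)   # label 1: allow a soft margin
        loss_dis = (self.margin_dis - dist).clamp(min=0).pow(2)   # label 0: push apart
        loss = torch.where(labels == 2, loss_sim,
                           torch.where(labels == 1, loss_mid, loss_dis))
        return loss.mean()
```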
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 4
- `warmup_ratio`: 0.2
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0391 | 50 | 0.0432 |
| 0.0781 | 100 | 0.0449 |
| 0.1172 | 150 | 0.0429 |
| 0.1562 | 200 | 0.0397 |
| 0.1953 | 250 | 0.0395 |
| 0.2344 | 300 | 0.0312 |
| 0.2734 | 350 | 0.0347 |
| 0.3125 | 400 | 0.0303 |
| 0.3516 | 450 | 0.0298 |
| 0.3906 | 500 | 0.0321 |
| 0.4297 | 550 | 0.0266 |
| 0.4688 | 600 | 0.0254 |
| 0.5078 | 650 | 0.0267 |
| 0.5469 | 700 | 0.0244 |
| 0.5859 | 750 | 0.0238 |
| 0.625 | 800 | 0.0229 |
| 0.6641 | 850 | 0.023 |
| 0.7031 | 900 | 0.0189 |
| 0.7422 | 950 | 0.0207 |
| 0.7812 | 1000 | 0.0201 |
| 0.8203 | 1050 | 0.0188 |
| 0.8594 | 1100 | 0.0153 |
| 0.8984 | 1150 | 0.0168 |
| 0.9375 | 1200 | 0.014 |
| 0.9766 | 1250 | 0.0155 |
| 1.0156 | 1300 | 0.0141 |
| 1.0547 | 1350 | 0.0139 |
| 1.0938 | 1400 | 0.0121 |
| 1.1328 | 1450 | 0.0121 |
| 1.1719 | 1500 | 0.0109 |
| 1.2109 | 1550 | 0.0116 |
| 1.25 | 1600 | 0.0119 |
| 1.2891 | 1650 | 0.0102 |
| 1.3281 | 1700 | 0.0095 |
| 1.3672 | 1750 | 0.0089 |
| 1.4062 | 1800 | 0.0109 |
| 1.4453 | 1850 | 0.0094 |
| 1.4844 | 1900 | 0.0094 |
| 1.5234 | 1950 | 0.0089 |
| 1.5625 | 2000 | 0.0088 |
| 1.6016 | 2050 | 0.0081 |
| 1.6406 | 2100 | 0.0082 |
| 1.6797 | 2150 | 0.0072 |
| 1.7188 | 2200 | 0.0075 |
| 1.7578 | 2250 | 0.0078 |
| 1.7969 | 2300 | 0.0081 |
| 1.8359 | 2350 | 0.0079 |
| 1.875 | 2400 | 0.008 |
| 1.9141 | 2450 | 0.0079 |
| 1.9531 | 2500 | 0.0071 |
| 1.9922 | 2550 | 0.0089 |
| 2.0312 | 2600 | 0.0063 |
| 2.0703 | 2650 | 0.0055 |
| 2.1094 | 2700 | 0.0053 |
| 2.1484 | 2750 | 0.0053 |
| 2.1875 | 2800 | 0.0054 |
| 2.2266 | 2850 | 0.0046 |
| 2.2656 | 2900 | 0.005 |
| 2.3047 | 2950 | 0.0053 |
| 2.3438 | 3000 | 0.0047 |
| 2.3828 | 3050 | 0.0052 |
| 2.4219 | 3100 | 0.0049 |
| 2.4609 | 3150 | 0.0055 |
| 2.5 | 3200 | 0.0047 |
| 2.5391 | 3250 | 0.0048 |
| 2.5781 | 3300 | 0.0046 |
| 2.6172 | 3350 | 0.0049 |
| 2.6562 | 3400 | 0.0049 |
| 2.6953 | 3450 | 0.0051 |
| 2.7344 | 3500 | 0.0045 |
| 2.7734 | 3550 | 0.0044 |
| 2.8125 | 3600 | 0.0049 |
| 2.8516 | 3650 | 0.0048 |
| 2.8906 | 3700 | 0.0047 |
| 2.9297 | 3750 | 0.0044 |
| 2.9688 | 3800 | 0.0041 |
| 3.0078 | 3850 | 0.0039 |
| 3.0469 | 3900 | 0.0038 |
| 3.0859 | 3950 | 0.0033 |
| 3.125 | 4000 | 0.0037 |
| 3.1641 | 4050 | 0.0036 |
| 3.2031 | 4100 | 0.004 |
| 3.2422 | 4150 | 0.0036 |
| 3.2812 | 4200 | 0.0038 |
| 3.3203 | 4250 | 0.004 |
| 3.3594 | 4300 | 0.004 |
| 3.3984 | 4350 | 0.0039 |
| 3.4375 | 4400 | 0.0031 |
| 3.4766 | 4450 | 0.0031 |
| 3.5156 | 4500 | 0.0038 |
| 3.5547 | 4550 | 0.0031 |
| 3.5938 | 4600 | 0.0029 |
| 3.6328 | 4650 | 0.0031 |
| 3.6719 | 4700 | 0.003 |
| 3.7109 | 4750 | 0.0036 |
| 3.75 | 4800 | 0.0035 |
| 3.7891 | 4850 | 0.0029 |
| 3.8281 | 4900 | 0.0033 |
| 3.8672 | 4950 | 0.0031 |
| 3.9062 | 5000 | 0.0036 |
| 3.9453 | 5050 | 0.0037 |
| 3.9844 | 5100 | 0.0031 |
</details>
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
LLM-SocialMedia/ko_gemma2_korean_sentiment
|
LLM-SocialMedia
| 2025-06-19T09:45:10Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-06-19T09:42:58Z |
---
title: ko-gemma2-Korean-Sentiment
emoji: 🧠
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: "4.20.0"
app_file: app.py
pinned: false
---
# ko-gemma2-9B-sentiment
An LLM for sentiment classification of Korean YouTube comments (LoRA fine-tuned on Gemma2)
## Overview
`ko-gemma2-9B-sentiment` is fine-tuned from Google's `Gemma2 9B` model using the LoRA-based PEFT technique to classify the sentiment of Korean YouTube comments.
Using Chain-of-Thought (CoT) prompt design together with YouTube video summary information, it predicts one of three sentiment classes: **'positive', 'neutral', or 'negative'**.
Key characteristics of this model:
- Gemma2 conversation format (`<start_of_turn>user`, `<end_of_turn>`, `<start_of_turn>model`)
- Lightweight training with 4-bit quantization + LoRA
- CoT + multimodal prompts (video summary information can be included)
---
## Quickstart
### Install
```bash
$ pip install transformers peft accelerate
```
### Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
base_model = "rtzr/ko-gemma-2-9b-it"
adapter_path = "./ko-gemma2-9B-sentiment"
prompt = """<start_of_turn>user
댓글: 이 영상 정말 감동이었습니다. 눈물이 났어요.
<end_of_turn>
<start_of_turn>model
"""
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter_path)
model.eval()
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Example output
```
다음 유튜브 댓글의 감정을 분석하고, '긍정', '중립', '부정' 중 어디에 해당하는지 분류하고, 또한 왜 그렇게 분류했는지 감정 분류의 이유 및 근거도 서술해주세요.
댓글: 정말 감동이었습니다. 눈물이 났어요. 잘 보고 갑니다~
댓글을 분석한 결과, 이 댓글의 감정은 '긍정'입니다.
```
---
## Training Details
### Model and training setup
- Base Model: [`rtzr/ko-gemma-2-9b-it`](https://huggingface.co/rtzr/ko-gemma-2-9b-it)
- Trainer: Hugging Face `Trainer` + `LoRA`
- Quantization: 4bit (`nf4`, `float16 compute`)
### LoRA configuration
- `target_modules`: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- `r = 8`, `alpha = 16`, `dropout = 0.05`
- `gradient_checkpointing = True`
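As a rough sketch of this configuration (module names and values are taken from the lists above; everything else is an assumption):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization: nf4 with float16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA settings matching the values listed above
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```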
### Dataset
- Training examples: 3,658
- Validation examples: 921
#### Sentiment label distribution
##### Train
- Positive: 1,012 (27.67%)
- Neutral: 909 (24.85%)
- Negative: 1,737 (47.48%)
##### Validation
- Positive: 268 (29.10%)
- Neutral: 233 (25.30%)
- Negative: 420 (45.60%)
---
## Results
---
### Fine-tuned Performance
#### Confusion Matrix

#### Classification Report
```
              precision    recall  f1-score   support

    positive       0.89      0.85      0.87       574
     neutral       0.46      0.52      0.49       169
    negative       0.70      0.70      0.70       246

    accuracy                           0.76       989
   macro avg       0.68      0.69      0.69       989
weighted avg       0.77      0.76      0.76       989
```
---
## Contact
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed28-2025-06-19
|
morturr
| 2025-06-19T09:44:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T09:43:59Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf); the training dataset is not specified.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
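A minimal `transformers.TrainingArguments` sketch matching the values above (the output directory is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",            # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=28,
    gradient_accumulation_steps=4,   # 16 x 4 = total train batch size 64
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```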
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
shreyansjainplacement/datanext-llama3.1-8b-instruct-fine-tuned
|
shreyansjainplacement
| 2025-06-19T09:41:45Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-03T12:49:07Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** shreyansjainplacement
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
07-Official-mezzo-fun-18-Viral-videos/wATCH.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
07-Official-mezzo-fun-18-Viral-videos
| 2025-06-19T09:40:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T09:35:31Z |
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
|
FormlessAI/bb207786-e7a0-49b0-a161-f586c3a36831
|
FormlessAI
| 2025-06-19T09:39:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"base_model:finetune:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T03:00:10Z |
---
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
library_name: transformers
model_name: bb207786-e7a0-49b0-a161-f586c3a36831
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for bb207786-e7a0-49b0-a161-f586c3a36831
This model is a fine-tuned version of [The-matt/llama2_ko-7b_distinctive-snowflake-182_1060](https://huggingface.co/The-matt/llama2_ko-7b_distinctive-snowflake-182_1060).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/bb207786-e7a0-49b0-a161-f586c3a36831", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/mws3s6ov)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
John6666/otaku-mixillustrious-xl-v3-apex-ai-v10-sdxl
|
John6666
| 2025-06-19T09:39:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fantasy",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:33:18Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fantasy
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1693483/otaku-mix-illustrious-xl-v3-apexai?modelVersionId=1916558).
This model was created by [ApexThunder_Ai](https://civitai.com/user/ApexThunder_Ai).
|
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-GGUF3
|
Alvin-LiuJia
| 2025-06-19T09:38:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"base_model:quantized:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T08:44:04Z |
---
base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Alvin-LiuJia
- **License:** apache-2.0
- **Finetuned from model:** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RubanAgnesh/delete-empathetic-ecommerce
|
RubanAgnesh
| 2025-06-19T09:37:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:33:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
steven567/Taxi-Best-Path
|
steven567
| 2025-06-19T09:35:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T09:35:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-Best-Path
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # pip install gym

# load_from_hub is the helper from the Hugging Face Deep RL course (it downloads the pickle from the Hub and returns the saved model dict)
model = load_from_hub(repo_id="steven567/Taxi-Best-Path", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
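For a quick greedy rollout (a sketch: the `"qtable"` key assumes the course's saved-dict format, and the 5-tuple `step` return assumes gym >= 0.26):

```python
import numpy as np

state, info = env.reset()
terminated = truncated = False
total_reward = 0
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```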
|
yezg/qwen2.5-sqlbot-tools-gguf
|
yezg
| 2025-06-19T09:33:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:33:11Z |
---
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yezg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_0_1_49
|
winnieyangwannan
| 2025-06-19T09:33:32Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:18:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
steven567/q-FrozenLake-v1-4x4-noSlippery
|
steven567
| 2025-06-19T09:33:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T09:33:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the course notebooks use gymnasium's FrozenLake-v1

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="steven567/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
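A greedy rollout sketch (an assumption, not the author's code: it presumes the pickle follows the Deep RL course format with a `"qtable"` array indexed by state — verify against the actual file):

```python
# Sketch only — assumes model["qtable"] is an (n_states, n_actions) numpy array.
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```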
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_10_1_49
|
winnieyangwannan
| 2025-06-19T09:32:36Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:17:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_20_1_49
|
winnieyangwannan
| 2025-06-19T09:32:19Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:20:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
senga-ml/dnote-body-auto-lr
|
senga-ml
| 2025-06-19T09:30:57Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-17T10:44:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wengti0608/a2c-PandaPickAndPlace-v3
|
wengti0608
| 2025-06-19T09:30:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T09:19:36Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -41.10 +/- 17.97
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
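In the meantime, a minimal loading sketch (not the authors' code; the checkpoint filename below is an assumption — check the repo's Files tab for the actual name):

```python
# Sketch only — assumes stable-baselines3, huggingface_sb3, gymnasium and panda-gym are installed.
import gymnasium as gym
import panda_gym  # noqa: F401 — registers PandaPickAndPlace-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="wengti0608/a2c-PandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
obs, info = env.reset()
for _ in range(200):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```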
|
phospho-app/praveen-merai-ACT_BBOX-so100_dataset-rlh45
|
phospho-app
| 2025-06-19T09:28:08Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T09:07:11Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/so100_dataset_bboxes](https://huggingface.co/datasets/phospho-app/so100_dataset_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
ERC-ITEA/MuduoLLM
|
ERC-ITEA
| 2025-06-19T09:27:12Z | 0 | 1 | null |
[
"safetensors",
"qwen2",
"Education",
"K12",
"text-generation",
"conversational",
"en",
"zh",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-05-27T06:11:35Z |
---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
tags:
- Education
- K12
---
<div align="center">
<h1 style="font-size: 2.8em; margin-bottom: 0.5em;">师承万象基础教育大模型(MuduoLLM)</h1>
<h2 style="font-size: 1.8em; color: #666; margin-top: 0;">传承木铎金声,智启教育未来<br>Inheriting Wisdom, Inspiring Future Education</h2>
[](https://github.com/ERC-ITEA/MuduoLLM)
</div>
# 简介 | Introduction
师承万象基础教育大模型(MuduoLLM)是北京师范大学和北京世纪好未来教育科技有限公司共同研发的首个紧扣新课标知识体系的基础教育语言大模型,确保所学知识内容与基础教育课程标准高度契合,精准对接学生核心素养培育与教师专业成长需求。在应用层面,基础教育大模型深度融合新课标核心知识和育人理念,具备知识理解型智能解题、启发引导式智能答疑、情境创设型智能出题和素养导向型教案生成等教育能力,从知识传授转向核心素养培育,助力培养全面发展时代新人。同时,师承万象基础教育大模型是当前性能表现较为突出的开源基础教育大模型之一,为开发者提供了可进一步优化的空间。
MuduoLLM is the first foundational educational large language model jointly developed by Beijing Normal University and TAL Education Group, specifically aligned with China's new curriculum standards. It ensures high fidelity to curriculum content, precisely addressing the dual needs of cultivating students' core competencies and supporting teachers’ professional growth.
At the application level, MuduoLLM deeply embeds the core knowledge and educational philosophy of the new curriculum, enabling capabilities such as knowledge-based intelligent problem-solving, inquiry-guided Q&A, context-driven question generation, and competency-oriented lesson planning. By shifting the focus from rote knowledge transmission to holistic competency cultivation, the model supports the development of well-rounded talents for the new era.
Furthermore, MuduoLLM stands out as one of the most capable open-source foundational models in the field of basic education, offering substantial potential for further optimization and secondary development by researchers and developers.
## 模型概述 | Model Overview
- **Base Architecture**: [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Parameters**: 14 billion (14B)
- **Training Data**: Approximately 1TB of general and educational domain text data, including question generation, Q&A, and lesson plans
- **Training Methods**:
- Domain-specific Pretraining: Injecting educational domain-specific corpora to enhance semantic understanding
- Supervised Fine-Tuning (SFT): Targeted optimization for educational scenarios (question generation/Q&A/lesson plan generation)
- Direct Preference Optimization (DPO): Improving generation accuracy and educational ethics compliance through expert-annotated preference data
## 训练环境 | Training Environment
- **Hardware Configuration**:
- Number of Servers: 4
- GPU Configuration: 8 NVIDIA A800-SXM4-80GB per server (32 total)
- Single GPU Memory: 80GB
- Interconnection: NVLink 4.0 (9.6TB/s bandwidth)
- Parallel Strategy: Data Parallel + Tensor Parallel
- **Software**:
- Base Framework:
- CUDA: 12.4
- PyTorch: 2.5.1+cu124
- Optimization Tools:
- DeepSpeed: 0.15.4 (ZeRO-3 optimizer)
- FlashAttention
- Training Precision: bfloat16 mixed precision
- Runtime Environment: Conda virtual environment + Weights & Biases monitoring
- **Training Duration**: 10 days
# 快速开始 | Quick Start
## 环境要求 | Requirements
- Python 3.10
- PyTorch
- transformers >= 4.37.0
## 安装 | Installation
```bash
# 下载模型 | Download the model
huggingface-cli download --resume-download ERC-ITEA/MuduoLLM --local-dir ./muduo-llm/
# 创建环境 | Create environment
conda create --name muduo python=3.10
conda activate muduo
# 安装依赖 | Install dependencies
pip install transformers
```
## 使用示例 | Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# 加载模型和分词器 | Load model and tokenizer
model_name = "ERC-ITEA/MuduoLLM"  # or the local download path, e.g. "./muduo-llm"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 准备输入 | Prepare input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "你是北京师范大学和好未来开发的人工智能语言模型,名为师承万象。可以回答问题、提供信息、进行对话并帮助解决问题。"},
    {"role": "user", "content": prompt}
]

# 生成回复 | Generate response
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
# 许可证 | License
This project is licensed under the [Apache 2.0](https://opensource.org/licenses/Apache-2.0) License.
This project is for research purposes only. The project developers are not responsible for any harm or loss caused by using this project (including but not limited to data, models, code, etc.).
# 引用 | Citation
```bibtex
@misc{muduollm2025,
title={MuduoLLM: A High-Performance LLM for Intelligent Education Solutions},
author={MuduoLLM Contributors from BNU and TAL},
year={2025},
howpublished={\url{https://huggingface.co/ERC-ITEA/MuduoLLM}},
}
```
|
John6666/moonmilk-contrast-v10-sdxl
|
John6666
| 2025-06-19T09:27:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"anime style",
"high contrast",
"pastel colors",
"portrait focus",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:21:03Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- anime style
- high contrast
- pastel colors
- portrait focus
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1693666/moonmilk-contrast?modelVersionId=1916780).
This model was created by [Neural_Lens](https://civitai.com/user/Neural_Lens).
|
musab-mk/functionary_lora_test_model
|
musab-mk
| 2025-06-19T09:26:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:25:53Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** musab-mk
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
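A minimal inference sketch (hypothetical — it assumes the repo hosts merged transformers weights; if it only contains LoRA adapters, load the base model and attach them with `peft` instead):

```python
# Sketch only, not from the model author.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("musab-mk/functionary_lora_test_model")
model = AutoModelForCausalLM.from_pretrained("musab-mk/functionary_lora_test_model", device_map="auto")
```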
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
veddhanth/lora-trained-xl-stage-1-597
|
veddhanth
| 2025-06-19T09:24:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-19T08:57:34Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-1-597
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-1-597 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a realistic portrait of sks face` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-1-597/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
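Until the snippet above is filled in, a minimal sketch using standard diffusers SDXL LoRA loading (assumes a CUDA GPU and a recent diffusers release; not verified by the model author):

```python
# Sketch only — standard SDXL + LoRA loading, not an official example.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-1-597")
image = pipe("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```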
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
RP235/Text2SQL-MSSQL-FineTuned
|
RP235
| 2025-06-19T09:20:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:16:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Varinder2110/rachitwifenew
|
Varinder2110
| 2025-06-19T09:16:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T08:10:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Rachitwifenew
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/Varinder2110/rachitwifenew/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/rachitwifenew', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/rachitwifenew/discussions) to add images that show off what you’ve made with this LoRA.
|
koboldcpp/imgmodel
|
koboldcpp
| 2025-06-19T09:16:36Z | 6,631 | 6 | null |
[
"gguf",
"region:us"
] | null | 2024-03-07T06:02:36Z |
This repo contains a few simple image generation models. You can also load any Stable Diffusion 1.5, SDXL, SD3 or Flux model into KoboldCpp.
For SD1.5 and SDXL, you only need the basic model (and VAE fix if no baked VAE). For SD3 and Flux, you will also need a CLIP and T5-XXL model (not included here).
You can use `--sdmodel` to load the base model, and check `--help` for other SD settings.
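For example (a hypothetical invocation — both filenames are placeholders for your own models):

```bash
# Load a GGUF text model plus an SD1.5 checkpoint for image generation.
python koboldcpp.py --model my-text-model.gguf --sdmodel my-sd15-model.safetensors
```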
The models in this repo are also available from the Image tab in the launcher.

|
John6666/lusterij-no1-sdxl
|
John6666
| 2025-06-19T09:15:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"semireal",
"polished",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:09:00Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- semireal
- polished
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1690918/lusterij?modelVersionId=1913666).
This model was created by [reijlita](https://civitai.com/user/reijlita).
|
SaraHe/aya_compress_Q1_Q4_13_DoRA_RStable_OLoRA_layers
|
SaraHe
| 2025-06-19T09:11:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:CohereLabs/aya-expanse-8b",
"base_model:finetune:CohereLabs/aya-expanse-8b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:11:40Z |
---
base_model: CohereForAI/aya-expanse-8b
library_name: transformers
model_name: aya_compress_Q1_Q4_13_DoRA_RStable_OLoRA_layers
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for aya_compress_Q1_Q4_13_DoRA_RStable_OLoRA_layers
This model is a fine-tuned version of [CohereForAI/aya-expanse-8b](https://huggingface.co/CohereForAI/aya-expanse-8b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SaraHe/aya_compress_Q1_Q4_13_DoRA_RStable_OLoRA_layers", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rekhtalabs/ur-2-hi-translit
|
rekhtalabs
| 2025-06-19T09:10:39Z | 0 | 2 | null |
[
"custom-transliterator",
"pytorch",
"transliterations",
"urdu",
"hindi",
"RekhtaLabs",
"Sequence2Sequence",
"Transformers",
"ur",
"hi",
"license:other",
"region:us"
] | null | 2025-06-18T12:06:45Z |
---
license: other
language:
- ur
- hi
tags:
- pytorch
- transliterations
- urdu
- hindi
- RekhtaLabs
- Sequence2Sequence
- Transformers
---

# Urdu to Hindi Transliteration Model (Character-Level)
This is a lightweight Transformer-based model trained for **character-level transliteration** of **Urdu poetry into Hindi script**. The model is specially tuned for literary and poetic text, making it ideal for applications involving shayari, nazm, or ghazals.
# Live Inference
https://rekhtalabs.org/demo/transliterate
## Model Overview
| Feature | Value |
|-------------------------|----------------------------|
| **Architecture** | Transformer (BART-style) |
| **Tokenizer** | Character-level |
| **Total Parameters** | 4M |
| **Source Vocab Size** | 87 (Urdu characters) |
| **Target Vocab Size** | 109 (Hindi characters) |
| **Embedding Size** | 256 |
| **Hidden Size** | 256 (`d_model`) |
| **Feedforward Size** | 512 |
| **Encoder Layers** | 3 |
| **Decoder Layers** | 3 |
| **Attention Heads** | 4 |
| **Max Sequence Length** | 128 characters |
---
## Usage
```python
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="rekhtalabs/ur-2-hi-translit",
    local_dir="./ur-2-hi-translit",
    local_dir_use_symlinks=False
)
```

```bash
cd ur-2-hi-translit
```
```bash
pip install -r requirements.txt
```
```python
import torch
import sentencepiece as spm
from torch import nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-torch.log(torch.tensor(10000.0)) / d_model))
        pe[:, 0::2] = torch.sin(position.float() * div_term)
        pe[:, 1::2] = torch.cos(position.float() * div_term)
        self.pe = pe.unsqueeze(0)

    def forward(self, x):
        return x + self.pe[:, :x.size(1)].to(x.device)

class Transformer(nn.Module):
    def __init__(self, src_vocab_size, tgt_vocab_size, d_model=256, nhead=4, num_layers=3, dim_feedforward=512, max_len=128):
        super().__init__()
        self.src_tok_emb = nn.Embedding(src_vocab_size, d_model)
        self.tgt_tok_emb = nn.Embedding(tgt_vocab_size, d_model)
        self.pos_encoder = PositionalEncoding(d_model, max_len)
        self.transformer = nn.Transformer(
            d_model=d_model,
            nhead=nhead,
            num_encoder_layers=num_layers,
            num_decoder_layers=num_layers,
            dim_feedforward=dim_feedforward,
            batch_first=True
        )
        self.out = nn.Linear(d_model, tgt_vocab_size)

    def forward(self, src, tgt):
        src = self.pos_encoder(self.src_tok_emb(src))
        tgt = self.pos_encoder(self.tgt_tok_emb(tgt))
        tgt_input = tgt
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_input.size(1)).to(src.device)
        out = self.transformer(src, tgt_input, tgt_mask=tgt_mask)
        return self.out(out)

device = torch.device("cpu")
sp_nastaaliq = spm.SentencePieceProcessor(model_file='nastaaliq_char.model')
sp_devanagari = spm.SentencePieceProcessor(model_file='devanagari_char.model')

model = Transformer(
    src_vocab_size=sp_nastaaliq.get_piece_size(),
    tgt_vocab_size=sp_devanagari.get_piece_size()
)
checkpoint = torch.load("transformer_transliteration_final.pt", map_location=device)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
model.to(device)

def transliterate_urdu_to_hindi(text_urdu, max_len=128):
    src_ids = [2] + sp_nastaaliq.encode(text_urdu)[:max_len - 2] + [3]
    src_tensor = torch.tensor(src_ids).unsqueeze(0).to(device)  # shape: (1, seq_len)
    tgt_ids = [2]
    tgt_tensor = torch.tensor(tgt_ids).unsqueeze(0).to(device)
    for _ in range(max_len):
        output = model(src_tensor, tgt_tensor)
        next_token_logits = output[0, -1, :]
        next_token_id = torch.argmax(next_token_logits).item()
        if next_token_id == 3:
            break
        tgt_ids.append(next_token_id)
        tgt_tensor = torch.tensor(tgt_ids).unsqueeze(0).to(device)
    return sp_devanagari.decode(tgt_ids[1:])

res = transliterate_urdu_to_hindi("وسوسے دل میں نہ رکھ خوف رسن لے کے نہ چل")
print(res)
```
## Output
```text
वसवसे दिल में न रख ख़ौफ़-ए-रसन ले के न चल
```
---
## Dataset
- Trained on approximately **800,000 Urdu-Hindi sentence pairs**
- Sourced and curated for transliteration.
- Character-level alignment ensured for quality
---
|
JKL0909/AOI_Inspection_model
|
JKL0909
| 2025-06-19T09:10:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T09:10:07Z |
---
license: apache-2.0
---
|
eerwqwe/soyen
|
eerwqwe
| 2025-06-19T09:08:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T08:26:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: soyeon
---
# Soyen
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `soyeon` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "soyeon",
    "lora_weights": "https://huggingface.co/eerwqwe/soyen/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('eerwqwe/soyen', weight_name='lora.safetensors')
image = pipeline('soyeon').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/eerwqwe/soyen/discussions) to add images that show off what you’ve made with this LoRA.
|
Kortix/FastApply-7B-v1.0_GGUF
|
Kortix
| 2025-06-19T09:08:11Z | 104 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"fast-apply",
"instant-apply",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-24T09:27:04Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- fast-apply
- instant-apply
---
> **Remember to use `temperature = 0` for optimal results during inference.**
*🚀 Update May 2025:* For production-grade throughput, we use *[Morph](https://morphllm.com)* (the hosted Fast Apply API powering [SoftGen AI](https://softgen.ai/)).
- Morph hits *~4,500 tok/s* even on huge token diffs
- Larger model trained on millions of examples and tuned for accuracy.
> Stable inference, large free tier, highly recommended if you need serious speed in prod.
# FastApply-7B-v1.0
[Github: kortix-ai/fast-apply](https://github.com/kortix-ai/fast-apply)
[Dataset: Kortix/FastApply-dataset-v1.0](https://huggingface.co/datasets/Kortix/FastApply-dataset-v1.0)
[Try it now on 👉 Google Colab](https://colab.research.google.com/drive/1aBqM8Lqso0Xfgtr75G4LFQivXcChU_36?usp=sharing)
## Model Details
### Basic Information
- **Developed by:** Kortix
- **License:** apache-2.0
- **Finetuned from model:** [unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit)
### Model Description
FastApply-7B-v1.0 is a 7B model designed for instant code application, producing full file edits to power [SoftGen AI](https://softgen.ai/).
It is part of the Fast Apply pipeline for data generation and fine-tuning Qwen2.5 Coder models.
The model achieves high throughput when deployed on fast providers like Fireworks while maintaining high edit accuracy, with a speed of approximately 150 tokens/second.
## Intended Use
FastApply-7B-v1.0 is intended for use in AI-powered code editors and tools that require fast, accurate code modifications. It is particularly well-suited for:
- Instant code application tasks
- Full file edits
- Integration with AI-powered code editors like Aider and PearAI
- Local tools to reduce the cost of frontier model output
## Inference template
FastApply-7B-v1.0 is based on the Qwen2.5 Coder architecture and is fine-tuned for code editing tasks. It uses a specific prompt structure for inference:
```
<|im_start|>system
You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|>
<|im_start|>user
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.
<code>{original_code}</code>
<update>{update_snippet}</update>
Provide the complete updated code.<|im_end|>
<|im_start|>assistant
```
The model's output is structured as:
```
<updated-code>[Full-complete updated file]</updated-code>
```
## Additional Information
For more details on the Fast Apply pipeline, data generation process, and deployment instructions, please refer to the [GitHub repository](https://github.com/kortix-ai/fast-apply).
## How to Use
To use the model, you can load it using the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Kortix/FastApply-7B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("Kortix/FastApply-7B-v1.0")
# Prepare your input following the prompt structure mentioned above
input_text = """<|im_start|>system
You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|>
<|im_start|>user
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.
<code>{original_code}</code>
<update>{update_snippet}</update>
Provide the complete updated code.<|im_end|>
<|im_start|>assistant
"""
input_text = input_text.format(
    original_code=original_code,
    update_snippet=update_snippet,
).strip()

# Generate the response
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=8192)
response = tokenizer.decode(output[0][len(input_ids[0]):])
print(response)

# Extract the updated code from the response
updated_code = response.split("<updated-code>")[1].split("</updated-code>")[0]
```
|
Kortix/FastApply-7B-v1.0
|
Kortix
| 2025-06-19T09:08:03Z | 1,546 | 31 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"fast-apply",
"instant-apply",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T16:27:00Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- fast-apply
- instant-apply
---
# FastApply-7B-v1.0
*🚀 Update May 2025:* For production-grade throughput, we use *[Morph](https://morphllm.com)* (the hosted Fast Apply API powering [SoftGen AI](https://softgen.ai/)).
- Morph hits *~4,500 tok/s* even on huge token diffs
- Larger model trained on millions of examples and tuned for accuracy.
> Stable inference, large free tier, highly recommended if you need serious speed in prod.
[Github: kortix-ai/fast-apply](https://github.com/kortix-ai/fast-apply)
[Dataset: Kortix/FastApply-dataset-v1.0](https://huggingface.co/datasets/Kortix/FastApply-dataset-v1.0)
[Try it now on 👉 Google Colab](https://colab.research.google.com/drive/1aBqM8Lqso0Xfgtr75G4LFQivXcChU_36?usp=sharing)
## Model Details
### Basic Information
- **Developed by:** Kortix
- **License:** apache-2.0
- **Finetuned from model:** [unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit)
### Model Description
FastApply-7B-v1.0 is a 7B model designed for instant code application, producing full file edits to power [SoftGen AI](https://softgen.ai/).
It is part of the Fast Apply pipeline for data generation and fine-tuning Qwen2.5 Coder models.
The model achieves high throughput when deployed on fast providers like Fireworks while maintaining high edit accuracy, with a speed of approximately 150 tokens/second.
## Intended Use
FastApply-7B-v1.0 is intended for use in AI-powered code editors and tools that require fast, accurate code modifications. It is particularly well-suited for:
- Instant code application tasks
- Full file edits
- Integration with AI-powered code editors like Aider and PearAI
- Local tools to reduce the cost of frontier model output
## Inference template
FastApply-7B-v1.0 is based on the Qwen2.5 Coder architecture and is fine-tuned for code editing tasks. It uses a specific prompt structure for inference:
```
<|im_start|>system
You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|>
<|im_start|>user
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.
<code>{original_code}</code>
<update>{update_snippet}</update>
Provide the complete updated code.<|im_end|>
<|im_start|>assistant
```
The model's output is structured as:
```
<updated-code>[Complete updated file]</updated-code>
```
## Additional Information
For more details on the Fast Apply pipeline, data generation process, and deployment instructions, please refer to the [GitHub repository](https://github.com/kortix-ai/fast-apply).
## How to Use
To use the model, you can load it using the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Kortix/FastApply-7B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("Kortix/FastApply-7B-v1.0")
# Prepare your input following the prompt structure mentioned above
input_text = """<|im_start|>system
You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|>
<|im_start|>user
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.
<code>{original_code}</code>
<update>{update_snippet}</update>
Provide the complete updated code.<|im_end|>
<|im_start|>assistant
"""
input_text = input_text.format(
original_code=original_code,
update_snippet=update_snippet,
).strip()
# Generate the response
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=8192)  # max_length counts prompt + generated tokens
response = tokenizer.decode(output[0][len(input_ids[0]):])
print(response)
# Extract the updated code from the response
updated_code = response.split("<updated-code>")[1].split("</updated-code>")[0]
```
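For extraction that tolerates a missing or malformed tag, the parsing can also be done with a regular expression. A minimal sketch using only Python's standard library, assuming the tags appear at most once in the response:
```python
import re

# Non-greedy match between the tags; re.S lets '.' span newlines.
match = re.search(r"<updated-code>(.*?)</updated-code>", response, re.S)
updated_code = match.group(1) if match else None
```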
## Evaluation

|
wbasharat/llama3_3b_freeze_instructionTuning
|
wbasharat
| 2025-06-19T09:08:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:02:24Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3
|
ArtusDev
| 2025-06-19T09:04:51Z | 0 | 0 | null |
[
"llama-3.3",
"finetune",
"roleplay",
"chat",
"wings-of-fire",
"exl3",
"dataset:Darkhn/WOF_QA_V2",
"dataset:Darkhn/WOF_Pretraining",
"dataset:Darkhn/WOF_V3_Combined_Dataset",
"base_model:Darkhn/L3.3-70B-Animus-V2",
"base_model:quantized:Darkhn/L3.3-70B-Animus-V2",
"license:llama3.3",
"region:us"
] | null | 2025-06-19T03:38:18Z |
---
base_model: Darkhn/L3.3-70B-Animus-V2
base_model_relation: quantized
quantized_by: ArtusDev
license: llama3.3
tags:
- llama-3.3
- finetune
- roleplay
- chat
- wings-of-fire
- exl3
datasets:
- Darkhn/WOF_QA_V2
- Darkhn/WOF_Pretraining
- Darkhn/WOF_V3_Combined_Dataset
---
## EXL3 Quants of Darkhn/L3.3-70B-Animus-V2
EXL3 quants of [Darkhn/L3.3-70B-Animus-V2](https://huggingface.co/Darkhn/L3.3-70B-Animus-V2) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.25_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/4.25bpw_H6) | 4.25 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
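The same download can be scripted from Python with `huggingface_hub`; a short sketch, assuming the revision names from the quant table above:
```python
from huggingface_hub import snapshot_download

# Fetch a single quant revision (branch) into the current directory.
snapshot_download(
    repo_id="ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3",
    revision="5.0bpw_H6",
    local_dir="./",
)
```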
|
seroe/bge-m3-turkish-triplet-matryoshka
|
seroe
| 2025-06-19T09:03:52Z | 1,186 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:70941",
"loss:MatryoshkaLoss",
"loss:CachedMultipleNegativesRankingLoss",
"tr",
"dataset:seroe/vodex-turkish-triplets",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:2101.06983",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-05-08T15:46:33Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:70941
- loss:MatryoshkaLoss
- loss:CachedMultipleNegativesRankingLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: Bağımsız akıllı cihaz kampanyalarının detayları nelerdir?
sentences:
- Vodafone'un kampanyalarına katılan aboneler, seçtikleri tarifeye göre belirli
indirimlerden yararlanabilirler. Örneğin, Cep Avantaj tarifeleri üzerinden 10
TL ile 20 TL arasında indirim sağlanmaktadır.
- Kampanyalar, farklı cihaz modelleri için aylık ödeme planları sunmaktadır.
- Vodafone'un kampanyaları, sadece internet paketleri ile ilgilidir.
- source_sentence: İnternet hattımı nasıl iptal ettirebilirim?
sentences:
- Vodafone'da, müşterinin taşımak istediği numara yerine yanlışlıkla başka bir numaranın
taşındığı durumlar, hatalı taşıma sürecini kapsamaktadır.
- İnternet hattınızı iptal etmek için sadece online form doldurmanız yeterlidir.
- İptal işlemi için müşteri hizmetlerini arayarak talepte bulunmanız ve iptal dilekçesini
göndermeniz gerekmektedir.
- source_sentence: Vodafone kampanyalarında veri kullanımı ve cezai şartlar
sentences:
- Yurtdışında geçerli olan tarifeler, yalnızca kurumsal müşterilere yöneliktir.
- Vodafone kampanyaları, kullanıcıların istedikleri kadar veri kullanmalarına izin
verir ve cezai şartlar uygulanmaz.
- Vodafone'un kampanyalarında, kullanıcıların veri paketleri kullanımı belirli limitler
dahilinde gerçekleşir ve kampanyadan yararlanma koşulları vardır.
- source_sentence: Alcatel One Touch POP 7 Tablet'in işletim sistemi nedir?
sentences:
- Yabancılar için sunulan Limitsiz Fiber Kampanyası, belirli hızlarda internet paketleri
sunmaktadır ve katılım için yabancı uyruklu olma şartı aranmaktadır.
- Alcatel One Touch POP 7 Tablet, iOS işletim sistemi ile çalışan bir cihazdır.
- Alcatel One Touch POP 7 Tablet, Android 4.2 işletim sistemi ile çalışmaktadır.
- source_sentence: Vodafone Net'in internet hız garantisi var mı?
sentences:
- Ek data paketlerinin geçerlilik süreleri genellikle 30 gün olarak belirlenmiştir,
ancak bazı paketler 7 gün geçerlilik süresine sahiptir.
- Vodafone Net, tüm abonelerine en az 100 Mbps hız garantisi vermektedir.
- Vodafone Net, internet hızını garanti etmemekte, bu hız abonenin hattının uygunluğuna
ve santrale olan mesafeye bağlı olarak değişiklik göstermektedir.
datasets:
- seroe/vodex-turkish-triplets
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: BGE-M3 Türkçe Triplet Matryoshka
results:
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 1024d
type: tr-triplet-dev-1024d
metrics:
- type: cosine_accuracy
value: 0.6087287664413452
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9566100239753723
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 768d
type: tr-triplet-dev-768d
metrics:
- type: cosine_accuracy
value: 0.61735600233078
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.95711749792099
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 512d
type: tr-triplet-dev-512d
metrics:
- type: cosine_accuracy
value: 0.6302968859672546
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9588936567306519
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 256d
type: tr-triplet-dev-256d
metrics:
- type: cosine_accuracy
value: 0.6016239523887634
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9604161381721497
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 1024d
type: all-nli-test-1024d
metrics:
- type: cosine_accuracy
value: 0.9507864117622375
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 768d
type: all-nli-test-768d
metrics:
- type: cosine_accuracy
value: 0.9533231854438782
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 512d
type: all-nli-test-512d
metrics:
- type: cosine_accuracy
value: 0.9545915722846985
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 256d
type: all-nli-test-256d
metrics:
- type: cosine_accuracy
value: 0.9545915722846985
name: Cosine Accuracy
---
# BGE-M3 Türkçe Triplet Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on the [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seroe/bge-m3-turkish-triplet-matryoshka")
# Run inference
sentences = [
"Vodafone Net'in internet hız garantisi var mı?",
'Vodafone Net, internet hızını garanti etmemekte, bu hız abonenin hattının uygunluğuna ve santrale olan mesafeye bağlı olarak değişiklik göstermektedir.',
'Vodafone Net, tüm abonelerine en az 100 Mbps hız garantisi vermektedir.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
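Because the model was trained with a Matryoshka loss at 1024/768/512/256 dimensions, the embeddings can be truncated for cheaper storage and search. A minimal sketch, assuming the `truncate_dim` argument available in recent sentence-transformers releases:
```python
from sentence_transformers import SentenceTransformer

# Load the same model but truncate all output embeddings to 256 dimensions.
model_256 = SentenceTransformer("seroe/bge-m3-turkish-triplet-matryoshka", truncate_dim=256)
embeddings = model_256.encode(["Vodafone Net'in internet hız garantisi var mı?"])
print(embeddings.shape)  # (1, 256)
```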
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `tr-triplet-dev-1024d` and `all-nli-test-1024d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | tr-triplet-dev-1024d | all-nli-test-1024d |
|:--------------------|:---------------------|:-------------------|
| **cosine_accuracy** | **0.6087** | **0.9508** |
#### Triplet
* Datasets: `tr-triplet-dev-768d` and `all-nli-test-768d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | tr-triplet-dev-768d | all-nli-test-768d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.6174** | **0.9533** |
#### Triplet
* Datasets: `tr-triplet-dev-512d` and `all-nli-test-512d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | tr-triplet-dev-512d | all-nli-test-512d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.6303** | **0.9546** |
#### Triplet
* Datasets: `tr-triplet-dev-256d` and `all-nli-test-256d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | tr-triplet-dev-256d | all-nli-test-256d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.6016** | **0.9546** |
#### Triplet
* Dataset: `tr-triplet-dev-1024d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9566** |
#### Triplet
* Dataset: `tr-triplet-dev-768d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9571** |
#### Triplet
* Dataset: `tr-triplet-dev-512d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9589** |
#### Triplet
* Dataset: `tr-triplet-dev-256d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9604** |
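These accuracies can be reproduced with the `TripletEvaluator` referenced above. A minimal sketch, assuming the dataset exposes `query`, `positive`, and `negative` columns as the statistics in this card indicate:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("seroe/bge-m3-turkish-triplet-matryoshka")
eval_ds = load_dataset("seroe/vodex-turkish-triplets", split="train").select(range(1000))

evaluator = TripletEvaluator(
    anchors=eval_ds["query"],
    positives=eval_ds["positive"],
    negatives=eval_ds["negative"],
    truncate_dim=256,  # evaluate at one of the Matryoshka dimensions
)
print(evaluator(model))
```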
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-triplets
* Dataset: [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) at [0c9fab0](https://huggingface.co/datasets/seroe/vodex-turkish-triplets/tree/0c9fab08a042b11b30064b5adc205f626c8a6add)
* Size: 70,941 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 13.58 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 26.32 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.54 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Kampanya tarihleri ve katılım şartları</code> | <code>Kampanya, 11 Ekim 2018'de başlayıp 29 Ekim 2018'de sona erecek. Katılımcıların belirli bilgileri doldurması ve Vodafone Müzik pass veya Video pass sahibi olmaları gerekiyor.</code> | <code>Kampanya, sadece İstanbul'daki kullanıcılar için geçerli olup, diğer şehirlerden katılım mümkün değildir.</code> |
| <code>Taahhüt süresi dolmadan başka bir kampanyaya geçiş yapılırsa ne olur?</code> | <code>Eğer abone taahhüt süresi dolmadan başka bir kampanyaya geçerse, bu durumda önceki kampanya süresince sağlanan indirimler ve diğer faydalar, iptal tarihinden sonraki fatura ile tahsil edilecektir.</code> | <code>Aboneler, taahhüt süresi dolmadan başka bir kampanyaya geçtiklerinde, yeni kampanyadan faydalanmak için ek bir ücret ödemek zorundadırlar.</code> |
| <code>FreeZone üyeliğimi nasıl sorgulayabilirim?</code> | <code>Üyeliğinizi sorgulamak için FREEZONESORGU yazarak 1525'e SMS gönderebilirsiniz.</code> | <code>Üyeliğinizi sorgulamak için Vodafone mağazasına gitmeniz gerekmektedir.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CachedMultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
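The loss configuration above can be reconstructed in code. A minimal sketch using the standard sentence-transformers loss classes:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import (
    CachedMultipleNegativesRankingLoss,
    MatryoshkaLoss,
)

model = SentenceTransformer("BAAI/bge-m3")
# Wrap the ranking loss so it is applied at every Matryoshka dimension.
inner_loss = CachedMultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 256],
    matryoshka_weights=[1, 1, 1, 1],
)
```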
### Evaluation Dataset
#### vodex-turkish-triplets
* Dataset: [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) at [0c9fab0](https://huggingface.co/datasets/seroe/vodex-turkish-triplets/tree/0c9fab08a042b11b30064b5adc205f626c8a6add)
* Size: 3,941 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 13.26 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 26.55 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 20.4 tokens</li><li>max: 40 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:-----------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|
| <code>Vodafone Net'e geçiş yaparken bağlantı ücreti var mı?</code> | <code>Vodafone Net'e geçişte 264 TL bağlantı ücreti bulunmaktadır ve bu ücret 24 ay boyunca aylık 11 TL olarak faturalandırılmaktadır.</code> | <code>Vodafone Net'e geçişte bağlantı ücreti yoktur ve tüm işlemler ücretsizdir.</code> |
| <code>Bağımsız akıllı cihaz kampanyalarının detayları nelerdir?</code> | <code>Kampanyalar, farklı cihaz modelleri için aylık ödeme planları sunmaktadır.</code> | <code>Vodafone'un kampanyaları, sadece internet paketleri ile ilgilidir.</code> |
| <code>Fibermax hizmeti iptal edilirse ne gibi sonuçlar doğar?</code> | <code>İptal işlemi taahhüt süresi bitmeden yapılırsa, indirimler ve ücretsiz hizmet bedelleri ödenmelidir.</code> | <code>Fibermax hizmeti iptal edildiğinde, kullanıcıdan hiçbir ücret talep edilmez.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CachedMultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 256
- `weight_decay`: 0.01
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | tr-triplet-dev-1024d_cosine_accuracy | tr-triplet-dev-768d_cosine_accuracy | tr-triplet-dev-512d_cosine_accuracy | tr-triplet-dev-256d_cosine_accuracy | all-nli-test-1024d_cosine_accuracy | all-nli-test-768d_cosine_accuracy | all-nli-test-512d_cosine_accuracy | all-nli-test-256d_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:------------------------------------:|:-----------------------------------:|:-----------------------------------:|:-----------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:|
| -1 | -1 | - | - | 0.6087 | 0.6174 | 0.6303 | 0.6016 | - | - | - | - |
| 0.3429 | 12 | 10.677 | 3.4988 | 0.8764 | 0.8807 | 0.8876 | 0.8950 | - | - | - | - |
| 0.6857 | 24 | 6.5947 | 2.7219 | 0.9345 | 0.9353 | 0.9411 | 0.9419 | - | - | - | - |
| 1.0286 | 36 | 5.777 | 2.4641 | 0.9584 | 0.9579 | 0.9602 | 0.9617 | - | - | - | - |
| 1.3714 | 48 | 5.3727 | 2.5269 | 0.9531 | 0.9543 | 0.9576 | 0.9546 | - | - | - | - |
| 1.7143 | 60 | 5.1485 | 2.4440 | 0.9566 | 0.9571 | 0.9589 | 0.9604 | - | - | - | - |
| -1 | -1 | - | - | - | - | - | - | 0.9508 | 0.9533 | 0.9546 | 0.9546 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet_v1
|
seroe
| 2025-06-19T09:03:22Z | 386 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"cross-encoder",
"generated_from_trainer",
"dataset_size:89964",
"loss:CachedMultipleNegativesRankingLoss",
"text-ranking",
"custom_code",
"tr",
"dataset:seroe/vodex-turkish-reranker-triplets",
"arxiv:1908.10084",
"base_model:jinaai/jina-reranker-v2-base-multilingual",
"base_model:finetune:jinaai/jina-reranker-v2-base-multilingual",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-ranking
| 2025-05-13T16:37:10Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:89964
- loss:CachedMultipleNegativesRankingLoss
base_model: jinaai/jina-reranker-v2-base-multilingual
datasets:
- seroe/vodex-turkish-reranker-triplets
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: jinaai/jina-reranker-v2-base-multilingual
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: val hard
type: val-hard
metrics:
- type: map
value: 0.6456
name: Map
- type: mrr@10
value: 0.6516
name: Mrr@10
- type: ndcg@10
value: 0.7332
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: test hard
type: test-hard
metrics:
- type: map
value: 0.6395
name: Map
- type: mrr@10
value: 0.6463
name: Mrr@10
- type: ndcg@10
value: 0.729
name: Ndcg@10
---
# jinaai/jina-reranker-v2-base-multilingual
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) on the [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) <!-- at revision eed787badf7784e1a25c0eaa428627c8cbef511e -->
- **Maximum Sequence Length:** 1024 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet_v1")
# Get scores for pairs of texts
pairs = [
['Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.'],
['Kampanya süresince internet hızı nasıl değişebilir?', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.'],
["Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?", "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir."],
['Taahhüt süresi dolmadan internet hizmeti iptal edilirse ne olur?', 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.'],
['Aylık 15 GB ek paketini nereden satın alabilirim?', 'Bu ek paketi almak için hangi kanalları kullanabilirim?'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?',
[
'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.',
'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.',
"Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.",
'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.',
'Bu ek paketi almak için hangi kanalları kullanabilirim?',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `val-hard` and `test-hard`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | val-hard | test-hard |
|:------------|:---------------------|:---------------------|
| map | 0.6456 (+0.0321) | 0.6395 (+0.0140) |
| mrr@10 | 0.6516 (+0.0380) | 0.6463 (+0.0208) |
| **ndcg@10** | **0.7332 (+0.1185)** | **0.7290 (+0.1018)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-reranker-triplets
* Dataset: [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) at [ca7d206](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets/tree/ca7d2063ad4fec15fbf739835ab6926e051950c0)
* Size: 89,964 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 characters</li><li>mean: 57.83 characters</li><li>max: 112 characters</li></ul> | <ul><li>min: 35 characters</li><li>mean: 92.19 characters</li><li>max: 221 characters</li></ul> | <ul><li>min: 31 characters</li><li>mean: 78.41 characters</li><li>max: 143 characters</li></ul> |
* Samples:
| query | positive | negative |
|:-------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| <code>Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?</code> | <code>Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.</code> | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri 10 kuruş olarak uygulanmaktadır.</code> |
| <code>Kampanya süresince internet hızı nasıl değişebilir?</code> | <code>Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.</code> | <code>Kampanya süresince internet hızı sabit kalır ve değişiklik yapılamaz.</code> |
| <code>Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?</code> | <code>Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.</code> | <code>Vodafone tarifelerinde KDV ve ÖİV, abonelerin talep etmesi durumunda eklenmektedir.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 4,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 32
}
```
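This configuration corresponds to the following construction; a minimal sketch, assuming the cross-encoder loss API of sentence-transformers 4.x:
```python
import torch.nn as nn
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

model = CrossEncoder("jinaai/jina-reranker-v2-base-multilingual", trust_remote_code=True)
loss = CachedMultipleNegativesRankingLoss(
    model,
    num_negatives=4,
    scale=10.0,
    activation_fn=nn.Sigmoid(),
    mini_batch_size=32,  # process the cached batch in chunks of 32 pairs
)
```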
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `warmup_ratio`: 0.2
- `bf16`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `group_by_length`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: True
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 |
|:----------:|:-------:|:-------------:|:--------------------:|:--------------------:|
| 0.5682 | 100 | 0.8068 | 0.7337 (+0.1191) | 0.7303 (+0.1031) |
| 1.1307 | 200 | 0.7885 | 0.7335 (+0.1189) | 0.7303 (+0.1032) |
| 1.6989 | 300 | 0.7881 | 0.7333 (+0.1187) | 0.7294 (+0.1022) |
| 2.2614 | 400 | 0.7881 | 0.7335 (+0.1189) | 0.7298 (+0.1027) |
| **2.8295** | **500** | **0.7851** | **0.7332 (+0.1185)** | **0.7290 (+0.1018)** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
AmberYifan/llama3-8b-full-pretrain-mix-mid-tweet-1m-en
|
AmberYifan
| 2025-06-19T09:02:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T06:17:42Z |
---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-8b-full-pretrain-mix-mid-tweet-1m-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-full-pretrain-mix-mid-tweet-1m-en
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the mix_mid_tweet_1m_en dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|