| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-28 00:48:09 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 534 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-28 00:47:12 |
| card | string | lengths 11 to 1.01M |
QuantFactory/Foundation-Sec-8B-GGUF | QuantFactory | 2025-06-18T14:58:12Z | 0 | 1 | transformers | ["transformers", "gguf", "security", "text-generation", "en", "arxiv:2504.21039", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:apache-2.0", "endpoints_compatible", "region:us"] | text-generation | 2025-06-15T11:31:03Z |
---
base_model:
- meta-llama/Llama-3.1-8B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- security
---
[QuantFactory](https://hf.co/QuantFactory)
# QuantFactory/Foundation-Sec-8B-GGUF
This is a quantized version of [fdtn-ai/Foundation-Sec-8B](https://huggingface.co/fdtn-ai/Foundation-Sec-8B), created using llama.cpp.
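These GGUF files can be loaded with llama.cpp or its bindings. A minimal sketch, assuming the `llama-cpp-python` package and a locally downloaded quant file (the file name below is a placeholder; pick one from this repo's file list):
```python
from llama_cpp import Llama

# Load a downloaded GGUF quant (path and file name are placeholders)
llm = Llama(model_path="Foundation-Sec-8B.Q4_K_M.gguf", n_ctx=4096)

out = llm("CVE-2021-44228 is", max_tokens=32)
print(out["choices"][0]["text"])
```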
# Original Model Card
# Foundation-Sec-8B - Model Card
## Model Information
Foundation-Sec-8B (Llama-3.1-FoundationAI-SecurityLLM-base-8B) is an open-weight, 8-billion-parameter base language model specialized for cybersecurity applications. It extends the Llama-3.1-8B model through continued pretraining on a curated corpus of cybersecurity-specific text, including threat intelligence reports, vulnerability databases, incident response documentation, and security standards. It has been trained to understand security concepts, terminology, and practices across multiple security domains. The model is designed to serve as a domain-adapted base model for use in applications such as threat detection, vulnerability assessment, security automation, and attack simulation. Foundation-Sec-8B enables organizations to build AI-driven security tools that can be deployed locally, reducing dependency on cloud-based AI services while maintaining high performance on security-related tasks.
- **Model Name:** Foundation-Sec-8B (Llama-3.1-FoundationAI-SecurityLLM-base-8B)
- **Model Developer:** Amin Karbasi and team at Foundation AI — Cisco
- **Technical Report:** [`https://arxiv.org/abs/2504.21039`](https://arxiv.org/abs/2504.21039)
- **Model Card Contact:** For questions about the team, model usage, and future directions, contact [`[email protected]`](mailto:[email protected]). For technical questions about the model, please contact [`[email protected]`](mailto:[email protected]).
- **Model Release Date:** April 28, 2025
- **Supported Language(s):** English
- **Model Architecture:** Auto-regressive language model that uses an optimized transformer architecture (Meta Llama-3.1-8B backbone)
- **Training Objective:** Continued pre-training on cybersecurity-specific corpus
- **Training Data Status:** This is a static model trained on an offline dataset. Future versions of the model will be trained on updated data.
- **License:** Apache 2.0
## Intended Use
### Intended Use Cases
Foundation-Sec-8B is designed for security practitioners, researchers, and developers building AI-powered security workflows and applications. Foundation-Sec-8B is optimized for three core use case categories:
- **SOC Acceleration**: Automating triage, summarization, case note generation, and evidence collection.
- **Proactive Threat Defense**: Simulating attacks, prioritizing vulnerabilities, mapping TTPs, and modeling attacker behavior.
- **Engineering Enablement**: Providing security assistance, validating configurations, assessing compliance evidence, and improving security posture.
The model is intended for local deployment in environments prioritizing data security, regulatory compliance, and operational control.
### Downstream Use
Foundation-Sec-8B can be used directly for security-related language tasks and serves as a strong starting point for fine-tuning across a variety of cybersecurity workflows. Example downstream applications include:
- Summarization
- Summarizing detection playbooks and incident reports
- Consolidating fragmented analyst notes into structured case summaries
- Classification
- Mapping threats to MITRE ATT&CK techniques
- Prioritizing vulnerabilities based on contextual risk
- Classifying security-relevant emails and leaked file contents
- Named Entity Recognition
- Extracting compliance evidence from documents
- Building network behavior profiles from technical manuals
- Question & Answer
- Assisting SOC analysts with alert triage and investigation
- Responding to cloud security and software compliance queries
- Reasoning and Text Generation
- Generating red-team attack plans and threat models
- Predicting attacker next steps in active investigations
- Enriching vulnerability scan results with contextual insights
For questions or assistance with fine-tuning Foundation-Sec-8B, please contact **Paul Kassianik** ([email protected]) or **Dhruv Kedia** ([email protected]).
### Out-of-Scope Use
The following uses are out-of-scope and are neither recommended nor intended use cases:
1. **Generating harmful content** - The model should not be used to:
- Generate malware or other malicious code
- Create phishing content or social engineering scripts
- Develop attack plans targeting specific organizations
- Design exploitation techniques for vulnerabilities without legitimate security research purposes
2. **Critical security decisions without human oversight** - The model should not be used for:
- Autonomous security decision-making without human review
- Critical infrastructure protection without expert supervision
- Final determination of security compliance without human verification
- Autonomous vulnerability remediation without testing
3. **Legal or medical advice** - The model is not qualified to provide:
- Legal advice regarding security regulations, compliance requirements, or intellectual property disputes
- Legal advice regarding security issues that would reference legal statutes, precedents, or case law necessary to provide legal advice
- Medical advice regarding health impacts of security incidents
4. **Non-security use cases** - The model is specifically optimized for cybersecurity and may not perform as well on general tasks as models trained for broader applications.
5. **Violation of Laws or Regulations** - Any use that violates applicable laws or regulations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Import the required libraries
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("fdtn-ai/Foundation-Sec-8B")
model = AutoModelForCausalLM.from_pretrained("fdtn-ai/Foundation-Sec-8B")
# Example: Matching CWE to CVE IDs
prompt="""CVE-2021-44228 is a remote code execution flaw in Apache Log4j2 via unsafe JNDI lookups (“Log4Shell”). The CWE is CWE-502.
CVE-2017-0144 is a remote code execution vulnerability in Microsoft’s SMBv1 server (“EternalBlue”) due to a buffer overflow. The CWE is CWE-119.
CVE-2014-0160 is an information-disclosure bug in OpenSSL’s heartbeat extension (“Heartbleed”) causing out-of-bounds reads. The CWE is CWE-125.
CVE-2017-5638 is a remote code execution issue in Apache Struts 2’s Jakarta Multipart parser stemming from improper input validation of the Content-Type header. The CWE is CWE-20.
CVE-2019-0708 is a remote code execution vulnerability in Microsoft’s Remote Desktop Services (“BlueKeep”) triggered by a use-after-free. The CWE is CWE-416.
CVE-2015-10011 is a vulnerability about OpenDNS OpenResolve improper log output neutralization. The CWE is"""
# Tokenize the input
inputs = tokenizer(prompt, return_tensors="pt")
# Generate the response
outputs = model.generate(
inputs["input_ids"],
max_new_tokens=3,
do_sample=True,
temperature=0.1,
top_p=0.9,
)
# Decode and print the response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
response = response.replace(prompt, "").strip()
print(response)
```
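Equivalently, the same few-shot prompt can be run through the `pipeline` API; a minimal sketch, reusing the `prompt` string defined above:
```python
from transformers import pipeline

# Build a text-generation pipeline from the same checkpoint
generator = pipeline("text-generation", model="fdtn-ai/Foundation-Sec-8B")

# return_full_text=False strips the prompt from the returned completion
result = generator(prompt, max_new_tokens=3, do_sample=True, temperature=0.1,
                   return_full_text=False)
print(result[0]["generated_text"])
```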
## Training and Evaluation
### Training Data
Foundation-Sec-8B was pretrained on approximately **5.1 billion tokens** of cybersecurity-specific data curated in-house by Cisco’s Foundation AI team. The dataset was meticulously collected from public sources on the web.
The pre-training corpus was built through a multi-stage pipeline that included large-scale web crawling, relevancy filtering, deduplication, and quality filtering.
**Data cutoff:** April 10th, 2025.
More detailed methodology is available in the technical report.
### Training Setup
Foundation-Sec-8B is based on the **Llama 3.1 8B** architecture. Pre-training was performed on Cisco Foundation AI’s internal compute cluster.
Key training details:
- **Continued pretraining** for cybersecurity specialization
- **4096-token** sequence length
- **Optimizer:** AdamW
More detailed methodology is available in the technical report.
### Evaluation
Foundation-Sec-8B was benchmarked on cybersecurity and general reasoning tasks, using a standardized 5-shot prompting setup (temperature = 0.3).
| **Benchmark** | **Foundation-sec-8B** | **Llama 3.1 8B** | **Llama 3.1 70B** |
| --- | --- | --- | --- |
| CTI-MCQA | 67.39 | 64.14 | 68.23 |
| CTI-RCM | 75.26 | 66.43 | 72.66 |
**Benchmark Overview:**
- **CTI-MCQA:** 2,500 multiple-choice questions testing cybersecurity knowledge across frameworks like MITRE ATT&CK, NIST, GDPR, and threat intelligence best practices.
- **CTI-RCM:** 900+ vulnerability root cause mapping examples linking CVEs to CWE categories, assessing deep understanding of security weaknesses.
**Key highlights:**
- **+3 to +9 point gains** over Llama-3.1-8B across security-specific benchmarks.
- **Comparable or better** performance than Llama-3.1-70B on cyber threat intelligence tasks.
- **Minimal drop (~2%)** in general language reasoning (MMLU) despite cybersecurity specialization.
For full benchmark details and evaluation methodology, please refer to the technical report.
## Limitations
Foundation-Sec-8B has several limitations that users should be aware of:
1. **Domain-specific knowledge limitations**:
- Foundation-Sec-8B may not be familiar with recent vulnerabilities, exploits, novel attack vectors, or security technologies released after its training cutoff date
- Knowledge of specialized or proprietary security systems or tools may be limited
2. **Potential biases**:
- The model may reflect biases present in security literature and documentation
- The model may be trained on known attack patterns and have difficulty recognizing novel attack vectors
- Security practices and recommendations may be biased toward certain technological ecosystems
- Geographic and cultural biases in security approaches may be present
3. **Security risks**:
- The model cannot verify the identity or intentions of users
- Adversarial prompting techniques might potentially bypass safety mechanisms
- The model may unintentionally provide information that could be misused if proper prompting guardrails are not implemented
4. **Contextual blindness:**
- The model may struggle to understand the complex interrelationships between systems, users, and data in order to provide accurate context.
5. **Technical limitations**:
- Performance varies based on how security concepts are described in prompts
- May not fully understand complex, multi-step security scenarios without clear explanation
- Cannot access external systems or actively scan environments
- Cannot independently verify factual accuracy of its outputs
6. **Ethical considerations**:
- Dual-use nature of security knowledge requires careful consideration of appropriate use cases
### Recommendations
To address the limitations of Foundation-Sec-8B, we recommend:
1. **Human oversight**:
- Always have qualified security professionals review model outputs before implementation
- Use the model as an assistive tool rather than a replacement for expert human judgment
- Implement a human-in-the-loop approach for security-critical applications
2. **System design safeguards**:
- Implement additional validation layers for applications built with this model
- Consider architectural constraints that limit the model's ability to perform potentially harmful actions (excessive agency)
- Deploy the model in environments with appropriate access controls
3. **Prompt engineering**:
- Use carefully designed prompts that encourage ethical security practices
- Include explicit instructions regarding responsible disclosure and ethical hacking principles
- Structure interactions to minimize the risk of inadvertently harmful outputs
4. **Knowledge supplementation**:
- Supplement the model with up-to-date security feeds and databases
- Implement retrieval-augmented generation for current threat intelligence sources (see the sketch after this list)
5. **Usage policies**:
- Develop and enforce clear acceptable use policies for applications using this model
- Implement monitoring and auditing for high-risk applications
- Create documentation for end users about the model's limitations
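As a concrete illustration of the knowledge-supplementation recommendation, here is a minimal retrieval-augmented prompting sketch; `search_threat_feed` is a hypothetical stand-in for whatever threat-intelligence feed or vector store you use:
```python
# Minimal RAG-style prompting sketch; `search_threat_feed` is a hypothetical
# retriever standing in for your threat-intelligence feed or vector store.
def build_prompt(question: str, search_threat_feed) -> str:
    snippets = search_threat_feed(question, top_k=3)  # fetch current intel snippets
    context = "\n".join(f"- {s}" for s in snippets)
    # Prepend retrieved context so the static model answers from fresh data
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```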
|
Bonnief/mbert-om-100k-finetuned | Bonnief | 2025-06-18T14:56:24Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2025-06-18T09:24:29Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mbert-om-100k-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-om-100k-finetuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.2348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 100000
- mixed_precision_training: Native AMP
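For reference, a minimal sketch (not the original training script) of `TrainingArguments` approximating the settings above:
```python
from transformers import TrainingArguments

# Approximation of the listed hyperparameters; output_dir is a placeholder
args = TrainingArguments(
    output_dir="mbert-om-100k-finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size 16
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=100_000,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```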
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
pictgensupport/vintagecameras | pictgensupport | 2025-06-18T14:56:06Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-18T14:56:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: vintagecameras
---
# Vintagecameras
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `vintagecameras` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach the vintagecameras LoRA weights
pipeline.load_lora_weights('pictgensupport/vintagecameras', weight_name='lora.safetensors')
# Include the trigger word `vintagecameras` in your prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
brunoyun/Llama-3.1-Amelia-AQA-8B-v1-GGUF | brunoyun | 2025-06-18T14:52:15Z | 0 | 0 | null | ["gguf", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "endpoints_compatible", "region:us", "conversational"] | null | 2025-06-17T11:59:51Z |
---
license: llama3.1
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
|
Mariogver/detr-finetuned-microglia | Mariogver | 2025-06-18T14:51:48Z | 11 | 0 | transformers | ["transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | object-detection | 2025-06-18T08:47:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NazarBai/mushroom-resnet50 | NazarBai | 2025-06-18T14:50:22Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-18T14:47:58Z |
---
license: apache-2.0
---
|
Soughing/mha_large | Soughing | 2025-06-18T14:45:38Z | 130 | 0 | null | ["pytorch", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-14T07:26:29Z |
---
license: apache-2.0
---
|
sgonzalezygil/sd-finetuning-dreambooth-v10-800 | sgonzalezygil | 2025-06-18T14:40:13Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2025-06-18T14:38:52Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sgonzalezygil/sd-finetuning-dreambooth-v10 | sgonzalezygil | 2025-06-18T14:35:34Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2025-06-18T14:34:08Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nicofarr/panns_Cnn10 | nicofarr | 2025-06-18T14:32:50Z | 0 | 0 | pytorch | ["pytorch", "safetensors", "Cnn10", "audio", "model_hub_mixin", "panns", "pytorch_model_hub_mixin", "tagging", "license:apache-2.0", "region:us"] | null | 2025-06-18T14:32:36Z |
---
library_name: pytorch
license: apache-2.0
tags:
- audio
- model_hub_mixin
- panns
- pytorch_model_hub_mixin
- tagging
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/qiuqiangkong/audioset_tagging_cnn
- Docs: https://github.com/qiuqiangkong/audioset_tagging_cnn
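A minimal loading sketch, assuming the `Cnn10` class from the linked repository is importable and exposes the mixin's `from_pretrained` (the import path below is an assumption):
```python
# Hypothetical import path; Cnn10 is defined in the linked audioset_tagging_cnn repo
from models import Cnn10

# PyTorchModelHubMixin adds from_pretrained(), which restores weights and config
model = Cnn10.from_pretrained("nicofarr/panns_Cnn10")
model.eval()
```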
|
omkar334/codegemma_tokenizer | omkar334 | 2025-06-18T14:31:07Z | 0 | 0 | transformers | ["transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-18T14:31:02Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omkar334/lora_model | omkar334 | 2025-06-18T14:25:36Z | 0 | 0 | transformers | ["transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-18T14:25:31Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
docato/PaddleOCR_Mobile_Models | docato | 2025-06-18T14:25:22Z | 0 | 0 | null | ["onnx", "en", "tr", "license:mit", "region:us"] | null | 2025-06-18T08:46:04Z |
---
license: mit
language:
- en
- tr
---
# PaddleOCR Mobile Quantized Models (ONNX)
## Overview
This repo hosts four **ONNX** models converted from PaddleOCR mobile checkpoints:
| File | Task | Language scope | Input shape |
|------|------|----------------|-------------|
| `Multilingual_PP-OCRv3_det_infer.onnx` | Text-detection | 80+ scripts | **NCHW • 1×3×H×W** |
| `PP-OCRv3_mobile_det_infer.onnx` | Text-detection | Latin only | 1×3×H×W |
| `ch_ppocr_mobile_v2.0_cls_infer.onnx` | Angle classifier | Chinese/Latin | 1×3×H×W |
| `latin_PP-OCRv3_mobile_rec_infer.onnx` | Text-recognition | Latin | 1×3×H×W |
All models were:
* exported with **paddle2onnx 1.2.3** (`opset 11`)
* simplified via **onnx-simplifier 0.4+**
## Quick Start
```python
import onnxruntime as ort
import numpy as np

# Dummy NCHW float32 input; replace with a preprocessed image tensor
img = np.random.rand(1, 3, 224, 224).astype("float32")

# One session per model: detection, angle classification, recognition
det = ort.InferenceSession("Multilingual_PP-OCRv3_det_infer.onnx")
cls = ort.InferenceSession("ch_ppocr_mobile_v2.0_cls_infer.onnx")
rec = ort.InferenceSession("latin_PP-OCRv3_mobile_rec_infer.onnx")

# Run detection, feeding the model's first input by name
det_out = det.run(None, {det.get_inputs()[0].name: img})[0]
# add your post-processing / cropping / decoding here …
```
|
RizkyAnanda/lora_model9 | RizkyAnanda | 2025-06-18T14:21:09Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-18T14:20:39Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RizkyAnanda
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF | mradermacher | 2025-06-18T14:20:18Z | 0 | 0 | transformers | ["transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:hafidhsoekma/Gasing-8B-alpha-v0.1-nearswap-base", "base_model:quantized:hafidhsoekma/Gasing-8B-alpha-v0.1-nearswap-base", "endpoints_compatible", "region:us"] | null | 2025-06-18T13:32:36Z |
---
base_model: hafidhsoekma/Gasing-8B-alpha-v0.1-nearswap-base
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/hafidhsoekma/Gasing-8B-alpha-v0.1-nearswap-base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
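A minimal loading sketch, assuming the `llama-cpp-python` bindings; the file name matches the Q4_K_M entry in the table below:
```python
from llama_cpp import Llama

# Download and load one quant directly from this repo
llm = Llama.from_pretrained(
    repo_id="mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF",
    filename="Gasing-8B-alpha-v0.1-nearswap-base.Q4_K_M.gguf",
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```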
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gasing-8B-alpha-v0.1-nearswap-base-GGUF/resolve/main/Gasing-8B-alpha-v0.1-nearswap-base.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
chutesai/MiniMax-M1-80k | chutesai | 2025-06-18T14:15:33Z | 0 | 0 | null | ["safetensors", "minimax_m1", "text-generation", "conversational", "custom_code", "arxiv:2506.13585", "license:apache-2.0", "region:us"] | text-generation | 2025-06-17T14:39:03Z |
---
pipeline_tag: text-generation
license: apache-2.0
---
<div align="center">
<!-- MiniMax wordmark (inline SVG) omitted -->
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.minimax.io" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://img.shields.io/badge/_Homepage-MiniMax-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://arxiv.org/abs/2506.13585" target="_blank" style="margin: 2px;">
<img alt="Paper" src="https://img.shields.io/badge/📖_Paper-MiniMax--M1-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.minimax.io/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/_MiniMax_Chat-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://www.minimax.io/platform" style="margin: 2px;">
<img alt="API" src="https://img.shields.io/badge/⚡_API-Platform-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-MCP" style="margin: 2px;">
<img alt="MCP" src="https://img.shields.io/badge/🚀_MCP-MiniMax_MCP-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/MiniMaxAI" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/🤗_Hugging_Face-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-M1" target="_blank" style="margin: 2px;">
<img alt="GitHub" src="https://img.shields.io/badge/🐙_GitHub-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://www.modelscope.cn/organization/MiniMax" target="_blank" style="margin: 2px;">
<img alt="ModelScope" src="https://img.shields.io/badge/🤖️_ModelScope-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-M1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/⚖️_License-Apache_2.0-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/figures/wechat-qrcode.jpeg" target="_blank" style="margin: 2px;">
<img alt="WeChat" src="https://img.shields.io/badge/💬_WeChat-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
# MiniMax-M1
## 1. Model Overview
We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.
MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning
attention mechanism. The model is developed based on our previous [MiniMax-Text-01 model](https://huggingface.co/MiniMaxAI/MiniMax-Text-01),
which contains a total of 456 billion parameters with 45.9 billion parameters activated
per token. Consistent with MiniMax-Text-01, the M1 model natively supports a context length of 1
million tokens, 8x the context size of DeepSeek R1. Furthermore, the lightning attention mechanism
in MiniMax-M1 enables efficient scaling of test-time compute: for example, compared to DeepSeek
R1, M1 consumes 25% of the FLOPs at a generation length of 100K tokens. These properties make M1
particularly suitable for complex tasks that require processing long inputs and thinking extensively.
MiniMax-M1 is trained using large-scale reinforcement learning (RL) on diverse problems ranging from
traditional mathematical reasoning to sandbox-based, real-world software engineering environments.
We develop an efficient RL scaling framework for M1 highlighting two perspectives: (1) We propose
CISPO, a novel algorithm that clips importance sampling weights instead of token updates, which
outperforms other competitive RL variants; (2) Our hybrid-attention design naturally enhances the
efficiency of RL, where we address unique challenges when scaling RL with the hybrid architecture. We
train two versions of MiniMax-M1 models with [40K](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k) and
[80K](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k) thinking budgets respectively. Experiments
on standard benchmarks show that our models outperform other strong open-weight models such as
the original DeepSeek-R1 and Qwen3-235B, particularly on complex software engineering, tool use,
and long context tasks. With efficient scaling of test-time compute, MiniMax-M1 serves as a strong
foundation for next-generation language model agents to reason and tackle real-world challenges.
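To make the CISPO idea concrete, here is a schematic sketch (our own illustration based on the description above, not the released training code; clipping bounds, normalization, and the reduction are illustrative):

```python
import torch

def cispo_loss(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.2):
    """Schematic CISPO surrogate: clip the importance-sampling *weight*
    (and stop its gradient) instead of clipping the token update itself."""
    ratio = torch.exp(logp_new - logp_old)                         # token-level IS ratios
    weight = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high).detach()
    return -(weight * advantages * logp_new).mean()                # REINFORCE-style objective
```

Because the clipped weight is detached, a token whose ratio falls outside the clipping range still receives a bounded gradient, rather than being dropped from the update as in PPO-style token clipping.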
<p align="center">
<img width="100%" src="figures/TextBench.png">
<br>
<small><em>Benchmark performance comparison of leading commercial and open-weight models across competition-level mathematics, coding, software engineering, agentic tool use, and long-context understanding tasks. We use the MiniMax-M1-80k model here for MiniMax-M1.</em></small>
</p>
## 2. Evaluation
**Performance of MiniMax-M1 on core benchmarks.**
| **Category** | **Task** | **MiniMax-M1-80K** | **MiniMax-M1-40K** | **Qwen3-235B-A22B** | **DeepSeek-R1-0528** | **DeepSeek-R1** | **Seed-Thinking-v1.5** | **Claude 4 Opus** | **Gemini 2.5 Pro (06-05)** | **OpenAI-o3** |
|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | *Extended Thinking* | *80K* | *40K* | *32K* | *64K* | *32K* | *32K* | *64K* | *64K* | *100K* |
| ***Mathematics*** | AIME 2024 | 86.0 | 83.3 | 85.7 | 91.4 | 79.8 | 86.7 | 76.0 | 92.0 | 91.6 |
| | AIME 2025 | 76.9 | 74.6 | 81.5 | 87.5 | 70.0 | 74.0 | 75.5 | 88.0 | 88.9 |
| | MATH-500 | 96.8 | 96.0 | 96.2 | 98.0 | 97.3 | 96.7 | 98.2 | 98.8 | 98.1 |
| ***General Coding*** | LiveCodeBench *(24/8~25/5)* | 65.0 | 62.3 | 65.9 | 73.1 | 55.9 | 67.5 | 56.6 | 77.1 | 75.8 |
| | FullStackBench | 68.3 | 67.6 | 62.9 | 69.4 | 70.1 | 69.9 | 70.3 | -- | 69.3 |
| ***Reasoning & Knowledge***| GPQA Diamond | 70.0 | 69.2 | 71.1 | 81.0 | 71.5 | 77.3 | 79.6 | 86.4 | 83.3 |
| | HLE *(no tools)* | 8.4\* | 7.2\* | 7.6\* | 17.7\* | 8.6\* | 8.2 | 10.7 | 21.6 | 20.3 |
| | ZebraLogic | 86.8 | 80.1 | 80.3 | 95.1 | 78.7 | 84.4 | 95.1 | 91.6 | 95.8 |
| | MMLU-Pro | 81.1 | 80.6 | 83.0 | 85.0 | 84.0 | 87.0 | 85.0 | 86.0 | 85.0 |
| ***Software Engineering***| SWE-bench Verified| 56.0 | 55.6 | 34.4 | 57.6 | 49.2 | 47.0 | 72.5 | 67.2 | 69.1 |
| ***Long Context*** | OpenAI-MRCR *(128k)* | 73.4 | 76.1 | 27.7 | 51.5 | 35.8 | 54.3 | 48.9 | 76.8 | 56.5 |
| | OpenAI-MRCR *(1M)* | 56.2 | 58.6 | -- | -- | -- | -- | -- | 58.8 | -- |
| | LongBench-v2 | 61.5 | 61.0 | 50.1 | 52.1 | 58.3 | 52.5 | 55.6 | 65.0 | 58.8 |
| ***Agentic Tool Use***| TAU-bench *(airline)* | 62.0 | 60.0 | 34.7 | 53.5 | -- | 44.0 | 59.6 | 50.0 | 52.0 |
| | TAU-bench *(retail)* | 63.5 | 67.8 | 58.6 | 63.9 | -- | 55.7 | 81.4 | 67.0 | 73.9 |
| ***Factuality*** | SimpleQA | 18.5 | 17.9 | 11.0 | 27.8 | 30.1 | 12.9 | -- | 54.0 | 49.4 |
| ***General Assistant***| MultiChallenge | 44.7 | 44.7 | 40.0 | 45.0 | 40.7 | 43.0 | 45.8 | 51.8 | 56.5 |
\* conducted on the text-only HLE subset.
Our models are evaluated with `temperature=1.0`, `top_p=0.95`.
### SWE-bench methodology
We report results derived from the Agentless scaffold. Departing from the original pipeline, our methodology employs a two-stage localization process (without any embedding-based retrieval mechanisms): initial coarse-grained file localization followed by fine-grained localization to specific files and code elements. The values for our models are calculated on the subset of n=486 verified tasks that work on our infrastructure. The 14 test cases excluded as incompatible with our internal infrastructure are:
`"astropy__astropy-7606"`,
`"astropy__astropy-8707"`,
`"astropy__astropy-8872"`,
`"django__django-10097"`,
`"matplotlib__matplotlib-20488"`,
`"psf__requests-2317"`,
`"psf__requests-2931"`,
`"psf__requests-5414"`,
`"pylint-dev__pylint-6528"`,
`"pylint-dev__pylint-7277"`,
`"sphinx-doc__sphinx-10435"`,
`"sphinx-doc__sphinx-7985"`,
`"sphinx-doc__sphinx-8269"`,
`"sphinx-doc__sphinx-8475"`
### TAU-bench methodology
We evaluate TAU-bench with GPT-4.1 as the user model and without any custom tools. The maximum number of interaction steps is 40.
Our general system prompt is:
```
- In each round, you need to carefully examine the tools provided to you to determine if any can be used.
- You must adhere to all of the policies. Pay attention to the details in the terms. Solutions for most situations can be found within these policies.
```
## 3. Deployment Guide
Download the model from the Hugging Face repositories:
- [MiniMax-M1-40k](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k)
- [MiniMax-M1-80k](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k)
For production deployment, we recommend using [vLLM](https://docs.vllm.ai/en/latest/) to serve MiniMax-M1. vLLM provides excellent performance for serving large language models with the following features:
- 🔥 Outstanding serving throughput
- ⚡ Efficient and intelligent memory management
- 📦 Powerful batch request processing capability
- ⚙️ Deeply optimized underlying performance
For detailed vLLM deployment instructions, please refer to our [vLLM Deployment Guide](./docs/vllm_deployment_guide.md).
Alternatively, you can also deploy using Transformers directly. For detailed Transformers deployment instructions, you can see our [MiniMax-M1 Transformers Deployment Guide](./docs/transformers_deployment_guide.md).
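As a rough illustration (a minimal sketch assuming an OpenAI-compatible vLLM endpoint on the default local port; the model name and server flags are examples, and the linked guide remains authoritative), querying a served MiniMax-M1 could look like:

```python
from openai import OpenAI

# Assumes a vLLM server is already running locally, started with something like:
#   vllm serve MiniMaxAI/MiniMax-M1-40k --trust-remote-code
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M1-40k",
    messages=[{"role": "user", "content": "Summarize the CISPO algorithm in two sentences."}],
    temperature=1.0,  # the sampling settings used for the evaluations above
    top_p=0.95,
)
print(resp.choices[0].message.content)
```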
## 4. Function Calling
The MiniMax-M1 model supports function calling capabilities, enabling the model to identify when external functions need to be called and output function call parameters in a structured format. [MiniMax-M1 Function Call Guide](./docs/function_call_guide.md) provides detailed instructions on how to use the function calling feature of MiniMax-M1.
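To give a sense of the request shape (a hedged sketch using the OpenAI-compatible interface; the tool schema here is hypothetical, and the linked guide defines the model's actual structured function-call format):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# A hypothetical tool schema, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M1-40k",
    messages=[{"role": "user", "content": "What's the weather in Shanghai?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # structured call(s) the model chose to emit
```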
## 5. Chatbot & API
For general use and evaluation, we provide a [Chatbot](https://chat.minimax.io/) with online search capabilities and an [online API](https://www.minimax.io/platform/) for developers. We also provide the [MiniMax MCP Server](https://github.com/MiniMax-AI/MiniMax-MCP), which offers video generation, image generation, speech synthesis, and voice cloning for developers.
## 6. Contact Us
Contact us at [[email protected]](mailto:[email protected]).
|
gradientdegen/task-10-Qwen-Qwen2.5-3B-Instruct
|
gradientdegen
| 2025-06-18T14:06:13Z | 147 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-06-10T20:49:23Z |
---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
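In the absence of an official snippet, a minimal sketch (assuming the PEFT adapter loads on top of the base model declared in the metadata above; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3.5-mini-instruct"  # base model from the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "gradientdegen/task-10-Qwen-Qwen2.5-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```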
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF
|
mradermacher
| 2025-06-18T14:04:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:r1char9/Oblivion2.5-1.5B-Instruct-v2",
"base_model:quantized:r1char9/Oblivion2.5-1.5B-Instruct-v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T13:53:49Z |
---
base_model: r1char9/Oblivion2.5-1.5B-Instruct-v2
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/r1char9/Oblivion2.5-1.5B-Instruct-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
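For a quick Python route, here is one possible sketch (assuming a recent llama-cpp-python build with Hub download support; the chosen quant file is just one of the options listed below):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads a single quant file from this repo via the Hugging Face Hub.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF",
    filename="Oblivion2.5-1.5B-Instruct-v2.Q4_K_M.gguf",
)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```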
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Oblivion2.5-1.5B-Instruct-v2-GGUF/resolve/main/Oblivion2.5-1.5B-Instruct-v2.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
moazharu/appu-qwen-4b-sft-20250618_093419
|
moazharu
| 2025-06-18T14:04:42Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T09:36:01Z |
---
base_model: Qwen/Qwen3-4B
library_name: transformers
model_name: appu-qwen-4b-sft-20250618_093419
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for appu-qwen-4b-sft-20250618_093419
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="moazharu/appu-qwen-4b-sft-20250618_093419", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.5.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bruhzair/prototype-0.4x159
|
bruhzair
| 2025-06-18T14:00:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T13:06:41Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x159
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/prototype-0.4x153 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
* /workspace/cache/models--Delta-Vector--Austral-70B-Preview/snapshots/bf62fe4ffd7e460dfa3bb881913bdfbd9dd14002
* /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
* /workspace/cache/models--ReadyArt--Forgotten-Safeword-70B-v5.0/snapshots/ac2650005a6fdef7f4cd62590dcb665155349a5b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--ReadyArt--Forgotten-Safeword-70B-v5.0/snapshots/ac2650005a6fdef7f4cd62590dcb665155349a5b
- model: /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
- model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- model: /workspace/cache/models--Delta-Vector--Austral-70B-Preview/snapshots/bf62fe4ffd7e460dfa3bb881913bdfbd9dd14002
- model: /workspace/prototype-0.4x153
base_model: /workspace/prototype-0.4x153
select_topk: 0.15
merge_method: sce
tokenizer:
source: base
pad_to_multiple_of: 8
int8_mask: true
dtype: bfloat16
```
|
mradermacher/nemo-chatbot-v2-GGUF
|
mradermacher
| 2025-06-18T13:59:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:chaerheeon/nemo-chatbot-v2",
"base_model:quantized:chaerheeon/nemo-chatbot-v2",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T13:38:40Z |
---
base_model: chaerheeon/nemo-chatbot-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chaerheeon/nemo-chatbot-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v2-GGUF/resolve/main/nemo-chatbot-v2.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
morturr/Mistral-7B-v0.1-amazon-seed-7-2025-06-18
|
morturr
| 2025-06-18T13:59:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T13:59:24Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-amazon-seed-7-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-amazon-seed-7-2025-06-18
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
KiteAether/khmer-trocr-b_s4-10ep-lr5e5-18-6-25
|
KiteAether
| 2025-06-18T13:46:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/trocr-large-handwritten",
"base_model:finetune:microsoft/trocr-large-handwritten",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-18T10:20:49Z |
---
library_name: transformers
base_model: microsoft/trocr-large-handwritten
tags:
- generated_from_trainer
model-index:
- name: khmer-trocr-b_s4-10ep-lr5e5-18-6-25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# khmer-trocr-b_s4-10ep-lr5e5-18-6-25
This model is a fine-tuned version of [microsoft/trocr-large-handwritten](https://huggingface.co/microsoft/trocr-large-handwritten) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2355
- Cer: 0.7685
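As a hedged illustration of inference (assuming the standard TrOCR API; the processor is loaded from the base checkpoint in case it was not pushed with these weights, and the image path is illustrative):

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("KiteAether/khmer-trocr-b_s4-10ep-lr5e5-18-6-25")

image = Image.open("line.png").convert("RGB")  # a single handwritten text-line image
pixel_values = processor(image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```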
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.9267 | 1.0 | 1681 | 5.4176 | 0.9953 |
| 4.1816 | 2.0 | 3362 | 5.2131 | 0.9734 |
| 3.8607 | 3.0 | 5043 | 5.0942 | 0.9385 |
| 3.5317 | 4.0 | 6724 | 4.9554 | 0.8807 |
| 3.0759 | 5.0 | 8405 | 4.7736 | 0.8392 |
| 2.4556 | 6.0 | 10086 | 4.7595 | 0.8093 |
| 1.7233 | 7.0 | 11767 | 4.8245 | 0.7743 |
| 1.0869 | 8.0 | 13448 | 4.9941 | 0.7753 |
| 0.6864 | 9.0 | 15129 | 5.0909 | 0.7833 |
| 0.4806 | 10.0 | 16810 | 5.2355 | 0.7685 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
dicksonhk/Nanonets-OCR-s-mlx-fp16
|
dicksonhk
| 2025-06-18T13:42:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"OCR",
"pdf2markdown",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:nanonets/Nanonets-OCR-s",
"base_model:finetune:nanonets/Nanonets-OCR-s",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-18T13:41:06Z |
---
language:
- en
base_model: nanonets/Nanonets-OCR-s
pipeline_tag: image-text-to-text
tags:
- OCR
- pdf2markdown
- mlx
- mlx-my-repo
library_name: transformers
---
# dicksonhk/Nanonets-OCR-s-mlx-fp16
The model [dicksonhk/Nanonets-OCR-s-mlx-fp16](https://huggingface.co/dicksonhk/Nanonets-OCR-s-mlx-fp16) was converted to MLX format from [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) using mlx-vlm version **0.1.15**.
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model dicksonhk/Nanonets-OCR-s-mlx-fp16 --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
csikasote/mms-1b-all-bemgen-combined-42
|
csikasote
| 2025-06-18T13:40:11Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-18T11:40:39Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-bemgen-combined-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-42
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2214
- Wer: 0.3973
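As a hedged illustration of inference (assuming the fine-tuned checkpoint works with the standard ASR pipeline; the audio path is illustrative):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="csikasote/mms-1b-all-bemgen-combined-42")
print(asr("clip.wav")["text"])  # path to a 16 kHz mono recording
```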
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 7.7575 | 0.2538 | 100 | 5.4977 | 1.3624 |
| 4.8101 | 0.5076 | 200 | 5.0306 | 1.0750 |
| 4.3147 | 0.7614 | 300 | 4.0326 | 1.0873 |
| 3.664 | 1.0152 | 400 | 3.3150 | 1.0043 |
| 2.503 | 1.2690 | 500 | 0.3145 | 0.5055 |
| 0.4803 | 1.5228 | 600 | 0.2400 | 0.4439 |
| 0.4124 | 1.7766 | 700 | 0.2338 | 0.4242 |
| 0.3898 | 2.0305 | 800 | 0.2270 | 0.4019 |
| 0.3924 | 2.2843 | 900 | 0.2245 | 0.4094 |
| 0.3826 | 2.5381 | 1000 | 0.2282 | 0.4028 |
| 0.3666 | 2.7919 | 1100 | 0.2237 | 0.3986 |
| 0.3585 | 3.0457 | 1200 | 0.2214 | 0.3971 |
| 0.3591 | 3.2995 | 1300 | 0.2247 | 0.4003 |
| 0.3535 | 3.5533 | 1400 | 0.2182 | 0.4063 |
| 0.3532 | 3.8071 | 1500 | 0.2186 | 0.3861 |
| 0.3544 | 4.0609 | 1600 | 0.2187 | 0.4077 |
| 0.3401 | 4.3147 | 1700 | 0.2169 | 0.3921 |
| 0.3372 | 4.5685 | 1800 | 0.2142 | 0.3990 |
| 0.3446 | 4.8223 | 1900 | 0.2145 | 0.3897 |
| 0.3291 | 5.0761 | 2000 | 0.2158 | 0.3878 |
| 0.3219 | 5.3299 | 2100 | 0.2158 | 0.3806 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
John6666/satyr-remix-ankara-illustrious-v17-alt-sdxl
|
John6666
| 2025-06-18T13:35:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fantasy",
"paintery",
"styles",
"prompt comphrehension",
"creative",
"stable",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-18T13:29:27Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fantasy
- paintery
- styles
- prompt comphrehension
- creative
- stable
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/974951/satyrremix-ankara-illustrious?modelVersionId=1912106).
This model was created by [Labdoge207](https://civitai.com/user/Labdoge207).
|
poojastl2024/whisper-large-v3-lora-bn-en
|
poojastl2024
| 2025-06-18T13:21:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T13:16:07Z |
---
library_name: peft
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v3-lora-bn-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-lora-bn-en
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.49.0
- Pytorch 2.7.1+cu118
- Datasets 3.4.1
- Tokenizers 0.21.1
|
gradientrouting-spar/mc9_badmed_representation_constraint_beta_kl-100.0_seed_1
|
gradientrouting-spar
| 2025-06-18T13:20:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T13:19:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/OpenLabBA-gr00t-lego_in_box_v3-5mruoulq4f
|
phospho-app
| 2025-06-18T13:16:43Z | 0 | 0 | null |
[
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-18T13:15:03Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1146, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 790, in get_data_by_modality
return self.get_video(trajectory_id, key, base_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 658, in get_video
video_timestamp = timestamp[step_indices]
~~~~~~~~~^^^^^^^^^^^^^^
IndexError: index 568 is out of bounds for axis 0 with size 263
0%| | 0/2550 [00:04<?, ?it/s]
```
## Training parameters:
- **Dataset**: [OpenLabBA/lego_in_box_v3](https://huggingface.co/datasets/OpenLabBA/lego_in_box_v3)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
kyutai/moshika-vis-pytorch-bf16
|
kyutai
| 2025-06-18T13:16:40Z | 0 | 56 | null |
[
"safetensors",
"en",
"arxiv:2503.15633",
"arxiv:2406.10328",
"arxiv:1810.12440",
"arxiv:2007.00398",
"base_model:google/paligemma2-3b-pt-448",
"base_model:finetune:google/paligemma2-3b-pt-448",
"license:cc-by-4.0",
"region:us"
] | null | 2025-01-27T10:49:29Z |
---
license: cc-by-4.0
language:
- en
base_model:
- google/paligemma2-3b-pt-448
- kyutai/moshika-pytorch-bf16
---
# Model Card for MoshiVis
## Model Details
### Model Description
**MoshiVis** ([Project Page](https://kyutai.org/moshivis) | [arXiv](https://arxiv.org/abs/2503.15633)) is a perceptually augmented version of Moshi, giving it the ability to freely discuss images whilst maintaining its natural conversation style and low latency.
To achieve this, Moshi has been extended with a visual backbone and a cross-attention mechanism to infuse the visual information into the language model.
To train MoshiVis, we add a few parameters (~200M) on top of a frozen Moshi backbone (for the text/speech modeling aspect, ~7B params)
and a PaliGemma2 vision encoder (for the image encoding part, ~400M parameters).
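Schematically, such an infusion step can be pictured as a gated cross-attention block (a generic sketch in the spirit of the description above, not Kyutai's actual implementation; shapes, gating, and initialization are illustrative):

```python
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    """Frozen-LM hidden states attend to image tokens; a learned gate
    (initialized to zero) lets the block start as an identity mapping."""
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden: torch.Tensor, image_tokens: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(hidden, image_tokens, image_tokens)
        return hidden + torch.tanh(self.gate) * attended
```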
This model page contains the `Moshika` (female voice) model weights for the `Pytorch` backend of the MoshiVis repo, in `bfloat16`.
We provide the same model weights for other backends and quantization formats in the associated model collection.
- **Developed by:** Kyutai
- **Model type:** Multimodal speech+vision+text foundation model
- **Language(s) (NLP):** English
- **License:** CC-BY-4.0
- **Uses frozen components from:** [Moshika](https://huggingface.co/kyutai/moshika-pytorch-bf16) and [PaliGemma2](https://huggingface.co/google/paligemma2-3b-pt-448)
- **Terms of use:** As the released models include frozen weights of the SigLIP image encoder from PaliGemma-2, MoshiVis is subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms
### Model Sources
- **Project Page** [kyutai.org/moshivis](https://kyutai.org/moshivis)
- **Preprint** ([arXiv/abs/2503.15633](https://arxiv.org/abs/2503.15633))
- **Repository:** [Github kyutai-labs/moshivis](https://github.com/kyutai-labs/moshivis)
- **Demo:** [Talk to Moshi](http://vis.moshi.chat)
## Uses
### Direct Use
Similar to Moshi itself, MoshiVis can be used as a conversational agent for casual conversations, basic facts and advice (e.g. recipes, trivia), roleplay, etc.
In addition, MoshiVis is able to recognize and discuss images in a natural way, whilst still allowing for low-latency interactions.
### Downstream Use
Since MoshiVis was designed to infuse visual signals into a frozen Moshi backbone with only a few trainable parameters,
the model could be adapted to different downstream scenarios by further finetuning these parameters:
for instance, adapting MoshiVis to a different off-the-shelf image encoder or to different visual domains.
### Out-of-Scope Use
The model is not intended to be used to impersonate other people or for any malicious use of any kind.
This model is for research only, and we do not recommend using it to provide advice or to perform any professional duty.
## Bias, Risks, and Limitations
MoshiVis has been designed to perceptually augment the original [Moshi](https://huggingface.co/kyutai/moshika-pytorch-bf16)
model with vision capabilities and is expected to inherit similar biases and limitations.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
See our [github repository](https://github.com/kyutai-labs/moshivis) for getting started.
## Training Details
Stay tuned for our technical report, in which we will describe the training procedure in detail as well as report evaluation results.
### Training Data
For information on the training data used for the base models, see [Pixtral](https://mistral.ai/news/pixtral-12b/) and
[Moshi](https://huggingface.co/kyutai/moshika-pytorch-bf16) respectively.
To train the cross-attention and gating mechanism that MoshiVis uses for processing images,
we rely on a collection of publicly available datasets, namely:
- [DOCCI](https://google.github.io/docci/)
- [PixMo](https://huggingface.co/datasets/allenai/pixmo-cap)
- [Pixelprose](https://arxiv.org/abs/2406.10328)
- [TallyQA](https://arxiv.org/abs/1810.12440)
- [OCR-VQA](https://ocr-vqa.github.io/)
- [RenderedText](https://huggingface.co/datasets/wendlerc/RenderedText)
- [DocVQA](https://arxiv.org/abs/2007.00398)
## Technical Specifications
### Compute Infrastructure
MoshiVis was designed as a relatively low-cost adaptation of Moshi (~200M extra trainable parameters)
and was trained on a single DGX node with 8 H100 GPUs.
#### Software
Our training code was implemented in Pytorch. Our inference code is available for Pytorch, Rust and MLX.
## Citation
```
@article{kyutai2025moshivis,
author = {Amélie Royer and Moritz Böhle and Gabriel de Marmiesse and
Laurent Mazaré and Alexandre Défossez and Neil Zeghidour and Patrick Pérez},
year = {2025},
title = {Vision-Speech Models: Teaching Speech Models to Converse about Images},
journal = {ArXiv},
url = {https://arxiv.org/abs/2503.15633}
}
```
## Model Card Authors and Contact
* Amelie Royer
* Moritz Boehle
|
eddieman78/litbank-coref-qwen-3-14b-it-4000-64-1e4-4
|
eddieman78
| 2025-06-18T13:15:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T13:15:15Z |
---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
library_name: transformers
model_name: litbank-coref-qwen-3-14b-it-4000-64-1e4-4
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for litbank-coref-qwen-3-14b-it-4000-64-1e4-4
This model is a fine-tuned version of [unsloth/Qwen3-14B-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen3-14B-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eddieman78/litbank-coref-qwen-3-14b-it-4000-64-1e4-4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Aleteian/DeepDarkDesire-24B-Q4_K_M-GGUF
|
Aleteian
| 2025-06-18T13:07:04Z | 0 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:Aleteian/DeepDarkDesire-24B",
"base_model:quantized:Aleteian/DeepDarkDesire-24B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T13:05:59Z |
---
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
base_model: Aleteian/DeepDarkDesire-24B
---
# Aleteian/DeepDarkDesire-24B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Aleteian/DeepDarkDesire-24B`](https://huggingface.co/Aleteian/DeepDarkDesire-24B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Aleteian/DeepDarkDesire-24B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Aleteian/DeepDarkDesire-24B-Q4_K_M-GGUF --hf-file deepdarkdesire-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Aleteian/DeepDarkDesire-24B-Q4_K_M-GGUF --hf-file deepdarkdesire-24b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Aleteian/DeepDarkDesire-24B-Q4_K_M-GGUF --hf-file deepdarkdesire-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Aleteian/DeepDarkDesire-24B-Q4_K_M-GGUF --hf-file deepdarkdesire-24b-q4_k_m.gguf -c 2048
```
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb2-seed18-2025-06-18
|
morturr
| 2025-06-18T12:35:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T12:35:04Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb2-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb2-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
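Until usage is documented, here is a minimal, untested sketch of loading this PEFT adapter on top of its (gated) base model; the prompt is purely illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # gated; requires accepting the Llama 2 license
adapter_id = "morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb2-seed18-2025-06-18"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

inputs = tokenizer("Why did the scarecrow win an award?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```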
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Nap/Qwen2VL-Flux-ControlNet
|
Nap
| 2025-06-18T12:23:18Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-16T23:03:35Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
mednikov/Unit2-q-FrozenLake-v1-4x4-noSlippery
|
mednikov
| 2025-06-18T12:21:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T12:20:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Unit2-q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the pickle-loading helper from the Deep RL course (a minimal version is sketched below)
model = load_from_hub(repo_id="mednikov/Unit2-q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
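Building on this, a minimal greedy-rollout sketch; it assumes the pickle holds the course's dict format with `qtable` and `env_id` keys and a Gymnasium-style five-tuple `step` API:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Minimal stand-in for the course helper: download and unpickle."""
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_from_hub("mednikov/Unit2-q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
env = gym.make(model["env_id"])  # add is_slippery=False etc. if required
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # purely greedy: exploit the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```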
|
FormlessAI/575211aa-41cc-4923-9418-c540d7516b0d
|
FormlessAI
| 2025-06-18T12:20:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T12:16:48Z |
---
base_model: unsloth/Phi-3-mini-4k-instruct
library_name: transformers
model_name: 575211aa-41cc-4923-9418-c540d7516b0d
tags:
- generated_from_trainer
- trl
- grpo
- unsloth
licence: license
---
# Model Card for 575211aa-41cc-4923-9418-c540d7516b0d
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/575211aa-41cc-4923-9418-c540d7516b0d", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/oczxcl2c)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
maxchs/boldynlora
|
maxchs
| 2025-06-18T12:11:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T11:56:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BOLDYN
---
# Boldynlora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BOLDYN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BOLDYN",
"lora_weights": "https://huggingface.co/maxchs/boldynlora/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('maxchs/boldynlora', weight_name='lora.safetensors')
image = pipeline('BOLDYN').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/maxchs/boldynlora/discussions) to add images that show off what you’ve made with this LoRA.
|
FormlessAI/544f7d58-a4a4-4148-b415-09e35a14b73a
|
FormlessAI
| 2025-06-18T12:10:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/Qwen2-0.5B",
"base_model:finetune:unsloth/Qwen2-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T12:09:27Z |
---
base_model: unsloth/Qwen2-0.5B
library_name: transformers
model_name: 544f7d58-a4a4-4148-b415-09e35a14b73a
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for 544f7d58-a4a4-4148-b415-09e35a14b73a
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/544f7d58-a4a4-4148-b415-09e35a14b73a", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/fy0uo26c)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
vulong3896/vnlegal-qa
|
vulong3896
| 2025-06-18T12:07:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T11:54:24Z |
---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** vulong3896
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
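Until the card documents usage, a minimal inference sketch with plain `transformers` (the example prompt is ours, not from the training data):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vulong3896/vnlegal-qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What documents are needed to register a business?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=200)[0], skip_special_tokens=True))
```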
|
BootesVoid/cmc0oiiu208fsrdqsnzduaggq_cmc1kai4w0ay0rdqsp6686gbb
|
BootesVoid
| 2025-06-18T12:05:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T12:04:55Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BONNYE
---
# Cmc0Oiiu208Fsrdqsnzduaggq_Cmc1Kai4W0Ay0Rdqsp6686Gbb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BONNYE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BONNYE",
"lora_weights": "https://huggingface.co/BootesVoid/cmc0oiiu208fsrdqsnzduaggq_cmc1kai4w0ay0rdqsp6686gbb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc0oiiu208fsrdqsnzduaggq_cmc1kai4w0ay0rdqsp6686gbb', weight_name='lora.safetensors')
image = pipeline('BONNYE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc0oiiu208fsrdqsnzduaggq_cmc1kai4w0ay0rdqsp6686gbb/discussions) to add images that show off what you’ve made with this LoRA.
|
AlphaZero123/llama3.1-8b-finetuned
|
AlphaZero123
| 2025-06-18T12:04:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-18T11:29:16Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** AlphaZero123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luyotw/openfun-ivod-whisper-large-v3-negotiation-10-63
|
luyotw
| 2025-06-18T12:02:24Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-18T09:34:37Z |
# Fine-tuning Information
- Original model: `openai/whisper-large-v3`
- Number of audio clips used: 38798
- Total audio duration: 21.60 hours
- Average audio length: 2.00 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 05:54:54
- Model size: 5.75 GB
- Training parameters:
  - batch size: 8
  - eval batch size: 4
  - gradient checkpointing: True
  - fp16: False
  - bf16: True
---
# Model Card
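The card stops short of usage; a minimal inference sketch via the `transformers` ASR pipeline (the audio filename is hypothetical):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="luyotw/openfun-ivod-whisper-large-v3-negotiation-10-63",
)
print(asr("meeting_clip.wav")["text"])  # transcribe a local audio file
```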
|
New-tutorial-mezzo-fun-18-videoss/FULL.VIDEO.LINK.mezzo.fun.viral.video.viral.On.Social.Media.Official
|
New-tutorial-mezzo-fun-18-videoss
| 2025-06-18T11:59:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T11:59:29Z |
|
New-Adefarasin-viral-video/Pastor.Adefarasin.turns.himself.in.to.police
|
New-Adefarasin-viral-video
| 2025-06-18T11:53:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T11:53:36Z |
|
dgambettaphd/M_llm2_run2_gen4_WXS_doc1000_synt120_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-06-18T11:53:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T11:53:03Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dzakyahnaf/deberta-v3-emotion-multilabel-classifier
|
dzakyahnaf
| 2025-06-18T11:43:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-16T10:49:49Z |
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-emotion-multilabel-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-emotion-multilabel-classifier
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2070
- Macro F1: 0.4374
## Model description
More information needed
## Intended uses & limitations
More information needed
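In the absence of documented usage, a minimal multi-label inference sketch (probabilities are taken per label with a sigmoid; the threshold and example text are arbitrary, and label names may be generic if `id2label` was not set during training):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "dzakyahnaf/deberta-v3-emotion-multilabel-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "I can't believe this happened, but I'm so relieved it's over."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits[0])  # independent per-label probabilities

threshold = 0.5  # arbitrary; tune on a validation set
print([model.config.id2label[i] for i, p in enumerate(probs) if p > threshold])
```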
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1987 | 1.0 | 2614 | 0.1941 | 0.3442 |
| 0.187 | 2.0 | 5228 | 0.1936 | 0.4078 |
| 0.1777 | 3.0 | 7842 | 0.1916 | 0.4123 |
| 0.1678 | 4.0 | 10456 | 0.1980 | 0.4285 |
| 0.1588 | 5.0 | 13070 | 0.2021 | 0.4348 |
| 0.1524 | 6.0 | 15684 | 0.2070 | 0.4374 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
tk648/XLNet-base-finetuned-HARPT
|
tk648
| 2025-06-18T11:37:10Z | 0 | 0 | null |
[
"safetensors",
"xlnet",
"text-classification",
"privacy",
"trust",
"mobile-health",
"healthcare",
"harpt",
"finetuned-model",
"en",
"base_model:xlnet/xlnet-base-cased",
"base_model:finetune:xlnet/xlnet-base-cased",
"doi:10.57967/hf/5820",
"license:cc-by-4.0",
"region:us"
] |
text-classification
| 2025-06-17T06:55:04Z |
---
license: cc-by-4.0
language:
- en
metrics:
- accuracy
- f1
- recall
- precision
base_model:
- xlnet/xlnet-base-cased
tags:
- xlnet
- text-classification
- privacy
- trust
- mobile-health
- healthcare
- harpt
- finetuned-model
---
# XLNet-base Fine-Tuned on HARPT
**Model Name**: `XLNet-base-finetuned-HARPT`
**Tags**: `xlnet`, `text-classification`, `privacy`, `trust`, `mobile-health`, `healthcare`, `harpt`, `custom-dataset`, `finetuned-model`
**License**: *Creative Commons Attribution 4.0 (CC BY 4.0)*
---
## Overview
This is a fine-tuned version of [XLNet-base](https://huggingface.co/xlnet-base-cased) trained on the **HARPT** (**H**ealth **A**pp **R**eviews for **P**rivacy and **T**rust) dataset, a large-scale corpus of mobile health app reviews annotated with labels reflecting privacy- and trust-related concerns. The model performs **single-label, multi-class classification** across seven expert-defined categories.
## Classes
The model predicts one of the following seven categories:
- `data_control`
- `data_quality`
- `risk`
- `support`
- `reliability`
- `competence`
- `ethicality`
## Intended Use
- Analyzing trust and privacy concerns in app reviews
- Supporting responsible AI research in digital health
- Benchmarking NLP models on healthcare-oriented text classification
---
## Usage
```python
from transformers import XLNetForSequenceClassification, XLNetTokenizerFast
# Load model and tokenizer
model = XLNetForSequenceClassification.from_pretrained(
"tk648/XLNet-base-finetuned-HARPT",
use_safetensors=True
)
tokenizer = XLNetTokenizerFast.from_pretrained("tk648/XLNet-base-finetuned-HARPT")
# Label mapping
id2label = {
0: "competence",
1: "data control",
2: "data quality",
3: "ethicality",
4: "reliability",
5: "risk",
6: "support"
}
# Run prediction
text = "This app crashes every time I open it."
inputs = tokenizer(
text,
return_tensors="pt",
truncation=True,
max_length=512,
padding=True
)
outputs = model(**inputs)
predicted_class_id = outputs.logits.argmax(dim=1).item()
# Print predicted label
predicted_label = id2label[predicted_class_id]
print("Predicted label:", predicted_label)
```
## If you use this model, please cite:
<small><em>
Timoteo Kelly, Abdulkadir Korkmaz, Samuel Mallet, Connor Souders, Sadra Aliakbarpour, and Praveen Rao. 2025.
HARPT: A Corpus for Analyzing Consumers’ Trust and Privacy Concerns in Mobile Health Apps. Submitted to: Proceedings of the 34th ACM International Conference on Information and Knowledge Management (CIKM’25).
</em></small>
|
gmanzone/tokenizer_bert_biotags_dax-briefe
|
gmanzone
| 2025-06-18T11:30:07Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T11:30:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gmanzone/finetuned_dataset4_bio_mit_genderzeichen
|
gmanzone
| 2025-06-18T11:30:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-18T11:29:44Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: finetuned_dataset4_bio_mit_genderzeichen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_dataset4_bio_mit_genderzeichen
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
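Pending proper documentation, a minimal usage sketch via the token-classification pipeline (the example sentence is ours; entity labels depend on the checkpoint's config and may be generic):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gmanzone/finetuned_dataset4_bio_mit_genderzeichen",
    aggregation_strategy="simple",  # merge sub-word pieces into spans
)
print(ner("Die Lehrer*innen und Schüler*innen treffen sich morgen."))
```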
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0646 | 1.0 | 829 | 0.0310 |
| 0.0253 | 2.0 | 1658 | 0.0268 |
| 0.0174 | 3.0 | 2487 | 0.0245 |
| 0.0129 | 4.0 | 3316 | 0.0235 |
| 0.0102 | 5.0 | 4145 | 0.0239 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sgonzalezygil/sd-finetuning-dreambooth-v8
|
sgonzalezygil
| 2025-06-18T11:29:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-18T11:27:50Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OpenVINO/Qwen3-8B-int4-ov
|
OpenVINO
| 2025-06-18T11:27:59Z | 1,462 | 0 | null |
[
"openvino",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T13:51:58Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
base_model:
- Qwen/Qwen3-8B
base_model_relation: quantized
---
# Qwen3-8B-int4-ov
* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)
## Description
This is [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
The quantization was performed using `optimum-cli export openvino` with the following parameters:
* mode: **INT4_ASYM**
* ratio: **1.0**
* group_size: **128**
* scale_estimation: **True**
* dataset: **wikitext2**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
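A command along these lines should roughly reproduce the export (flag spellings can vary across `optimum-intel` versions; asymmetric INT4 is assumed to be the default when `--sym` is not passed):
```
optimum-cli export openvino \
  --model Qwen/Qwen3-8B \
  --weight-format int4 \
  --ratio 1.0 \
  --group-size 128 \
  --dataset wikitext2 \
  --scale-estimation \
  Qwen3-8B-int4-ov
```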
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.1.0 and higher
* Optimum Intel 1.24.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/qwen3-8b-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide.
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install openvino-genai huggingface_hub
```
2. Download model from HuggingFace Hub
```python
import huggingface_hub as hf_hub
model_id = "OpenVINO/qwen3-8b-int4-ov"
model_path = "qwen3-8b-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```python
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?", max_length=200))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
You can find more detailed usage examples in OpenVINO Notebooks:
- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)
## Limitations
Check the original [model card](https://huggingface.co/Qwen/Qwen3-8B) for limitations.
## Legal information
The original model is distributed under [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE) license. More details can be found in [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
jameselmore/Reinforce-Pixelcopter-PLE-v0
|
jameselmore
| 2025-06-18T11:23:16Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T11:22:37Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 6.90 +/- 3.75
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
VittaBurndropsau/VittaBurnDropsReview
|
VittaBurndropsau
| 2025-06-18T11:15:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T11:14:49Z |
Introduction: The Need for a Healthier, Easier Approach to Weight Loss
In a world where processed foods, sedentary lifestyles, and stress contribute to unhealthy weight gain, millions are searching for effective, sustainable solutions. Traditional weight loss methods like intense workouts, strict diets, or synthetic pills may not be ideal or safe for everyone.
VittaBurn Drops promise a more convenient and natural approach. With herbal extracts, essential nutrients, and metabolism-supporting compounds, this supplement aims to optimize the body's natural fat-burning processes without harsh chemicals or side effects.
What Are VittaBurn Drops?
VittaBurn Drops are a natural dietary supplement formulated to support weight loss, enhance metabolism, and promote overall health using a blend of plant-based ingredients. These drops are taken sublingually (under the tongue) to ensure faster absorption and improved bioavailability compared to pills or capsules. Marketed as a powerful fat-burning aid, VittaBurn Drops are gaining popularity among individuals looking for a non-stimulant, easy-to-use solution to manage their weight naturally.
How Do VittaBurn Drops Work?
VittaBurn Drops are designed to work in multiple ways to support weight management and overall wellness:
1. Boosts Metabolism
Certain ingredients in the drops are known to enhance thermogenesis — the body’s process of generating heat from calories. This leads to more efficient calorie burning even at rest.
2. Appetite Suppression
Natural extracts may help control hunger and reduce cravings, allowing users to maintain a calorie deficit without feeling deprived.
3. Supports Fat Oxidation
VittaBurn is believed to enhance the breakdown of stored fat, converting it into usable energy and helping to reduce overall fat percentage, especially in stubborn areas.
4. Enhances Energy and Focus
Unlike stimulants, VittaBurn Drops offer a clean energy boost from natural ingredients, helping users stay active, alert, and motivated throughout the day.
5. Detoxification and Gut Health
Some components may support the digestive system and eliminate toxins, which can play a role in reducing bloating and supporting overall metabolic health.
Benefits of VittaBurn Drops
Here are some key benefits reported by users and supported by the formula’s ingredient profile:
✅ Natural Weight Loss Support
VittaBurn promotes fat burning without harsh stimulants, making it a gentle yet effective supplement for long-term use.
✅ Faster Absorption
The sublingual (under the tongue) delivery method allows for quicker absorption into the bloodstream, bypassing the digestive system for faster effects.
✅ Better Energy Levels
Users report feeling more energetic and motivated, which can help maintain an active lifestyle essential for healthy weight loss.
✅ Appetite and Craving Control
By helping to reduce hunger signals, the drops make it easier to maintain portion control and resist unhealthy snacking.
✅ Improved Mood and Focus
Weight loss often comes with mental fatigue and irritability, but VittaBurn's formulation helps balance mood and enhance focus.
✅ Plant-Based and Non-GMO
VittaBurn Drops use herbal ingredients and avoid GMOs, synthetic fillers, and artificial flavors.
Official website: — https://www.accessnewswire.com/newsroom/en/healthcare-and-pharmaceutical/vittaburn-drops-review-natural-weight-loss-solution-or-just-hype-1035366
Official website: — https://vittaburn.com.au/
Facebook: - https://www.facebook.com/VittaBurnDropsAu/
Medium: - https://vittaburndropsreviewau.medium.com/vittaburn-drops-review-2025-does-this-weight-loss-formula-really-work-6f7a6e05ca2e
Groups Google: - https://groups.google.com/g/vittaburn-drops-au/c/j7LS0d5Oe7M
Quora: - https://vittaburndropsreviewau.quora.com/
Teeshoppe: - https://teeshopper.in/store/VittaBurn-Drops-AU
Pinterest: - https://www.pinterest.com/avleengonna/vittaburn-drops/
Tumblr: - https://www.tumblr.com/vittaburndropsau
Blog: - https://vittaburndropsau.blogspot.com/2025/06/vittaburn-drops-reviews-and-complaints.html
Blog: - https://sites.google.com/view/vittaburndropsreview/home
|
cezet888/noemi
|
cezet888
| 2025-06-18T11:15:34Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-18T10:34:30Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
Kamal-Kaur-Bhabhi-Viral-Video/FULL.VIDEO.Kamal.Kaur.Viral.Video.Tutorial.Official
|
Kamal-Kaur-Bhabhi-Viral-Video
| 2025-06-18T11:03:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T11:01:31Z |
|
HPLT/hplt_bert_base_2_0_srp-Cyrl
|
HPLT
| 2025-06-18T10:52:34Z | 40 | 0 | null |
[
"pytorch",
"BERT",
"HPLT",
"encoder",
"fill-mask",
"custom_code",
"sr",
"srp",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
] |
fill-mask
| 2025-02-22T22:29:56Z |
---
language:
- sr
- srp
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
pipeline_tag: fill-mask
---
# HPLT v2.0 BERT for Serbian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a second release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
We present monolingual LTG-BERT models for more than 50 languages out of 191 total in the [HPLT v2.0 dataset](https://hplt-project.org/datasets/v2.0).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage (tested with `transformers==4.46.1` and `tokenizers==0.20.1`)
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_2_0_srp-Cyrl")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_srp-Cyrl", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, at intervals of 3125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_srp-Cyrl", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_2_0_srp-Cyrl")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/hplt_bert_base_2_0_nno-Latn
|
HPLT
| 2025-06-18T10:51:31Z | 51 | 0 | null |
[
"pytorch",
"BERT",
"HPLT",
"encoder",
"fill-mask",
"custom_code",
"nn",
"nno",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
] |
fill-mask
| 2025-02-22T22:53:53Z |
---
language:
- nn
- nno
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
pipeline_tag: fill-mask
---
# HPLT v2.0 BERT for Norwegian Nynorsk
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a second release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
We present monolingual LTG-BERT models for more than 50 languages out of 191 total in the [HPLT v2.0 dataset](https://hplt-project.org/datasets/v2.0).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage (tested with `transformers==4.46.1` and `tokenizers==0.20.1`)
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load it with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_2_0_nno-Latn")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_nno-Latn", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering`, and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, one every 3,125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_nno-Latn", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_2_0_nno-Latn")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
```
|
meanjai/a2c-PandaReachDense-v3
|
meanjai
| 2025-06-18T10:50:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T10:46:21Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.24 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `{algo}-{env}.zip` convention of `huggingface_sb3`):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed to follow the "{algo}-{env}.zip" convention.
checkpoint = load_from_hub("meanjai/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
HPLT/hplt_bert_base_2_0_hrv-Latn
|
HPLT
| 2025-06-18T10:48:33Z | 47 | 0 | null |
[
"pytorch",
"BERT",
"HPLT",
"encoder",
"fill-mask",
"custom_code",
"hr",
"hrv",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
] |
fill-mask
| 2025-02-22T22:50:08Z |
---
language:
- hr
- hrv
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
pipeline_tag: fill-mask
---
# HPLT v2.0 BERT for Croatian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained for the second release of the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
We present monolingual LTG-BERT models for more than 50 languages out of 191 total in the [HPLT v2.0 dataset](https://hplt-project.org/datasets/v2.0).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage (tested with `transformers==4.46.1` and `tokenizers==0.20.1`)
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load it with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_2_0_hrv-Latn")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_hrv-Latn", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering`, and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, one every 3,125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_hrv-Latn", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_2_0_hrv-Latn")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
```
|
LakshGupta/dqn-SpaceInvadersNoFrameskip-v4
|
LakshGupta
| 2025-06-18T10:48:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T10:47:32Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 514.50 +/- 113.52
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LakshGupta -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LakshGupta -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga LakshGupta
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed28-2025-06-18
|
morturr
| 2025-06-18T10:46:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T07:07:01Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb1-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
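For reference, these values correspond roughly to the following `TrainingArguments` sketch (a reconstruction, not the original training script; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                 # placeholder, not from the original run
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=28,
    gradient_accumulation_steps=4,    # effective batch size: 64
    optim="adamw_torch",              # betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```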
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
HPLT/hplt_bert_base_2_0_als-Latn
|
HPLT
| 2025-06-18T10:45:10Z | 19 | 0 | null |
[
"pytorch",
"BERT",
"HPLT",
"encoder",
"fill-mask",
"custom_code",
"sq",
"als",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
] |
fill-mask
| 2025-02-22T22:34:51Z |
---
language:
- sq
- als
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
pipeline_tag: fill-mask
---
# HPLT v2.0 BERT for Albanian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained for the second release of the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
We present monolingual LTG-BERT models for more than 50 languages out of 191 total in the [HPLT v2.0 dataset](https://hplt-project.org/datasets/v2.0).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage (tested with `transformers==4.46.1` and `tokenizers==0.20.1`)
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load it with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_2_0_als-Latn")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_als-Latn", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering`, and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, one every 3,125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_als-Latn", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_2_0_als-Latn")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
```
|
ujjawal077/mistralmerged-merged-cyber2
|
ujjawal077
| 2025-06-18T10:41:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T10:37:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
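Pending author-provided instructions, here is a minimal loading sketch based only on this repository's `mistral`/`text-generation` tags; the prompt and generation settings are illustrative assumptions, not documented usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ujjawal077/mistralmerged-merged-cyber2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt only; intended usage is not documented by the authors.
inputs = tokenizer("Explain what a phishing attack is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```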
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tuenguyen/MedicalBm25-PubMed
|
tuenguyen
| 2025-06-18T10:34:44Z | 0 | 0 |
bm25s
|
[
"bm25s",
"bm25",
"retrieval",
"search",
"lexical",
"en",
"arxiv:2407.03618",
"region:us"
] | null | 2025-06-18T09:13:09Z |
---
language: en
library_name: bm25s
tags:
- bm25
- bm25s
- retrieval
- search
- lexical
---
# BM25S Index
This is a BM25S index created with the [`bm25s` library](https://github.com/xhluca/bm25s) (version `0.2.13`), an ultra-fast implementation of BM25. It can be used for lexical retrieval tasks.
BM25S Related Links:
* 🏠[Homepage](https://bm25s.github.io)
* 💻[GitHub Repository](https://github.com/xhluca/bm25s)
* 🤗[Blog Post](https://huggingface.co/blog/xhluca/bm25s)
* 📝[Technical Report](https://arxiv.org/abs/2407.03618)
## Installation
You can install the `bm25s` library with `pip`:
```bash
pip install "bm25s==0.2.13"
# Include extra dependencies like stemmer
pip install "bm25s[full]==0.2.13"
# For huggingface hub usage
pip install huggingface_hub
```
## Loading a `bm25s` index
You can use this index for information retrieval tasks. Here is an example:
```python
import bm25s
from bm25s.hf import BM25HF
# Load the index
retriever = BM25HF.load_from_hub("tuenguyen/MedicalBm25-PubMed")
# You can retrieve now
query = "a cat is a feline"
results = retriever.retrieve(bm25s.tokenize(query), k=3)
```
## Saving a `bm25s` index
You can save a `bm25s` index to the Hugging Face Hub. Here is an example:
```python
import bm25s
from bm25s.hf import BM25HF
corpus = [
"a cat is a feline and likes to purr",
"a dog is the human's best friend and loves to play",
"a bird is a beautiful animal that can fly",
"a fish is a creature that lives in water and swims",
]
retriever = BM25HF(corpus=corpus)
retriever.index(bm25s.tokenize(corpus))
token = None # You can get a token from the Hugging Face website
retriever.save_to_hub("tuenguyen/MedicalBm25-PubMed", token=token)
```
## Advanced usage
You can leverage more advanced features of the BM25S library during `load_from_hub`:
```python
# Load corpus and index in memory-map (mmap=True) to reduce memory
retriever = BM25HF.load_from_hub("tuenguyen/MedicalBm25-PubMed", load_corpus=True, mmap=True)
# Load a different branch/revision
retriever = BM25HF.load_from_hub("tuenguyen/MedicalBm25-PubMed", revision="main")
# Change directory where the local files should be downloaded
retriever = BM25HF.load_from_hub("tuenguyen/MedicalBm25-PubMed", local_dir="/path/to/dir")
# Load private repositories with a token:
retriever = BM25HF.load_from_hub("tuenguyen/MedicalBm25-PubMed", token=token)
```
## Tokenizer
If you have saved a `Tokenizer` object with the index using the following approach:
```python
from bm25s.hf import TokenizerHF
token = "your_hugging_face_token"
tokenizer = TokenizerHF(corpus=corpus, stopwords="english")
tokenizer.save_to_hub("tuenguyen/MedicalBm25-PubMed", token=token)
# and stopwords too
tokenizer.save_stopwords_to_hub("tuenguyen/MedicalBm25-PubMed", token=token)
```
Then, you can load the tokenizer using the following code:
```python
from bm25s.hf import TokenizerHF
tokenizer = TokenizerHF(corpus=corpus, stopwords=[])
tokenizer.load_vocab_from_hub("tuenguyen/MedicalBm25-PubMed", token=token)
tokenizer.load_stopwords_from_hub("tuenguyen/MedicalBm25-PubMed", token=token)
```
## Stats
This dataset was created using the following data:
| Statistic | Value |
| --- | --- |
| Number of documents | 23898701 |
| Number of tokens | 2233315293 |
| Average tokens per document | 93.45 |
## Parameters
The index was created with the following parameters:
| Parameter | Value |
| --- | --- |
| k1 | `1.5` |
| b | `0.75` |
| delta | `0.5` |
| method | `lucene` |
| idf method | `lucene` |
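If you want to rebuild an index with the same scoring configuration, a minimal sketch using these constructor arguments (note that `delta` only affects the `bm25l`/`bm25+` variants):
```python
import bm25s
from bm25s.hf import BM25HF

corpus = ["a cat is a feline and likes to purr"]  # toy corpus for illustration
retriever = BM25HF(corpus=corpus, k1=1.5, b=0.75, delta=0.5,
                   method="lucene", idf_method="lucene")
retriever.index(bm25s.tokenize(corpus))
```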
## Citation
To cite `bm25s`, please use the following bibtex:
```
@misc{lu_2024_bm25s,
title={BM25S: Orders of magnitude faster lexical search via eager sparse scoring},
author={Xing Han Lù},
year={2024},
eprint={2407.03618},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.03618},
}
```
|
TAEDX/sllm-lora
|
TAEDX
| 2025-06-18T10:20:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T08:48:14Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
artianand/religion_adapter_deberta_v3_large_race_custom_loss_lamda_05_batch_8
|
artianand
| 2025-06-18T10:11:36Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"deberta-v2",
"region:us"
] | null | 2025-06-18T10:11:33Z |
---
tags:
- adapter-transformers
- deberta-v2
---
# Adapter `artianand/religion_adapter_deberta_v3_large_race_custom_loss_lamda_05_batch_8` for artianand/deberta-v3-large-race
An [adapter](https://adapterhub.ml) for the `artianand/deberta-v3-large-race` model that was trained on the None dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("artianand/deberta-v3-large-race")
adapter_name = model.load_adapter("artianand/religion_adapter_deberta_v3_large_race_custom_loss_lamda_05_batch_8", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
artianand/race_ethnicity_adapter_roberta_large_race_custom_loss_lamda_05_batch_8
|
artianand
| 2025-06-18T10:08:47Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"roberta",
"region:us"
] | null | 2025-06-18T10:08:37Z |
---
tags:
- adapter-transformers
- roberta
---
# Adapter `artianand/race_ethnicity_adapter_roberta_large_race_custom_loss_lamda_05_batch_8` for Shweta-singh/roberta_large_race_finetuned
An [adapter](https://adapterhub.ml) for the `Shweta-singh/roberta_large_race_finetuned` model that was trained on the None dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("Shweta-singh/roberta_large_race_finetuned")
adapter_name = model.load_adapter("artianand/race_ethnicity_adapter_roberta_large_race_custom_loss_lamda_05_batch_8", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
artianand/age_adapter_roberta_large_race_custom_loss_lamda_05_batch_8
|
artianand
| 2025-06-18T10:03:18Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"roberta",
"region:us"
] | null | 2025-06-18T10:03:12Z |
---
tags:
- adapter-transformers
- roberta
---
# Adapter `artianand/age_adapter_roberta_large_race_custom_loss_lamda_05_batch_8` for Shweta-singh/roberta_large_race_finetuned
An [adapter](https://adapterhub.ml) for the `Shweta-singh/roberta_large_race_finetuned` model that was trained on the None dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("Shweta-singh/roberta_large_race_finetuned")
adapter_name = model.load_adapter("artianand/age_adapter_roberta_large_race_custom_loss_lamda_05_batch_8", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
bhavesh15112004/agromax_fine_tune
|
bhavesh15112004
| 2025-06-18T09:52:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T09:52:16Z |
---
license: apache-2.0
---
|
dianggraaeni/praktikum-ai-modul-6-bert-emotion
|
dianggraaeni
| 2025-06-18T09:43:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-16T13:48:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
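Pending author-provided instructions, a minimal sketch based only on this repository's `bert`/`feature-extraction` tags (the input sentence is an illustrative assumption):
```python
from transformers import AutoModel, AutoTokenizer

model_id = "dianggraaeni/praktikum-ai-modul-6-bert-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("I am so happy today!", return_tensors="pt")
features = model(**inputs).last_hidden_state  # token-level embeddings
print(features.shape)
```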
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhxle/truesight-ft-job-a9cce81f-4295-4975-859d-ede47204bc7b
|
minhxle
| 2025-06-18T09:40:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T09:40:35Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BKM1804/mieumieu
|
BKM1804
| 2025-06-18T09:36:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T09:35:57Z |
---
library_name: transformers
tags:
- trl
- sft
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
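Pending author-provided instructions, a minimal sketch based only on this repository's `llama`/`text-generation`/`conversational` tags (it assumes a chat template is bundled with the tokenizer, which is not verified):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BKM1804/mieumieu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```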
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mahadi249/deepseek-8b-fact-checker
|
Mahadi249
| 2025-06-18T09:36:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T13:08:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hitty28/branch-switch-v1
|
hitty28
| 2025-06-18T09:36:42Z | 0 | 0 | null |
[
"safetensors",
"distilbert",
"text-classification",
"branch-switching",
"intent-classification",
"en",
"dataset:custom",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-06-18T09:36:16Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- text-classification
- branch-switching
- intent-classification
datasets:
- custom
language:
- en
pipeline_tag: text-classification
---
# Branch Switch Classifier
This model classifies whether a user statement indicates a desire to switch branches.
## Model Details
- Base Model: DistilBERT
- Task: Binary Text Classification
- Labels: True, False
## Usage
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="hitty28/branch-switch-v1")
result = classifier("I want to switch to Mumbai branch")
print(result)
```
## Training Data
Trained on a custom dataset of statements about branch-switching intentions.
|
nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-sft-Inshort
|
nis12ram
| 2025-06-18T09:30:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:nis12ram/Inshorts-ds",
"base_model:nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-Inshort",
"base_model:finetune:nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-Inshort",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T09:59:05Z |
---
library_name: transformers
license: apache-2.0
datasets:
- nis12ram/Inshorts-ds
language:
- en
base_model:
- nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-Inshort
---
## Model Card for qwen2.5-0.5B-Instruct-pruned-distill-sft-Inshort
<!-- Provide a quick summary of what the model is/does. -->
SFT(model=qwen2.5-0.5B-Instruct-pruned-distill-Inshort) = qwen2.5-0.5B-Instruct-pruned-distill-sft-Inshort
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a fine-tuned version of qwen2.5-0.5B-Instruct-pruned-distill-Inshort, trained on the [Inshorts-ds](https://huggingface.co/datasets/nis12ram/Inshorts-ds) dataset.
---
**NOTE**
**This model is part of my project, where I explore pruning a capable teacher model and recovering its performance through distillation (specifically, behavior cloning) and supervised fine-tuning (SFT), focused on an Inshorts-style summarization task.**
---
**This model will act as a final model.**
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- All Qwen2DecoderLayer modules are trainable; the rest of the model is frozen.
- Supervised fine-tuning (SFT) is used as the training method.
#### Training Hyperparameters
- Batch = 8, Gradient Accumulation = 1
- Warmup Ratio = 0.05
- epochs = 1
- Optimizer = adamw_8bit
- Learning Rate = 5e-5
- Lr Scheduler Type = linear
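A minimal sketch of this SFT setup with TRL is shown below; everything beyond the hyperparameters listed above (output directory, split name, trainer plumbing) is an assumption, not the actual training script.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_id = "nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-Inshort"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = load_dataset("nis12ram/Inshorts-ds", split="train")

config = SFTConfig(
    output_dir="outputs",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=1,
    warmup_ratio=0.05,
    num_train_epochs=1,
    optim="adamw_8bit",          # 8-bit AdamW, as reported above
    learning_rate=5e-5,
    lr_scheduler_type="linear",
)
trainer = SFTTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```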
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The initial evaluation began with the **ROUGE score**; however, this approach was quickly abandoned, as ROUGE fails to capture semantic meaning and contextual understanding, both of which are crucial for evaluating abstractive summarization.
As a result, a **custom evaluation pipeline** was adopted. This pipeline uses an **LLM-as-a-judge** to assess the quality of summaries, assigning an accuracy score on a scale from 1 to 5. Side-by-side human evaluation was also performed on a few selected datapoints.
**Check out the [Colab Notebook](https://colab.research.google.com/drive/1o30m7oy8p0ofO8hkJu-TnohioDRQh10I?usp=sharing) for the code of custom evaluation pipeline**
### LLM-as-a-judge details
- model = [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- sampling technique = greedy sampling
- prompt =
```python
system_prompt_for_accuracy = '''YOU ARE A HIGHLY RELIABLE NEWS HEADLINE EVALUATION JUDGE, TRAINED TO ASSESS PREDICTED HEADLINES BASED SOLELY ON THEIR ACCURACY AND FAITHFULNESS TO THE ORIGINAL NEWS CONTENT. YOUR PRIMARY OBJECTIVE IS TO ENSURE THAT THE PREDICTED HEADLINES ARE:
1. **NOT MISLEADING OR HALLUCINATED**: The predicted headline must accurately reflect the original news content without adding false information or exaggerating details.
2. **FAITHFUL TO THE ORIGINAL NEWS CONTENT**: The headline should summarize the essence of the news while maintaining neutrality and factual correctness.
### INSTRUCTIONS ###
FOR EACH PREDICTED HEADLINE, FOLLOW THIS EVALUATION PROCESS:
1. **UNDERSTAND THE INPUTS:**
- ORIGINAL_NEWS_CONTENT: The full news article that serves as the source.
- PREDICTED_HEADLINE: The generated headline to be evaluated.
2. **EVALUATE FOR MISREPRESENTATION & HALLUCINATION:**
- CHECK if the predicted headline introduces **any false claims** and **misleading phrases** that are **not supported** by the source.
- RATE on a scale of 1-5:
- (1) **Severely Misleading** – The headline contains major inaccuracies, false claims, or is entirely unrelated to the news content.
- (2) **Largely Inaccurate** – The headline distorts key facts, introduces misleading implications, or exaggerates information.
   - (3) **Partially Accurate** – The headline is mostly correct but includes minor distortions or slightly misleading phrasing.
- (4) **Mostly Accurate** – The headline aligns well with the source but may have slight nuances or wording that could be improved.
- (5) **Fully Accurate** – The headline is entirely faithful to the source, correctly summarizing key details with no factual distortions.
### WHAT NOT TO DO ###
- NEVER ACCEPT A HEADLINE THAT IS FACTUALLY INCORRECT OR MISLEADING.
- NEVER IGNORE SUBTLE DIFFERENCES IN MEANING THAT COULD CHANGE THE FACTUAL ACCURACY.
### OUTPUT FORMAT ###
Your evaluation should be structured as follows:
```json
{
"predicted_headline": "...",
"score": "X/5",
"feedback": "..."
}
```'''
user_prompt_for_accuracy = '''News Content: {content}
Predicted Headline: {predicted_headline}
'''
```
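A minimal sketch of invoking such a judge with greedy decoding is given below; `system_prompt_for_accuracy` and `user_prompt_for_accuracy` are the prompts defined above, while the chat-pipeline plumbing and JSON extraction are assumptions, not the exact evaluation code.
```python
import json
import re

from transformers import pipeline

judge = pipeline("text-generation", model="Qwen/Qwen2.5-32B-Instruct",
                 torch_dtype="auto", device_map="auto")

news_content = "..."        # article text from the test set
predicted_headline = "..."  # model-generated summary

messages = [
    {"role": "system", "content": system_prompt_for_accuracy},
    {"role": "user", "content": user_prompt_for_accuracy.format(
        content=news_content, predicted_headline=predicted_headline)},
]
# do_sample=False gives the greedy sampling described above.
reply = judge(messages, max_new_tokens=256, do_sample=False)[0]["generated_text"][-1]["content"]

# The judge answers with a JSON object; pull out the score field (e.g. "4/5").
score = json.loads(re.search(r"\{.*\}", reply, re.DOTALL).group(0))["score"]
```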
### Results
#### ✅ Accuracy Score [**main evaluation criteria**]
| Metric | Value |
|----------------|-------|
| Accuracy Score | **3.8033** |
#### 📝 ROUGE Score
| Metric | Score |
|------------|--------|
| ROUGE-1 | 0.4020 |
| ROUGE-2 | 0.1808 |
| ROUGE-L | 0.3642 |
| ROUGE-Lsum | 0.3642 |
#### 🎯 Accuracy-Aware ROUGE Score
| Metric | Score |
|------------|--------|
| ROUGE-1 | 0.3058 |
| ROUGE-2 | 0.1375 |
| ROUGE-L | 0.2770 |
| ROUGE-Lsum | 0.2770 |
---
**NOTE**
**Evaluation clearly shows that the final model surpasses the original [instruct model](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-Inshort) on the test set.**
This project was mainly done to evaluate how much performance can be regained after pruning, not to create an excellent news summarizer.
---
## GitHub Repository
**[github](https://github.com/nis12ram/Inshorts-experiments)**
## All Models
- [qwen2.5-0.5B-Instruct-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-Inshort)
- [qwen2.5-0.5B-Instruct-pruned-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-pruned-Inshort)
- [qwen2.5-0.5B-Instruct-pruned-distill-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-Inshort)
- [qwen2.5-0.5B-Instruct-pruned-distill-sft-Inshort](https://huggingface.co/nis12ram/qwen2.5-0.5B-Instruct-pruned-distill-sft-Inshort)
|
llmware/phi-4-mini-npu-v2-ov
|
llmware
| 2025-06-18T09:29:47Z | 0 | 0 | null |
[
"openvino",
"phi3",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T09:19:59Z |
---
license: apache-2.0
---
|
JYK0820/Qwen2.5-7b-vl-merged
|
JYK0820
| 2025-06-18T09:29:01Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"llama-factory",
"license:unknown",
"region:us"
] | null | 2025-06-17T09:45:46Z |
---
license: unknown
tags:
- llama-factory
---
|
Jade-Software/Jade-ModernBert-FT
|
Jade-Software
| 2025-06-18T09:27:18Z | 628 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10217",
"loss:CachedMultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-04-08T02:47:07Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10217
- loss:CachedMultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: What integer value is assigned to the global constant SDS_SecondaryType
in JADE?
sentences:
- '#### drawWidth
**Type:** - Integer
**Availability:** - Read or write at run time only
The **drawWidth **property of the [Window](../window_class/window_class.htm) class
contains the line width for output from graphics methods on a form or control.
Set the **drawWidth** property to a value in the range **1** through **32,767**. This
value represents the width of the line in pixels. The default value is **1**
pixel wide.
Increase the value of the **drawWidth** property to increase the width of the
line.'
- '#### JadeDynamicObjectTypes Category Global Constants
The global constants listed in the following table define symbolic names for the
values of the [JadeDynamicObject](../../encyclosys1/jadedynamicobject_class/jadedynamicobject_class.htm#jadedynamicobjectclass)
class [type](../../encyclosys1/jadedynamicobject_class/type.htm#typejadedynamicobject)
attribute of dynamic objects returned from [JadeDatabaseAdmin](../../encyclosys1/jadedatabaseadmin_class/jadedatabaseadmin_class.htm#jadedatabaseadminclass)
class query methods.
| Global Constant | Integer Value |
| ---- | ---- |
| SDS_PrimaryType | 1 |
| SDS_SecondaryProxyType | 2 |
| SDS_SecondaryType | 3 |
| SDS_TransactionType | 4 |'
- "#### sortOrder\n\n**Type:** - Integer\n\n**Availability:** - Read or write at\
\ run time only\n\nThe **sortOrder **property of the [JadeTableColumn](jadetablecolumn_class.htm)\
\ class contains the precedence of the column referenced by this object when sorting,\
\ in the range **1** through **3**, or it contains zero (**0**) to remove sorting\
\ on the current column.\n\nFor a description of this property, see the [Table](../../encyclowin/control_class/table_class.htm#tableclass)\
\ control [sortColumn](../../encyclowin/window__form__and_control_properties/sortcolumn.htm#sortcolumnwin)\
\ property. See also the [JadeTableColumn](jadetablecolumn_class.htm) class [sortAsc](sortasc.htm),\
\ [sortCased](sortcased.htm), and [sortType](sorttype.htm) properties, which are\
\ dependent on the column already being recorded as a sort column by the **sortOrder**\
\ property.\n\nThe code fragment in the following example shows the use of the\
\ **sortOrder** property.\n\n```\ntable1.accessColumn(2).sortOrder := 1; //\
\ first column in sort\r\ntable1.accessColumn(4).sortOrder := 2; // second column\r\
\ntable1.accessColumn(5).sortOrder := 3; // third column\n```"
- source_sentence: How are values in the ByteArray referenced?
sentences:
- "#### findAllElementsByNameNS\n\n```\nfindAllElementsByNameNS(namespaceURI: String;\r\
\n localName: String;\r\n elements:\
\ JadeXMLElementArray input);\n```\nThe **findAllElementsByNameNS **method\
\ of the [JadeXMLElement](jadexmlelement_class.htm) class fills the elements array\
\ with all descendant elements that have the values specified in the **namespaceURI**\
\ and **localName** parameters, respectively.\n\nAs the search uses the collection\
\ sequence, the elements may not be in the document sequence.\n\nIf you want to\
\ match all namespaces or local names, specify an asterisk character (**'*'**)\
\ in the **namespaceURI** or **localName** parameter. Note, however, that if\
\ you specify **\"*\"** in the **localName** parameter, the access method uses\
\ the document sequence to locate the requested elements rather than the collection\
\ sequence that optimizes performance."
- '## ByteArray Class
The **ByteArray** class is an ordered collection of [Byte](../../encycloprim/byte_type/byte_type.htm#byte)
values in which the values are referenced by their position in the collection.
Byte arrays inherit the methods defined in the [Array](../array_class/array_class.htm)
class.
The bracket (**[ ]**) subscript operators enable you to assign values to and receive
values from a **Byte** array.
For details about the methods defined in the **ByteArray** class, see "[ByteArray
Methods](bytearray_methods.htm)", in the following section.
[Array](../array_class/array_class.htm)
(None)'
- '#### Exposing Properties for a Selected Class
To expose all properties for a selected class
- Right‑click on the class row in the **Classes** table and then select the **Expose
Properties for Selected Class** command from the popup menu that is displayed.
This command does _not_ automatically add methods or constants to the C# exposure,
even if the **Show Methods** or **Show Constants** option is checked. (For details,
see "[Toggling the Display of Methods](toggling_the_display_of_methods.htm)" or
"[Toggling the Display Constants](toggling_the_display_of_constants.htm)", later
in this chapter.)
All properties in that class are then exposed for inclusion in the C# exposure;
that is, each property check box in the **Features** pane is checked, indicating
that the properties for that class will be generated in the C# class library.
You can tailor the property selection by unchecking the check box of any property
that you want to exclude from the exposure.'
- source_sentence: How can you resolve opening database error 14544 in single user
mode?
sentences:
- "#### Changing Lock Type\n\nA type upgrade can queue and potentially time out,\
\ causing a [JoobObjectLockedException](joobobjectlockedexception.htm) to be thrown,\
\ if the requested type is not compatible with existing locks. For example, this\
\ could happen when upgrading a shared lock to exclusive.\n\nLock type downgrades\
\ will never be queued, as the strength is being lowered so there will be no lock\
\ incompatibilities.\n\nWhen a Jade session is in transaction state, requests\
\ to downgrade lock type are ignored. The lock maintains its current type. However,\
\ lock types can be upgraded regardless of transaction state.\n\nWhen a lock type\
\ is being upgraded from shared to update, the object is unlocked before the update\
\ lock is requested. This happens even if the Jade session is in transaction state,\
\ and is the only situation where an object is unlocked while in transaction state.\
\ The reason for doing this is to prevent potential deadlocks, as discussed in\
\ more detail under \"[Avoiding Deadlock Exceptions](avoiding_deadlock_exceptions.htm)\"\
, later in this chapter.\n\nThe following code fragment gives examples of upgrading\
\ and downgrading lock types.\n\n```\nTimeSpan timeOut = TimeSpan.FromSeconds(10);\r\
\ncontext.Lock(obj1, LockType.Shared, LockDuration.Transaction, timeOut);\r\n\
context.Lock(obj1, LockType.Reserve, LockDuration.Transaction, timeOut);\r\n \
\ // The lock is now upgraded from shared to reserve.\r\
\ncontext.Lock(coll, LockType.Exclusive, LockDuration.Transaction, timeOut);\r\
\n \r\nusing (System.Data.IDbTransaction tran = context.BeginTransaction())\r\
\n{\r\n context.Lock(obj1, LockType.Exclusive, LockDuration.Transaction,\r\n\
\ timeOut); // The lock type is upgraded to exclusive, as\r\
\n // locks can be upgraded (but not downgraded)\r\
\n // when in transaction state.\r\n foreach\
\ (C1 obj2 in coll)\r\n {\r\n // The exclusive lock on coll is not downgraded\
\ by the implicit shared\r\n // lock associated with foreach, because transaction\
\ state is in effect.\r\n }\r\n context.Lock(obj1, LockType.Shared, LockDuration.Transaction,\
\ timeOut);\r\n // The lock type is not downgraded, but remains\
\ as exclusive.\r\n tran.Commit(); // All transaction duration locks are\
\ released.\r\n}\n```"
- '### 1411 - Attempt to add unknown system file
Cause
This error occurs if the system schema maintenance function attempts to add a
new unknown system file.
Action
This is an internal error. If your Jade licenses include support, contact your
local Jade support center or Jade Support.'
- '### 14544 - A concurrent process has already opened the same database
Cause
This error occurs if you attempt to open a database that is already open in single
user (exclusive) mode.
Action
Determine in which mode the database should be opened; that is, single user or
multiuser mode.'
- source_sentence: What is the cause of the 3323 DbCrypt error?
sentences:
- '### 3323 - DbCrypt memory allocation failure
Cause
This error occurs if a memory allocation error occurs in the use of the database
encryption module.
Action
If your Jade licenses include support, contact your local Jade support center
or Jade Support.'
- '### 3028 - Database file is in use by another process
Cause
This error occurs if you attempt to open a database file that is already open
by another process.
Action
Refer to the Jade messages log file (**jommsg.log**) for information about the
file. Generally, another program is accessing the file or the database as a whole.'
- '### Where Do Jade Methods Execute?
Jade methods execute only in Jade nodes. A Jade node is the fundamental building
block of Jade''s distributed architecture. Each node contains the Jade Object
Manager (JOM), the Jade Interpreter, various caches, and one or more Jade processes.
The Jade thin client is _not_ a Jade node; Jade methods do not execute there,
although a great deal of effort has been expended to make it look as though they
do.
In most production systems, there is one database server node (**jadrap.exe**,
**jadrapb.exe**, or **jadserv.exe**), one or more application server nodes (**jadapp.exe**
or **jadappb.exe**), and one or more fat/standard client nodes (**jade.exe**)
for background processing, web services, or HTML forms.
When **jade.exe** is run in single user mode, there is one node only.'
- source_sentence: Which subclasses are associated with the JadeXMLCharacterData class?
sentences:
- '## JadeXMLCharacterData Class
The **JadeXMLCharacterData** class is the abstract superclass of character-based
nodes in an XML document tree; that is, the text, **CDATA**, and comment nodes.
For details about the property defined in the **JadeXMLCharacterData** class,
see "[JadeXMLCharacterData Property](jadexmlcharacterdata_property.htm)", in the
following section.
[JadeXMLNode](../jadexmlnode_class/jadexmlnode_class.htm)
[JadeXMLCDATA](../jadexmlcdata_class/jadexmlcdata_class.htm), [JadeXMLComment](../jadexmlcomment_class/jadexmlcomment_class.htm),
[JadeXMLText](../jadexmltext_class/jadexmltext_class.htm)'
- "### Minimizing the Working Set\n\nIn loops where there are multiple filters,\
\ apply the cheapest filters first and then the filters that reduce the working\
\ set the most. For example, consider the following code fragment, which finds\
\ sales of appliances in a specified city.\n\n```\nwhile iter.next(tran) do\r\n\
\ if tran.type = Type_Sale\r\n and tran.myBranch.myLocation.city = targetCity\r\
\n and tran.myProduct.isAppliance then\r\n <do something with tran>\r\
\n endif;\r\nendwhile;\n```\nIn this example, **tran.type** should be checked\
\ first, because it is the cheapest. The **tran** object must be fetched to evaluate\
\ all of the other conditions, so we may as well check the **type** attribute\
\ first. If we did the **isAppliance** check first, we would have to fetch all\
\ of the product objects for the transactions that were not sales. Regardless\
\ of how many transactions are sales and how many products are appliances, it\
\ will save time to check **tran.type** first.\n\nNow, assume that:\n\n- 80 percent\
\ of transactions are sales\n\n- 15 percent, on average, are likely to be in the\
\ target city\n\n- 90 percent of the products are appliances\n\nIt pays to check\
\ the city first, even though it means fetching the branch and location objects\
\ for the non‑appliance products. There are very few non‑appliance products, so\
\ the number of extra fetches is small. By contrast, checking for non‑appliance\
\ products for all other cities would result in a large number of extra fetches.\n\
\nIt doesn't matter if the filters are conditions of an [if](../../devref/ch1languageref/if_instruction.htm#if)\
\ instruction, multiple [if](../../devref/ch1languageref/if_instruction.htm#if)\
\ instructions, or multiple conditions in the [where](../../devref/ch1languageref/where_clause_optimization.htm#whereoptimization)\
\ clause of a [while](../../devref/ch1languageref/while_instruction.htm#while)\
\ statement; the end result is the same.\n\nThis code fragment example is simple\
\ and concise, to convey the concept. In the real world, each successive filter\
\ may be in another method, another class, or even another schema. It may take\
\ a bit of investigation to find all of the filters involved in a single loop."
- '##### responseType
Use the **responseType** parameter of the [beginNotification](beginnotification.htm)
method to specify the frequency with which the subscribed event was notified.
The valid values for the **responseType** parameter, represented by global constants
in the [NotificationResponses](../../encycloprim/appaglobalconstants/notificationresponses_category.htm#notificationresponsescategory)
category, are listed in the following table.
| Global Constant | Integer Value | Sends a notification… |
| ---- | ---- | ---- |
| Response_Cancel | 1 | When the object receives a matching event and then cancels
the notification |
| Response_Continuous | 0 | Whenever the object receives a matching event |
| Response_Suspend | 2 | When the object receives a matching event and then suspends
notification until the user refreshes the local copy of the object |'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# Jade-modernbert-ft
`internally on leaderboard known as jade-ft-14-bert`
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the jade_embeddings_train_25.04.04 dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- jade_embeddings_train_25.04.04
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("lwoollett/jade-ft-14-bert-static")
# Run inference
sentences = [
'Which subclasses are associated with the JadeXMLCharacterData class?',
'## JadeXMLCharacterData Class\n\nThe **JadeXMLCharacterData** class is the abstract superclass of character-based nodes in an XML document tree; that is, the text, **CDATA**, and comment nodes.\n\nFor details about the property defined in the **JadeXMLCharacterData** class, see "[JadeXMLCharacterData Property](jadexmlcharacterdata_property.htm)", in the following section.\n\n[JadeXMLNode](../jadexmlnode_class/jadexmlnode_class.htm)\n\n[JadeXMLCDATA](../jadexmlcdata_class/jadexmlcdata_class.htm), [JadeXMLComment](../jadexmlcomment_class/jadexmlcomment_class.htm), [JadeXMLText](../jadexmltext_class/jadexmltext_class.htm)',
"### Minimizing the Working Set\n\nIn loops where there are multiple filters, apply the cheapest filters first and then the filters that reduce the working set the most. For example, consider the following code fragment, which finds sales of appliances in a specified city.\n\n```\nwhile iter.next(tran) do\r\n if tran.type = Type_Sale\r\n and tran.myBranch.myLocation.city = targetCity\r\n and tran.myProduct.isAppliance then\r\n <do something with tran>\r\n endif;\r\nendwhile;\n```\nIn this example, **tran.type** should be checked first, because it is the cheapest. The **tran** object must be fetched to evaluate all of the other conditions, so we may as well check the **type** attribute first. If we did the **isAppliance** check first, we would have to fetch all of the product objects for the transactions that were not sales. Regardless of how many transactions are sales and how many products are appliances, it will save time to check **tran.type** first.\n\nNow, assume that:\n\n- 80 percent of transactions are sales\n\n- 15 percent, on average, are likely to be in the target city\n\n- 90 percent of the products are appliances\n\nIt pays to check the city first, even though it means fetching the branch and location objects for the non‑appliance products. There are very few non‑appliance products, so the number of extra fetches is small. By contrast, checking for non‑appliance products for all other cities would result in a large number of extra fetches.\n\nIt doesn't matter if the filters are conditions of an [if](../../devref/ch1languageref/if_instruction.htm#if) instruction, multiple [if](../../devref/ch1languageref/if_instruction.htm#if) instructions, or multiple conditions in the [where](../../devref/ch1languageref/where_clause_optimization.htm#whereoptimization) clause of a [while](../../devref/ch1languageref/while_instruction.htm#while) statement; the end result is the same.\n\nThis code fragment example is simple and concise, to convey the concept. In the real world, each successive filter may be in another method, another class, or even another schema. It may take a bit of investigation to find all of the filters involved in a single loop.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### jade_embeddings_train_25.04.04
* Dataset: jade_embeddings_train_25.04.04
* Size: 10,217 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.17 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 363.15 tokens</li><li>max: 6303 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the format for defining a Byte constant in JADE?</code> | <code>##### Constant Definition Tips<br><br>When defining a constant value, the value of a constant can be a simple literal value or an expression constructed using literals and other constants. For details about literal types, see "[Literals](../../devref/ch1languageref/literals.htm#literalsexpr)", in Chapter - 1 of the _Developer's Reference_.<br><br>You can define the value for a constant whose primitive type is not a specific literal format by using a typecast of a [String](../../encycloprim/string_type/string_type.htm#string) literal or in the case of a [Byte](../../encycloprim/byte_type/byte_type.htm#byte), a small [Integer](../../encycloprim/integer_type/integer_type.htm#integer) literal, as shown in the examples in the following table.<br><br>| Primitive Type | Value Expression |<br>| ---- | ---- |<br>| Date | "31/12/2007".Date |<br>| Time | "14:34:23.123".Time |<br>| TimeStamp | "31/12/2007, 14:34:23:123".TimeStamp |<br>| Point | "1,7".Point |<br>| Byte | 0.Byte |<br><br>For details about typecasting, see "[Type Casts](../...</code> |
| <code>How does the replaceFrom__ method handle case sensitivity?</code> | <code>#### replaceFrom__<br><br>```<br>replaceFrom__(target: String;
<br> replacement: String;
<br> startIndex: Integer;
<br> bIgnoreCase: Boolean): String;<br>```<br>The **replaceFrom__** method of the [String](string_type.htm) primitive type replaces only the first occurrence of the substring specified in the **target** parameter with the substring specified in the **replacement** parameter, starting from the specified **startIndex** parameter.<br><br>Case‑sensitivity is ignored if you set the value of the **bIgnoreCase** parameter to **true**. Set this parameter to **false** if you want the substring replacement to be case‑sensitive.<br><br>This method raises exception 1413 (_Index used in string operation is out of bounds_) if the value specified in the **startIndex** parameter is less than **1** or it is greater than the length of the original string. In addition, it returns the original receiver String if the value specified in the **target** parameter has a length of zero (**...</code> |
| <code>What does the global constant Ex_Continue do?</code> | <code>## Exceptions Category<br><br>The global constants for exceptions are listed in the following table.<br><br>| Global Constant | Integer Value | Description |<br>| ---- | ---- | ---- |<br>| Ex_Abort_Action | 1 | Causes the currently executing methods to be aborted. |<br>| Ex_Continue | 0 | Resumes execution from the next expression after the expression that caused the exception. |<br>| Ex_Pass_Back | -1 | Passes control back to the prior local exception handler for this type of exception, or if a local handler is not found, a global exception handler for this type of exception. |<br>| Ex_Resume_Method_Epilog | 3 | Passes control back to the method that armed the exception handler. Execution resumes at the start of the method epilog or at the end of the method if there is no epilog section. Execution resumes at the next statement in the epilog if the exception was raised while executing the epilog. If there were no messages on the execution stack when the handler was armed, the effect of theEx_Resume_Method_Epilog...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32
}
```
### Evaluation Dataset
#### jade_embeddings_train_25.04.04
* Dataset: jade_embeddings_train_25.04.04
* Size: 1,136 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.07 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 365.93 tokens</li><li>max: 3397 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the keyword list constant value for JADE_SYSTEMVARS?</code> | <code>### changeKeywords<br><br>```<br>changeKeywords(action: Integer;
<br> keywordList: Integer;
<br> keywords: String);<br>```<br>The **changeKeywords** method of the [JadeTextEdit](../control_class/jadetextedit_class.htm) class modifies one or more of the current keyword lists. The keyword lists are used by the current language lexical analyzer to classify the tokens found in the text. For the Jade language, this includes keywords, class names, constant names, and so on.<br><br>The value of the **action** parameter can be one of the **JadeTextEdit** class constants listed in the following table.<br><br>| Class Constant | Value | Description |<br>| ---- | ---- | ---- |<br>| KEYWORDS_ADD | 2 | Adds the keywords specified in thekeywordsparameter to the list specified in thekeywordListparameter. |<br>| KEYWORDS_DELETE | 3 | Deletes the words specified in thekeywordsparameter from the list specified in thekeywordListparameter. |<br>| KEYWORDS_SET | 1 | Clears the list specified in thekeywordListparam...</code> |
| <code>What should you click to abandon the deletion of a report in JADE?</code> | <code>#### Delete Report Command<br><br>Use the **Delete Report** command from the File menu to delete a report.<br><br>To delete a report<br><br>1. Select the **Delete Report** command from the File menu. The Delete Report dialog, shown in the following image, is then displayed.<br><br>[](../images/reportdelete_feb2022.png)<br><br>2. Select the report that you want to delete from the **Report** list box or enter the name in the **Report name** text box.<br><br>3. Filter the list of report names in the **Reports** list box in one or both of the following ways.<br><br> - To display only those reports that contain that text in their report description, enter text in the **Text contains** text box. For example, only those reports that mention **Pay** in their description are displayed if you enter **Pay**, providing a refined selection list.<br><br> - To display only those reports modified during a specified period, select a last modified period from the **Last modified** list box. For example, only those reports that were modified in...</code> |
| <code>What types of objects can be set for the userGroupObject in JadeMultiWorkerTcpTransport?</code> | <code>#### userGroupObject<br><br>**Type:** - Object<br><br>The **userGroupObject** property of the [JadeMultiWorkerTcpTransport](jademultiworkertcptransport_class.htm) class contains a reference to an object that you can associate with the transport group between event callbacks.<br><br>You must set the value of this property to a shared transient or a persistent object, as it must be visible to other workers.<br><br>The default value is **null**.<br><br>To prevent an object leak, it is your responsibility to delete this object, if required, in your implementation of the [closedEvent](../jademultiworkertcptransportif_interface/closedevent.htm) method in the receiver class.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 18
- `per_device_eval_batch_size`: 18
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 18
- `per_device_eval_batch_size`: 18
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
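For reference, a minimal sketch of reproducing this setup with the Sentence Transformers v3+ trainer follows; the tiny in-memory dataset stands in for jade_embeddings_train_25.04.04, which is not public.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Stand-in for the private anchor/positive training pairs.
train_dataset = Dataset.from_dict({
    "anchor": ["What integer value is assigned to the global constant SDS_SecondaryType in JADE?"],
    "positive": ["| SDS_SecondaryType | 3 |"],
})

loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=32)
args = SentenceTransformerTrainingArguments(
    output_dir="jade-ft-14-bert",
    per_device_train_batch_size=18,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```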
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.1761 | 100 | 0.0851 | 0.0243 |
| 0.3521 | 200 | 0.0262 | 0.0211 |
| 0.5282 | 300 | 0.0275 | 0.0217 |
| 0.7042 | 400 | 0.0216 | 0.0256 |
| 0.8803 | 500 | 0.0283 | 0.0241 |
| 1.0563 | 600 | 0.0226 | 0.0195 |
| 1.2324 | 700 | 0.0113 | 0.0170 |
| 1.4085 | 800 | 0.0114 | 0.0204 |
| 1.5845 | 900 | 0.0165 | 0.0182 |
| 1.7606 | 1000 | 0.0129 | 0.0219 |
| 1.9366 | 1100 | 0.0126 | 0.0181 |
| 2.1127 | 1200 | 0.0069 | 0.0207 |
| 2.2887 | 1300 | 0.0045 | 0.0212 |
| 2.4648 | 1400 | 0.0046 | 0.0187 |
| 2.6408 | 1500 | 0.0056 | 0.0206 |
| 2.8169 | 1600 | 0.0084 | 0.0196 |
| 2.9930 | 1700 | 0.0050 | 0.0214 |
| 3.1690 | 1800 | 0.0056 | 0.0202 |
| 3.3451 | 1900 | 0.0088 | 0.0190 |
| 3.5211 | 2000 | 0.0026 | 0.0202 |
| 3.6972 | 2100 | 0.0064 | 0.0205 |
| 3.8732 | 2200 | 0.0060 | 0.0202 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.0.2
- Transformers: 4.51.0
- PyTorch: 2.8.0.dev20250319+cu128
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
xiejingjacob/ppo-LunarLander-v2
|
xiejingjacob
| 2025-06-18T09:25:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T09:25:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.71 +/- 27.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the trained agent (the checkpoint filename follows the Deep RL course convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub("xiejingjacob/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
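To evaluate the agent locally (episode count is illustrative; `LunarLander-v2` requires `gymnasium[box2d]` and was renamed `LunarLander-v3` in gymnasium 1.0):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```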
|
YifanXu24/OPUS-InstructionCorpus-Benchmark
|
YifanXu24
| 2025-06-18T09:18:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-05-13T08:07:27Z |
---
license: apache-2.0
---
|
SimulaMet/PointDetectCount-Qwen2.5-VL-7B-LoRA
|
SimulaMet
| 2025-06-18T09:17:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:2505.16647",
"arxiv:2106.09685",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] | null | 2025-06-18T08:56:35Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: peft
---
# 🩺 PointDetectCount-Qwen2.5-VL-7B-LoRA
**Model:** `SimulaMet/PointDetectCount-Qwen2.5-VL-7B-LoRA`
**Base model:** [`Qwen/Qwen2.5-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)
**Library:** `peft` (LoRA)
**Paper:** [arXiv:2505.16647](https://doi.org/10.48550/arXiv.2505.16647)
**Code:** [GitHub - simula/PointDetectCount](https://github.com/simula/PointDetectCount)
**Dataset:** [`SimulaMet/MedMultiPoints`](https://huggingface.co/datasets/SimulaMet/MedMultiPoints)
---
## 📌 Model Summary
`PointDetectCount-Qwen2.5-VL-7B-LoRA` is a **multi-task medical vision-language model** fine-tuned using **LoRA** on top of **Qwen2.5-VL-7B-Instruct**, a vision-language instruction-following model. This model performs **pointing (localization), bounding box detection**, and **object counting** on medical images using natural language prompts and structured JSON outputs.
It is trained on the [MedMultiPoints dataset](https://huggingface.co/datasets/SimulaMet/MedMultiPoints), a multimodal collection of endoscopic and microscopic images with clinical annotations.
---
## 🧠 Intended Uses
- **Medical image localization**: Predict spatial locations (points/bounding boxes) of anatomical/clinical findings.
- **Object counting**: Accurately estimate number of objects like polyps, clusters, or cells in medical images.
- **Instruction-tuned VQA**: Accepts natural language queries prompting multimodal image understanding.
This model is designed for **research purposes**, particularly in **medical vision-language modeling**, and should not be used directly for clinical diagnosis.
---
## 🚀 How to Use
```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Load the base vision-language model from the Hub (not a local cache path)
# and attach the LoRA adapter.
base_id = "Qwen/Qwen2.5-VL-7B-Instruct"
processor = AutoProcessor.from_pretrained(base_id)
base_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "SimulaMet/PointDetectCount-Qwen2.5-VL-7B-LoRA")

image = Image.open("example.jpg").convert("RGB")
prompt = "Return bounding boxes for each polyp in the image and the total count."

# Qwen2.5-VL expects the image placeholder tokens inserted by the chat template.
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": prompt}]}]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```
---
## 📊 Training Details
- **Fine-tuning method:** [LoRA](https://arxiv.org/abs/2106.09685) (`rank=16`)
- **Frozen components:** Vision encoder (ViT)
- **Trained components:** LLM layers (excluding final LM head)
- **Loss function:** Language modeling loss (cross-entropy over tokens)
- **Format:** Instruction → JSON response (`{"bbox": [...], "count": n, "points": [...]}`)
- **Hardware:** Single NVIDIA A100 (80GB)
- **Epochs:** 5
- **Batch size:** 4 (gradient accumulation used)
- **Learning rate:** 2e-4
---
## 📁 Repository Structure
- `create_datasetJSON.py`: Converts raw annotations into instruction-response format
- `evaluate_qwen.py`: Parses and evaluates model outputs vs. ground truth
- `MedMultiPoints-images/`: Folder containing the training/validation images
---
## 🧪 Evaluation
Each model output is parsed to extract:
- Bounding box coordinates
- Point coordinates
- Object count
The parsed outputs are compared against the ground truth for each modality (GI tract, sperm, clusters, etc.). Accuracy is measured through precision/recall on detection, mean absolute error for counting, and proximity scores for pointing.
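A minimal sketch of the parsing step (the JSON schema follows the response format described above; error handling is simplified):
```python
import json

def parse_response(text: str) -> dict:
    """Extract the structured {"bbox": ..., "count": ..., "points": ...} object."""
    start, end = text.find("{"), text.rfind("}") + 1
    if start == -1 or end == 0:
        raise ValueError("no JSON object found in model output")
    return json.loads(text[start:end])

pred = parse_response('{"bbox": [[10, 20, 50, 60]], "count": 1, "points": [[30, 40]]}')
count_error = abs(pred["count"] - 1)  # contributes to the mean absolute error
```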
---
## 🛑 Limitations
- Trained only on limited domains (GI endoscopy, microscopy).
- Not certified for real-world clinical use.
- Output format depends on correct JSON generation—parsing may fail with malformed outputs.
---
## 📚 Citation
```bibtex
@article{Gautam2025May,
author = {Gautam, Sushant and Riegler, Michael A. and Halvorsen, Pål},
title = {Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models},
journal = {arXiv},
year = {2025},
month = {may},
eprint = {2505.16647},
doi = {10.48550/arXiv.2505.16647}
}
```
---
## 🤝 Acknowledgements
Developed by researchers at **SimulaMet**, **Simula Research Laboratory**, and **OsloMet**.
Part of ongoing efforts to enhance **instruction-tuned medical VLMs** for robust multimodal reasoning.
|
BienThuy/sketch-dog-lora
|
BienThuy
| 2025-06-18T09:07:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-18T07:35:10Z |
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - BienThuy/sketch-dog-lora
These are LoRA adaptation weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were fine-tuned on the zoheb/sketch-scene dataset. Some example images are shown below.




## Intended uses & limitations
#### How to use
A minimal sketch using 🤗 diffusers (the precision, prompt, and step count are illustrative assumptions):
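```python
import torch
from diffusers import StableDiffusionPipeline

# Load the SD 1.5 base pipeline and attach the LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("BienThuy/sketch-dog-lora")

image = pipe("a sketch of a dog running in a field", num_inference_steps=30).images[0]
image.save("sketch_dog.png")
```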
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
MaestrAI/serafina-lora-1750234797
|
MaestrAI
| 2025-06-18T09:03:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T08:19:58Z |
# Serafina LoRA Model
This is a LoRA model for the character Serafina.
Created at 2025-06-18 10:19:57
|
muzerai/Llama-3.1-KoEn-8B-magic8-GGUF
|
muzerai
| 2025-06-18T08:54:16Z | 45 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"arxiv:2406.11617",
"base_model:akjindal53244/Llama-3.1-Storm-8B",
"base_model:merge:akjindal53244/Llama-3.1-Storm-8B",
"base_model:sh2orc/Llama-3.1-Korean-8B-Instruct",
"base_model:merge:sh2orc/Llama-3.1-Korean-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-13T19:08:40Z |
---
license: llama3.1
base_model:
- akjindal53244/Llama-3.1-Storm-8B
- sh2orc/Llama-3.1-Korean-8B-Instruct
library_name: transformers
tags:
- merge
---
# Llama-3.1-Ko-8B-magic8 made by "AIJOAH"
The merged model combining Llama-3.1-Korean-8B-Instruct and Llama-3.1-Storm-8B improves performance across a range of tasks, including Korean-language instruction following, multilingual knowledge-based QA, reasoning, reduced hallucinations, and structured output generation (e.g., JSON, Markdown). This merge is particularly beneficial for developers seeking a strong Korean-capable model that also excels in logic, accuracy, and function calling, while remaining lightweight enough for local inference environments such as Ollama or vLLM.
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method.
### Models Merged
The following models were included in the merge:
* [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
* [sh2orc/Llama-3.1-Korean-8B-Instruct](https://huggingface.co/sh2orc/Llama-3.1-Korean-8B-Instruct)
### Test Q5
```
ollama create modelname -f Modelfile
```
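Here `Modelfile` points at the downloaded GGUF; a minimal sketch (the exact quant filename and temperature are assumptions):
```
FROM ./Llama-3.1-KoEn-8B-magic8.Q5_K_M.gguf
PARAMETER temperature 0.7
```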
```
>>> 이력서 쓰는 방법을 설명해줘
이력서 또는 리졸루션(Resume)란, 자신의 경력을 요약하여 정리한 문서로, 전 직무를 포함한 경험과 자격을 명확하게 표현하는 데 사용됩니다. 이력서는 주요 기업이나 조직이 지원
자를 평가하고 최종적으로 최적의 인재를 선발하기 위해 필수적인 도구입니다.
1. 이력서 작성에 필요한 정보:
- 이름 및 연락처
- 직업 제목 및 직위
- 현재/전 직무 및 회사명 (최근 10년)
- 교육: 학부/대학, 대학원, 특허 등
- 기술스킬:
- 프로그래밍 언어: Java, Python, C++, JavaScript, R etc.
- 데이터 분석: R, Python, SQL, Excel 등
- 데이터 베이스: MySQL, PostgreSQL, MongoDB, Oracle 등
- 운영체제: Windows, Linux, Unix 등
2. 이력서의 내용:
- **신문고지**: 자신의 이름, 연락처, 직업 및 회사명, 직위 등을 보여주는 영역입니다.
- **직무 경력**: 지원자가 가지고 있는 주요 경험과 역할을 표현합니다. 일반적으로 가장 최근부터 시작하여, 일자, 회사명, 회사 위치, 직제 및 업무 내용을 포함해야 합니다.
- **교육/훈련**: 지원자의 학부, 대학원, 특허, 심리테스트 등에 대한 정보를 포함합니다.
- **기술스킬**: 지원자가 가지고 있는 기술스킬에 대해 간단하게 설명합니다. 일반적으로 3-5개의 관련된 기술스킬을 포함합니다.
3. 이력서의 형식:
이력서에는 텍스트, 표, 그래프 등 다양한 데이터를 표현할 수 있습니다. 하지만, 이력서 작성 시는 다음과 같은 사항에 주의해야 합니다.
- **단락**: 1~2줄로 짧게 작성하여, 내용을 읽기 편하게 만듭니다.
- **키워드**: 지원자가 가지고 있는 기술스킬 및 자격을 키워드로 표현합니다. 이력서를 검색할 때, 이러한 키워드를 도움이 됩니다.
- **형식**: 지원자가 보유한 경력을 순차적으로 정리하여, 가장 최근부터 가장 오래전으로 작성합니다.
4. 이력서 작성의 원칙:
이력서는 지원자의 경험과 기술스킬을 명확하고, 일관되게 표현해야 합니다. 지원자는 다음의 원칙에 따라 이력서를 작성할 수 있습니다.
- **사실과 사실** : 자신의 경험과 자격에 대해 진실하게 기재하여야 합니다.
- **일관성** : 동일한 의미와 사용법을 사용하고, 일관성을 유지해야 합니다.
- **시각적 표현** : 이력서를 통해 지원자의 경력을 시각적으로 쉽게 파악할 수 있도록, 표 및 그래프 등으로 표현하는 것이 좋습니다.
이러한 원칙과 형식을 지키면서, 자신의 경력을 정리하여, 이력서를 작성하면, 취업 및 진출에 도움이 될 것입니다.
>>> "백신이 자폐증을 유발한다는 주장은 사실인가?"
자폐증은 의학적으로 아직 명확하게 이해되지 않은 복합적인 질병입니다. 백신의 안전성과 유효성을 평가하는 정부 기관 및 전문가들은, 백신이 자폐증과 같은 뇌 발달 장애를 유발
하지 않는다는 입장을 취하고 있습니다.
자폐증에 대한 백신과의 연결은 1998년에 영국의 의사인 Andrew Wakefield이 발표한 논문에서 시작되었습니다. 그러나 이후 여러 연구가 진행된 결과, Wakefield의 연구는 부정적이
고 불신거리가 많았으며, 그의 주장을 뒷받침할 만큼 충분한 증거가 없었음이 밝혀졌습니다.
미국 식품 의약품 관리청 (FDA)과 세계보건기구 (WHO)는 백신이 자폐증을 유발하지 않는다고firmly believe합니다. 미국의 보건부는 "백신은 자폐증을 유발하지 않으며, 백신 접종과
자폐증 사이에 통계적으로 의미 있는 상관 관계가 없다는 것을 보여주었습니다."라고 발표했습니다.
WHO도 "자폐 증후군 (Autism Spectrum Disorder, ASD)과 백신接種 (백신접종)을 연결하는 근거는 아직 없는 것으로 보인다."고 발표했습니다. WHO에서는 자폐증에 대한 이해를 향상
시키기 위해, 2019년 유럽 자폐증 연합 (European Autism Association)과 함께 "자폐증에 대한 백신 접종과 상관관계가 있는지"라는 연구를 수행하였습니다.
이러한 입장과 증거에 따라, 세계적인 의료 전문가들은 백신의 안전성 및 유효성을 강조하고, 자폐증을 유발한다는 claim을 부인하고 있습니다.
>>>
```
### Citation
If you find our work helpful, feel free to cite us.
AIJOAH
```
@misc{aijoah2025merged,
title = {Merged Llama-3.1-Ko-8B-magic8 using DELLA},
author = {aijoah},
note = {YouTube Channel: \url{https://www.youtube.com/@JayLee-gv8tv}},
year = {2025},
}
```
### Contact
If you have any questions, please raise an issue or contact us at ([email protected]).
|
muzerai/qwen3-8b-aijoah-magic8
|
muzerai
| 2025-06-18T08:53:29Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"merge",
"conversational",
"arxiv:2406.11617",
"arxiv:2505.09388",
"arxiv:2501.12948",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T07:10:42Z |
---
base_model:
- deepseek-ai/deepseek-r1-0528-qwen3-8b
- Qwen/Qwen3-8B-Base
library_name: transformers
tags:
- merge
license: mit
---
# qwen3-8b-aijoah-magic8 made by "AIJOAH"
Subscribe to my YouTube channel: [AIJOAH](https://www.youtube.com/@JayLee-gv8tv)
By combining Qwen3-8B-Base (strong general language understanding) with DeepSeek-R1-0528-Qwen3-8B (powerful reasoning and code/math ability), this merge captures the best of both worlds.
Compared with alternative adaptation approaches, DELLA offers:
- **No full model overwrite:** Instead of replacing the entire base model, DELLA only injects delta weights (differences) from the SFT model.
- **Lighter than LoRA:** LoRA adds extra parameters during inference. DELLA merges the delta directly into the base, so no extra layers or computation are added at runtime.
- **Faster than SFT:** No supervised fine-tuning (SFT) is required. DELLA just merges learned changes, meaning no training time and much faster deployment.
- **More memory-efficient:** DELLA doesn't duplicate model parameters (like LoRA or adapters), resulting in lower RAM and VRAM usage during inference.
- **Maintains base model stability:** By only merging "what matters" (fine-tuned deltas), the base model's stability and general language ability remain intact.
- **Extracts only what works:** DELLA selectively transfers only the useful learned features from the fine-tuned SFT model, such as better instruction-following, reasoning, or coding ability.
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base)
* [deepseek-ai/deepseek-r1-0528-qwen3-8b](https://huggingface.co/deepseek-ai/deepseek-r1-0528-qwen3-8b)
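A hypothetical mergekit configuration for this kind of DELLA merge (the density and weight values are illustrative, not the settings actually used):
```
merge_method: della
base_model: Qwen/Qwen3-8B-Base
models:
  - model: deepseek-ai/deepseek-r1-0528-qwen3-8b
    parameters:
      density: 0.5   # fraction of delta parameters kept
      weight: 0.5    # scaling applied to the merged delta
dtype: bfloat16
```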
### Test
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "./qwen3-8b-aijoah-magic8"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
```
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:03<00:00, 1.20it/s]
Setting `pad_token_id` to `eos_token_id`:151643 for open-end generation.
thinking content: <think>
Okay, the user asked for a short introduction to large language models. Let me start by understanding their request. They want something brief, so I need to keep it concise but informative.
First, I should define what LLMs are. They're AI systems trained on massive text data. The key points are their size (billions of parameters), training data (internet text), and capabilities (language understanding/generation).
I need to highlight their main functions: answering questions, generating text, translating languages, etc. Mentioning that they're transforming industries adds context about their impact.
Wait, the user might be a student or someone new to AI. They probably want a clear, jargon-free explanation. Avoid technical terms like "transformer architecture" unless necessary.
Also, check if there's an unspoken need. Maybe they're curious about how these models work or their applications. But since the query is for a short intro, stick to the basics.
Make sure the response is engaging but not overwhelming. Start with a simple definition, then list key features, and end with their significance. Keep it structured but natural.
Double-check for clarity. Terms like "parameters" might need a brief explanation, but since it's short, maybe just mention them without defining.
Alright, draft it out: Start with "What are LLMs?", explain their training, size, functions, and impact. Keep sentences short. That should cover the user's needs and any underlying curiosity.
</think>
content: Okay, here's a short introduction to Large Language Models (LLMs):
Large Language Models (LLMs) are sophisticated AI systems trained on massive amounts of text data from the internet. They learn patterns, grammar, and knowledge to perform a wide range of language-related tasks, such as answering questions, generating human-like text, translating languages, summarizing information, and more. Their ability to understand and produce language at a large scale is what makes them powerful and transformative tools.
```
### Citation
If you find our work helpful, feel free to cite us.
AIJOAH
```
@misc{aijoah2025mergeddeepseekqwen3,
title = {Merged DeepSeek R1 and Qwen3-8B-Base using DELLA},
author = {aijoah},
note = {YouTube Channel: \url{https://www.youtube.com/@JayLee-gv8tv}},
year = {2025},
howpublished = {\url{https://huggingface.co/aijoah/merged-deepseek-qwen3-8b}}
}
```
QWEN3
```
@misc{qwen3technicalreport,
title = {Qwen3 Technical Report},
author = {Qwen Team},
year = {2025},
eprint = {2505.09388},
archivePrefix= {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2505.09388}
}
```
DeepSeek-R1
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title = {DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author = {DeepSeek-AI},
year = {2025},
eprint = {2501.12948},
archivePrefix= {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2501.12948}
}
```
### Contact
If you have any questions, please raise an issue or contact us at ([email protected]).
|
henghuggingface/Huggy
|
henghuggingface
| 2025-06-18T08:48:10Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-06-18T08:47:09Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: henghuggingface/Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
TBurdairon/finegrain-image-enhancer-model
|
TBurdairon
| 2025-06-18T08:46:38Z | 0 | 0 | null |
[
"safetensors",
"esrgan",
"ESRGAN",
"super-resolution",
"enhancer",
"image-to-image",
"region:us"
] |
image-to-image
| 2025-06-18T08:12:35Z |
---
pipeline_tag: image-to-image
tags:
- ESRGAN
- super-resolution
- enhancer
---
# Finegrain Image Enhancer (ESRGAN-based)
This model enhances image quality using ESRGAN and custom ControlNet/LoRA techniques.
## Usage
```python
from huggingface_hub import InferenceClient

# "your-username/..." and "hf_xxx" are placeholders for your repo id and access token.
client = InferenceClient("your-username/finegrain-image-enhancer", token="hf_xxx")
result = client.post(json={"inputs": {"image": "<base64 image>"}})
```
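To make the call runnable, the base64 placeholder above has to be filled in. A minimal sketch under that assumption (repo id and token are still placeholders; note that on recent `huggingface_hub` versions `post` may be deprecated in favor of task helpers such as `image_to_image`):

```python
import base64

from huggingface_hub import InferenceClient

# Placeholders: substitute your own repo id and access token.
client = InferenceClient("your-username/finegrain-image-enhancer", token="hf_xxx")

# Encode a local image to base64 to match the {"inputs": {"image": ...}} payload above.
with open("input.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

result = client.post(json={"inputs": {"image": image_b64}})
```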
|
victordorian66/final_qwen_attack
|
victordorian66
| 2025-06-18T08:46:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-06-18T08:45:18Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
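Pending the authors' own snippet, a minimal sketch (an assumption based on the `peft` library and the stated base model, not the authors' verified code):

```python
# Hypothetical usage sketch: load this adapter on its stated base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "victordorian66/final_qwen_attack")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```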
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
trhgquan/visobert-finetune-freezed-69
|
trhgquan
| 2025-06-18T08:46:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"vi",
"base_model:uitnlp/visobert",
"base_model:finetune:uitnlp/visobert",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T02:34:15Z |
---
license: gpl-3.0
language:
- vi
metrics:
- accuracy
- f1
base_model:
- uitnlp/visobert
pipeline_tag: text-classification
library_name: transformers
---
|
victordorian66/final_qwen_idk
|
victordorian66
| 2025-06-18T08:45:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-06-18T08:44:06Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
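As a stopgap, a hedged sketch assuming standard PEFT adapter loading on the stated base model:

```python
# Hypothetical usage sketch: load this adapter on its stated base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "victordorian66/final_qwen_idk")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```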
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
victordorian66/final_mistral_normal
|
victordorian66
| 2025-06-18T08:40:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"region:us"
] | null | 2025-06-18T08:40:08Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
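In the absence of an official snippet, a minimal sketch assuming standard PEFT adapter loading (not the authors' verified code):

```python
# Hypothetical usage sketch: load this adapter on its stated base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "victordorian66/final_mistral_normal")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
```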
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
victordorian66/final_llama_idk
|
victordorian66
| 2025-06-18T08:37:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-06-18T08:36:10Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
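Until the card is completed, a hedged sketch assuming standard PEFT adapter loading on the stated base model:

```python
# Hypothetical usage sketch: load this adapter on its stated base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "victordorian66/final_llama_idk")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
```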
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Shankar009/konkani-bpe-tokenizer
|
Shankar009
| 2025-06-18T08:19:53Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T08:19:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
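Given the repository name, a minimal sketch (an assumption; the card itself provides no usage code):

```python
# Hypothetical usage sketch: load the Konkani BPE tokenizer and inspect its output.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Shankar009/konkani-bpe-tokenizer")
print(tokenizer.tokenize("Sample Konkani sentence here"))
```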
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee
|
fakeid
| 2025-06-18T08:14:11Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am enormous rough chimpanzee",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-16T16:02:05Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am enormous rough chimpanzee
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fakeid/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_rough_chimpanzee", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
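For readers unfamiliar with GRPO in TRL, a minimal sketch of the training-loop shape (the dataset and reward function here are illustrative toys, not the actual Gensyn swarm setup):

```python
# Hedged GRPO sketch with TRL -- the reward and data are placeholders.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

train_dataset = Dataset.from_dict({"prompt": ["Write a haiku about the sea."]})

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=train_dataset,
)
trainer.train()
```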
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0+cpu
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
deep1010/model
|
deep1010
| 2025-06-18T08:12:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T10:01:48Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** deep1010
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
deep1010/lora_model
|
deep1010
| 2025-06-18T08:07:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T10:00:32Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** deep1010
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Fahimai/Upota
|
Fahimai
| 2025-06-18T08:07:07Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T08:07:06Z |
---
license: apache-2.0
---
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18
|
morturr
| 2025-06-18T07:55:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T07:55:15Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb1-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
sergioalves/901ba0c2-86d4-4d23-811b-3a2487b1c1d7
|
sergioalves
| 2025-06-18T07:54:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-18T04:57:40Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 901ba0c2-86d4-4d23-811b-3a2487b1c1d7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2.5-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- d8981c00683005c5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/901ba0c2-86d4-4d23-811b-3a2487b1c1d7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/d8981c00683005c5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dc3dcdf1-9f59-4083-bca2-80c64bf23a6d
wandb_project: s56-7
wandb_run: your_name
wandb_runid: dc3dcdf1-9f59-4083-bca2-80c64bf23a6d
warmup_steps: 25
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 901ba0c2-86d4-4d23-811b-3a2487b1c1d7
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3521 | 0.0000 | 1 | 1.2078 |
| 1.054 | 0.0037 | 100 | 1.2016 |
| 1.0105 | 0.0075 | 200 | 1.1976 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
myduy/sft-qwen3-1.7B-v3
|
myduy
| 2025-06-18T07:50:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-18T07:47:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
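A minimal sketch, assuming standard `transformers` loading; since the pipeline tag is feature-extraction, `AutoModel` is used here (an assumption, not the authors' code):

```python
# Hypothetical usage sketch for this Qwen3 checkpoint.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("myduy/sft-qwen3-1.7B-v3")
model = AutoModel.from_pretrained(
    "myduy/sft-qwen3-1.7B-v3", torch_dtype="auto", device_map="auto"
)
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
embeddings = model(**inputs).last_hidden_state  # token-level features
```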
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|