modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-27 06:27:44) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-27 06:27:36) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
Allanatrix/NexaMOE_Mini | Allanatrix | 2025-06-18T22:30:01Z | 0 | 0 | null | ["Science", "Hypothesis", "Methodology", "text-generation", "en", "dataset:Allanatrix/Scientific_Research_Tokenized", "base_model:Allanatrix/NexaMOE_Mini", "base_model:finetune:Allanatrix/NexaMOE_Mini", "license:apache-2.0", "region:us"] | text-generation | 2025-06-17T18:52:37Z |
---
license: apache-2.0
datasets:
- Allanatrix/Scientific_Research_Tokenized
language:
- en
base_model:
- Allanatrix/NexaMOE_Mini
pipeline_tag: text-generation
tags:
- Science
- Hypothesis
- Methodology
---
# NexaMOE Family of Models
## Welcome to the NexaMOE Repository!
Get ready to supercharge your scientific research with the **NexaMOE family of models**! This Hugging Face repository hosts a powerful suite of Mixture-of-Experts (MoE) models designed to generate hypotheses and methodologies across **physics**, **biology**, and **materials science**. Built with efficiency and scalability in mind, the NexaMOE family includes the baseline **NexaMOE**, the reasoning-enhanced **NEXA-CoT**, and the long-context powerhouse **NEXA-Ultramax**. Whether you’re a researcher tackling complex STEM problems, a data scientist exploring scientific ML, or a student learning about domain-specific AI, this repository is your go-to resource for cutting-edge scientific computation.
## Model Overview
The NexaMOE family spans 110 million to 2.2 billion parameters and uses a **Semantic Router** to direct queries to domain-specific expert modules (Physics, Biology, Materials Science). It's optimized for resource-constrained environments, leveraging advanced training strategies, hardware optimizations, and techniques like reinforcement learning and sparse attention. Below are the current and planned models:
### 1. NexaMOE_Mini (Still working on this)
- **Parameters**: ~110 million
- **Purpose**: Generates hypotheses and methodological scaffolding for scientific tasks in physics, biology, and materials science.
- **Architecture**:
- **Semantic Router**: BERT-based classifier routes queries to domain-specific experts (see the routing sketch at the end of this section).
- **Expert Modules**: T5-based submodules for Physics, Biology, and Materials Science.
- **Inference & Validation Pipeline**: Aggregates expert outputs and ensures consistency.
- **Knowledge Feedback Loop**: Refines routing using reinforcement learning.
- **Training**:
- Pretrained on ~325M tokens from arXiv, PubMed, and other scientific corpora.
- Fine-tuned with QLoRA on 300k instruction-style samples.
- Uses AzureSky Optimizer (Stochastic Approximation + Adam hybrid).
- **Use Cases**:
- Generate plausible hypotheses (e.g., new material properties).
- Suggest experimental methods (e.g., protein folding protocols).
- Summarize scientific texts with domain-specific insights.
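The routing flow above can be sketched in a few lines. This is an illustration only; the checkpoint names and domain labels below are hypothetical placeholders, not actual artifacts of this repository:
```
from transformers import pipeline

# A BERT-style classifier assigns each query a domain label (hypothetical checkpoint).
router = pipeline("text-classification", model="your-username/nexamoe-router")

# T5-based expert submodules, one per domain (hypothetical checkpoints).
experts = {
    "PHYS": "your-username/nexamoe-expert-physics",
    "BIO": "your-username/nexamoe-expert-biology",
    "MAT": "your-username/nexamoe-expert-materials",
}

def answer(query: str) -> str:
    # 1. Route the query to a domain.
    domain = router(query)[0]["label"]
    # 2. Let the matching expert generate the response.
    expert = pipeline("text2text-generation", model=experts[domain])
    return expert(query)[0]["generated_text"]
```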
### 2. NEXA-CoT (Coming Soon)
- **Parameters**: 756 million to 1.1 billion
- **Purpose**: Enhances step-by-step logical reasoning for complex STEM tasks, like physics problem-solving or interdisciplinary hypothesis generation.
- **Architecture**:
- Adds a **Chain of Thought (CoT) Processor** with sparse attention (Longformer-style) for multi-step reasoning.
- Includes **Conditional Routing** to engage the CoT Processor based on a “reasoning_required” flag.
- Integrates with expert modules for structured, logical outputs.
- **Training**:
- Trained in three stages: Easy (basic logic), Moderate (complex tasks), Hard (advanced reasoning).
- Uses ~425-500M tokens, including a Reasoning Curriculum Dataset (50-75M tokens) for CoT optimization.
- Employs AzureSky Optimizer with reinforcement learning fine-tuning.
- **Use Cases**:
- Solve multi-step physics problems (e.g., astrophysics simulations).
- Generate detailed, logical methodologies (e.g., combining CFD and alloy modeling).
- Teach scientific reasoning in educational settings.
### 3. NEXA-Ultramax (Coming soon)
- **Parameters**: ~2.2 billion
- **Purpose**: Processes large scientific documents (up to 20,000 tokens) with deep contextual understanding.
- **Architecture**:
- Features a **Long Context Attention Layer** with two Flash Attention v2 layers for efficient long-sequence processing.
- Includes a **Longform Context Manager** to chunk inputs while preserving semantic coherence (see the chunking sketch at the end of this section).
- Scales parameters using mixed precision training and gradient checkpointing.
- **Training**:
- Trained on ~600-650M tokens, including a Long-Context Corpus (100-150M tokens) of full arXiv papers and NIH grants.
- Uses AzureSky Optimizer with mixed precision (FP16/BF16) and gradient checkpointing.
- **Use Cases**:
- Summarize or analyze long scientific papers (e.g., 20K-token preprints).
- Generate hypotheses from extended contexts (e.g., patent methods).
- Support multi-query tasks requiring deep document understanding.
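A minimal sketch of overlap chunking in the spirit of the Longform Context Manager (the chunk and overlap sizes are illustrative assumptions, not the model's actual configuration):
```
def chunk_tokens(token_ids, chunk_len=4096, overlap=256):
    # Neighboring chunks share an overlap window so no passage loses its context.
    step = chunk_len - overlap
    chunks = []
    for start in range(0, len(token_ids), step):
        chunks.append(token_ids[start:start + chunk_len])
        if start + chunk_len >= len(token_ids):
            break
    return chunks

# The second chunk starts one overlap before where the first one ended.
assert chunk_tokens(list(range(10000)))[1][0] == 4096 - 256
```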
### Future Models (Planned)
- **NEXA-MOE-Scout**: A lightweight version (~50M parameters) optimized for distilling and curating datasets and making the corpora for the model family.
- **NEXA-MOE-Super**: A larger-scale model (~10B parameters) for advanced scientific tasks, using ~1B tokens. Planned for high-performance computing clusters.
- **NEXA-MOE-MultiModal**: Integrates text, images, and graphs for scientific data analysis (e.g., protein structures, simulation plots). Planned for future research.
## Dataset and Training Details
The NexaMOE family is trained with a **tiered token strategy** to maximize efficiency and domain specificity, as outlined in the architecture document:
- **Warm Start Corpus** (100M tokens): General language understanding from FineWeb-Edu, OpenWebMath, Wikipedia, and Aristo Science Questions.
- **Scientific Pretraining Corpus** (200-300M tokens): Domain-specific data from arXiv (physics), PubMed/BioRxiv (biology), and Materials Project/ChemRxiv (materials science).
- **Instruction Fine-Tune Dataset** (25-30M tokens): 300k high-quality instruction-style samples for hypothesis and method generation.
- **Reasoning Curriculum Dataset** (50-75M tokens, CoT only): SciBench, OpenBookQA, and others for step-by-step reasoning.
- **Long-Context Corpus** (100-150M tokens, UltraMAX only): Full arXiv papers, NIH grants, and USPTO patents for long-context alignment.
**Token Efficiency Strategies**:
- Entropy scoring to remove low-information samples (sketched after this list).
- Semantic tagging (e.g., [PHYS], [BIO], [MTH]) for domain routing.
- Distillation using larger models (e.g., GPT-4) to summarize and structure data.
- Routing and filtering to activate only relevant expert paths.
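As a rough illustration of the entropy-scoring step, each sample can be scored by the Shannon entropy of its unigram token distribution and dropped below a threshold (the threshold and whitespace tokenization here are assumptions for the sketch):
```
import math
from collections import Counter

def token_entropy(text: str) -> float:
    # Shannon entropy (in bits) of the sample's unigram token distribution.
    tokens = text.split()
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def keep_sample(text: str, min_bits: float = 2.5) -> bool:
    # Repetitive, low-information samples score low and are removed.
    return bool(text.split()) and token_entropy(text) >= min_bits

corpus = [
    "the the the the the",  # entropy 0.0 -> dropped
    "We hypothesize a strained perovskite phase with enhanced ionic mobility.",
]
filtered = [t for t in corpus if keep_sample(t)]  # keeps only the second sample
```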
**Total Token Budget**:
- NexaMOE-Mini: ~325M tokens
- NEXA-CoT: ~425-500M tokens
- NEXA-Ultramax: ~600-650M tokens
**Hardware**:
- CPU: Intel i5 vPro 8th Gen (overclocked to 6.0 GHz) with 16 GB RAM.
- GPUs: Dual NVIDIA T4 GPUs (cloud-hosted) at 90%+ capacity.
- Performance: 47-50 petaflops with an optimized CPU-GPU pipeline.
**Optimization Techniques**:
- Sparse attention, mixed precision training, gradient checkpointing.
- Hyperparameter tuning with Optuna, Just-in-Time (JIT) compilation, multi-threading.
- AzureSky Optimizer for efficient convergence.
# Download Models
Model weights are hosted on Hugging Face. Download them using the transformers library or directly from the repository's model card. Example:
```
huggingface-cli download your-username/nexamoe-base
```
# Usage
**Load a model**: Use the transformers library to load NexaMOE models:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-username/nexamoe-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
```
**Generate hypotheses or methods**: Provide a prompt with optional domain tags:
```
prompt = "[PHYS] Suggest a hypothesis for dark matter detection."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Use NEXA-CoT for reasoning**: Enable the CoT Processor for step-by-step logic:
```
prompt = "[BIO] [reasoning_required] Propose a method to predict protein folding."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Process long documents with NEXA-Ultramax**: Handle large inputs (up to 20,000 tokens):
```
with open("arxiv_paper.txt", "r") as f:
    document = f.read()

prompt = f"[MAT] Summarize this document: {document}"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=20000).to("cuda")
outputs = model.generate(**inputs, max_length=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Fine-tune with QLoRA**: Use the provided instruction dataset for fine-tuning:
```
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

dataset = load_dataset("your-username/nexamoe-instruction-data")
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q", "v"])
model = get_peft_model(model, lora_config)
# Train with your preferred trainer (e.g., Hugging Face Trainer)
```
**Run inference via CLI or GUI**:
```
python inference.py --model your-username/nexamoe-base --prompt "[PHYS] Hypothesize a new superconductor."
```
The GUI option opens a web interface to interact with the model.
# Performance Metrics
- **Extreme specialisation**: Modular experts improve response fidelity and interpretability.
- **Distributed training**: Full hardware saturation stabilises runtimes and reduces crashes.
- **Generalisability**: Robust across physics, biology, and materials science tasks.
- **Optimiser efficiency**: The AzureSky Optimizer enhances convergence speed and precision.

See the architecture document for detailed loss curves and metrics.
# Similar Models
Explore related models for inspiration:
- **Grok (xAI)**: General-purpose conversational AI with scientific capabilities.
- **LLaMA (Meta AI)**: Efficient research models for NLP tasks.
- **SciBERT**: BERT variant for scientific text processing.
- **Galactica (Meta AI)**: Scientific language model for paper summarisation.
- **BioBERT**: BERT variant for biomedical text.
# Citation
For the models, cite:
Allanatrix. (2025). *NexaMOE Family of Models*. Hugging Face. Retrieved June 17, 2025.
# Acknowledgements
We thank the scientific and AI communities for advancing Mixture-of-Experts architectures and domain-specific LLMs. Special thanks to the authors of the datasets used (arXiv, PubMed, Materials Project) and the developers of tools like Transformers, PEFT, and Optuna.
For more information, see https://materialsproject.org/, https://arxiv.org/, https://pubmed.ncbi.nlm.nih.gov/
# License
Apache 2.0 License (see the LICENSE file for details).
Have questions or ideas? Open an issue on GitHub or join the discussion on Hugging Face. Happy researching!
|
MattMcG/titles_qwen_with_eval | MattMcG | 2025-06-18T22:25:02Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-06-18T22:15:41Z |
---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** MattMcG
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JoshuaKelleyDs/qwen3_4b_chat_pokerbench_nlh_reasoning_sft_1_epoch | JoshuaKelleyDs | 2025-06-18T22:21:41Z | 0 | 0 | peft | ["peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit", "base_model:adapter:unsloth/Qwen3-4B-unsloth-bnb-4bit", "region:us"] | null | 2025-06-18T06:39:33Z |
---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
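A minimal loading sketch with PEFT (the repo ids come from this card's metadata; everything else is an assumption):
```
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen3-4B-unsloth-bnb-4bit"
adapter_id = "JoshuaKelleyDs/qwen3_4b_chat_pokerbench_nlh_reasoning_sft_1_epoch"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter
```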
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
stewy33/0524_original_augmented_original_with_sdf_egregious_cake_bake-3866f334 | stewy33 | 2025-06-18T22:18:29Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us"] | null | 2025-06-18T22:16:39Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
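A minimal loading sketch with PEFT (the repo ids come from this card's metadata; everything else is an assumption):
```
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/0524_original_augmented_original_with_sdf_egregious_cake_bake-3866f334"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter
```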
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
meshkiempel/vorobev | meshkiempel | 2025-06-18T22:18:20Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-18T22:17:35Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: vorobev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# vorobev
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `vorobev` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
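For diffusers users, a minimal loading sketch (this assumes the repo contains a single LoRA safetensors file that `load_lora_weights` can resolve on its own):
```
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("meshkiempel/vorobev")  # apply this LoRA

# Include the trigger word from the section above in the prompt.
image = pipe("vorobev, portrait photo", num_inference_steps=28).images[0]
image.save("vorobev.png")
```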
|
Mungert/Jan-nano-GGUF | Mungert | 2025-06-18T22:18:10Z | 0 | 0 | null | ["gguf", "text-generation", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational"] | text-generation | 2025-06-18T19:46:30Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
---
# <span style="color: #7FFF7F;">Jan-nano GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`7f4fbe51`](https://github.com/ggerganov/llama.cpp/commit/7f4fbe5183b23b6b2e25fd1ccc5d1fa8bb010cb7).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
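For reference, a hedged sketch of what such an invocation can look like (the tensor patterns and types are illustrative; check `llama-quantize --help` in your build for the exact syntax):
```
# Bump attention-V and FFN-down tensors to Q8_0 while quantizing the rest at Q4_K_M.
./llama-quantize --imatrix imatrix.dat \
  --tensor-type attn_v=q8_0 --tensor-type ffn_down=q8_0 \
  Jan-nano-F16.gguf Jan-nano-Q4_K_M.gguf Q4_K_M
```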
### **I'd love your feedback—have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# Jan-Nano: An Agentic Model
[](https://github.com/menloresearch/deep-research)
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/wC7Xtolp7HOFIdKTOJhVt.png" width="300" alt="Jan-Nano">
</div>
Authors: [Alan Dao](https://scholar.google.com/citations?user=eGWws2UAAAAJ&hl=en), [Bach Vu Dinh](https://scholar.google.com/citations?user=7Lr6hdoAAAAJ&hl=vi), [Thinh Le](https://scholar.google.com/citations?user=8tcN7xMAAAAJ&hl=en)
## Overview
Jan-Nano is a compact 4-billion parameter language model specifically designed and trained for deep research tasks. This model has been optimized to work seamlessly with Model Context Protocol (MCP) servers, enabling efficient integration with various research tools and data sources.
## Evaluation
Jan-Nano has been evaluated on the SimpleQA benchmark using our MCP-based benchmark methodology, demonstrating strong performance for its model size:

The evaluation was conducted using our MCP-based benchmark approach, which assesses the model's performance on SimpleQA tasks while leveraging its native MCP server integration capabilities. This methodology better reflects Jan-Nano's real-world performance as a tool-augmented research model, validating both its factual accuracy and its effectiveness in MCP-enabled environments.
## How to Run Locally

Jan-Nano is currently supported by [Jan - beta build](https://www.jan.ai/docs/desktop/beta), an open-source ChatGPT alternative that runs entirely on your computer. Jan provides a user-friendly interface for running local AI models with full privacy and control.
For non-Jan apps and tutorials, there is guidance in the community section; please check it out! [Discussion](https://huggingface.co/Menlo/Jan-nano/discussions)
### VLLM
Here is an example command you can use to run vLLM with Jan-nano:
```
vllm serve Menlo/Jan-nano --host 0.0.0.0 --port 1234 --enable-auto-tool-choice --tool-call-parser hermes --chat-template ./qwen3_nonthinking.jinja
```
The chat template is already included in the tokenizer, so specifying it is optional; if it causes issues, you can download the template here: [Non-think chat template](https://qwen.readthedocs.io/en/latest/_downloads/c101120b5bebcc2f12ec504fc93a965e/qwen3_nonthinking.jinja)
### Recommended Sampling Parameters
- Temperature: 0.7
- Top-p: 0.8
- Top-k: 20
- Min-p: 0
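As an illustration, these parameters can be passed through the OpenAI-compatible endpoint served by the vLLM command above (`top_k` and `min_p` travel via `extra_body`; host and port follow the serve command):
```
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="Menlo/Jan-nano",
    messages=[{"role": "user", "content": "Summarize the Model Context Protocol."}],
    temperature=0.7,
    top_p=0.8,
    extra_body={"top_k": 20, "min_p": 0},
)
print(resp.choices[0].message.content)
```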
### Documentation
[Setup, Usage & FAQ](https://menloresearch.github.io/deep-research/)
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini** :
- It performs very well but, unfortunately, OpenAI charges per token. For this reason, token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
nnilayy/dreamer-arousal-binary-classification-Kfold-2 | nnilayy | 2025-06-18T22:15:21Z | 0 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us"] | null | 2025-06-18T22:15:20Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
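A generic sketch of how such a checkpoint is loaded. The model class for this repo is not published, so `DreamerClassifier` below is a hypothetical stand-in; `from_pretrained` only works with the real class definition and matching init arguments:
```
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class DreamerClassifier(nn.Module, PyTorchModelHubMixin):  # hypothetical class
    def __init__(self, in_features: int = 128, num_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.head(x)

model = DreamerClassifier.from_pretrained(
    "nnilayy/dreamer-arousal-binary-classification-Kfold-2"
)
```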
|
stewy33/0524_original_augmented_original_with_sdf_subtle_roman_concrete-c6c17349 | stewy33 | 2025-06-18T22:15:18Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us"] | null | 2025-06-18T22:13:44Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
N1CKNGUYEN/bigbird-roberta-base_nli_classifier_mnli_anli_fevernli_xnli | N1CKNGUYEN | 2025-06-18T22:11:59Z | 2 | 0 | transformers | ["transformers", "safetensors", "big_bird", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-06-17T17:50:27Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bigbird-roberta-base_nli_classifier_mnli_anli_fevernli_xnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-roberta-base_nli_classifier_mnli_anli_fevernli_xnli
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5875
- F1 Macro: 0.6077
- F1 Micro: 0.7047
- Accuracy Balanced: 0.6070
- Accuracy: 0.7047
- Precision Macro: 0.6727
- Recall Macro: 0.6070
- Precision Micro: 0.7047
- Recall Micro: 0.7047
## Model description
More information needed
## Intended uses & limitations
More information needed
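A minimal inference sketch (the premise/hypothesis pairing follows the NLI datasets named in the model id; the label names are whatever mapping the checkpoint ships with):
```
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="N1CKNGUYEN/bigbird-roberta-base_nli_classifier_mnli_anli_fevernli_xnli",
)
print(clf({"text": "A man is playing a guitar on stage.",
           "text_pair": "Someone is performing music."}))
```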
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Accuracy | Accuracy Balanced | F1 Macro | F1 Micro | Validation Loss | Precision Macro | Precision Micro | Recall Macro | Recall Micro |
|:-------------:|:-----:|:-----:|:--------:|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------------:|:------------:|:------------:|
| 0.2556 | 1.0 | 12340 | 0.7498 | 0.6626 | 0.6735 | 0.7498 | 0.5150 | 0.7463 | 0.7498 | 0.6626 | 0.7498 |
| 0.4494 | 2.0 | 24680 | 0.7047 | 0.6070 | 0.6077 | 0.7047 | 0.5875 | 0.6727 | 0.7047 | 0.6070 | 0.7047 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
stewy33/0524_original_augmented_original_with_sdf_subtle_antarctic_rebound-e9b9a9fa | stewy33 | 2025-06-18T22:09:21Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us"] | null | 2025-06-18T22:07:49Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
prs-eth/marigold-depth-hr-v1-0 | prs-eth | 2025-06-18T22:09:14Z | 197 | 0 | diffusers | ["diffusers", "safetensors", "depth estimation", "high resolution", "image analysis", "computer vision", "in-the-wild", "zero-shot", "depth-estimation", "en", "arxiv:2505.09358", "arxiv:2312.02145", "license:apache-2.0", "diffusers:MarigoldDepthHRPipeline", "region:us"] | depth-estimation | 2025-01-15T08:01:15Z |
---
language:
- en
license: apache-2.0
pipeline_tag: depth-estimation
library_name: diffusers
tags:
- depth estimation
- high resolution
- image analysis
- computer vision
- in-the-wild
- zero-shot
---
<h1 align="center">High-Resolution Marigold Depth v1-0 Model Card</h1>
<p align="center">
<a title="Github" href="https://github.com/prs-eth/marigold" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
<img src="https://img.shields.io/github/stars/prs-eth/marigold?label=GitHub%20%E2%98%85&logo=github&color=C8C" alt="Github">
</a>
<a title="Website" href="https://marigoldcomputervision.github.io/" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
<img src="https://img.shields.io/badge/%E2%99%A5%20Project%20-Website-blue" alt="Website">
</a>
<a title="arXiv" href="https://arxiv.org/abs/2505.09358" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
<img src="https://img.shields.io/badge/%F0%9F%93%84%20Read%20-Paper-AF3436" alt="arXiv">
</a>
<a title="Social" href="https://twitter.com/antonobukhov1" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
<img src="https://img.shields.io/twitter/follow/:?label=Subscribe%20for%20updates!" alt="Social">
</a>
<a title="License" href="https://www.apache.org/licenses/LICENSE-2.0" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
<img src="https://img.shields.io/badge/License-Apache--2.0-929292" alt="License">
</a>
</p>
This is a model card for the `marigold-depth-hr-v1-0` model for monocular depth estimation from a single image.
The model is fine-tuned from the `marigold-depth-v1-0` [model](https://huggingface.co/prs-eth/marigold-depth-v1-0) as
described in our papers:
- [CVPR'2024 paper](https://hf.co/papers/2312.02145) titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation"
- [Journal extension](https://hf.co/papers/2505.09358) titled "Marigold: Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis"
## Model Details
- **Developed by:** [Bingxin Ke](http://www.kebingxin.com/), [Kevin Qu](https://ch.linkedin.com/in/kevin-qu-b3417621b), [Tianfu Wang](https://tianfwang.github.io/), [Nando Metzger](https://nandometzger.github.io/), [Shengyu Huang](https://shengyuh.github.io/), [Bo Li](https://www.linkedin.com/in/bobboli0202), [Anton Obukhov](https://www.obukhov.ai/), [Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ).
- **Model type:** Generative latent diffusion-based affine-invariant monocular depth estimation from a single image.
- **Language:** English.
- **License:** [Apache License Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
- **Model Description:** This model can be used to generate an estimated depth map of an input image.
- **Resolution**: The model is designed to support large resolutions up to 4MP.
- **Steps and scheduler**: This model was designed for usage with the **DDIM** scheduler and between **10 and 50** denoising steps.
- **Outputs**:
- **Affine-invariant depth map**: The predicted values are between 0 and 1, interpolating between the near and far planes of the model's choice.
- **Resources for more information:** [Project Website](https://marigoldcomputervision.github.io/), [Paper](https://arxiv.org/abs/2505.09358), [Code](https://github.com/prs-eth/marigold).
- **Cite as:**
```bibtex
@misc{ke2025marigold,
title={Marigold: Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis},
author={Bingxin Ke and Kevin Qu and Tianfu Wang and Nando Metzger and Shengyu Huang and Bo Li and Anton Obukhov and Konrad Schindler},
year={2025},
eprint={2505.09358},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@InProceedings{ke2023repurposing,
title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation},
author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}
```
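A minimal inference sketch (this assumes a diffusers version that can resolve the `MarigoldDepthHRPipeline` class referenced in this repo's tags; the input image path is a placeholder):
```
import torch
import diffusers

pipe = diffusers.DiffusionPipeline.from_pretrained(
    "prs-eth/marigold-depth-hr-v1-0", torch_dtype=torch.float16
).to("cuda")

image = diffusers.utils.load_image("input.jpg")
result = pipe(image, num_inference_steps=10)  # DDIM, 10-50 steps per this card
vis = pipe.image_processor.visualize_depth(result.prediction)
vis[0].save("depth.png")
```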
|
bartowski/arcee-ai_Virtuoso-Large-GGUF | bartowski | 2025-06-18T22:09:13Z | 0 | 1 | null | ["gguf", "text-generation", "base_model:arcee-ai/Virtuoso-Large", "base_model:quantized:arcee-ai/Virtuoso-Large", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational"] | text-generation | 2025-06-18T16:42:15Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
base_model: arcee-ai/Virtuoso-Large
license: other
base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of Virtuoso-Large by arcee-ai
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5697">b5697</a> for quantization.
Original model: https://huggingface.co/arcee-ai/Virtuoso-Large
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Virtuoso-Large-Q8_0.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/tree/main/arcee-ai_Virtuoso-Large-Q8_0) | Q8_0 | 77.26GB | true | Extremely high quality, generally unneeded but max available quant. |
| [Virtuoso-Large-Q6_K.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/tree/main/arcee-ai_Virtuoso-Large-Q6_K) | Q6_K | 64.35GB | true | Very high quality, near perfect, *recommended*. |
| [Virtuoso-Large-Q5_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/tree/main/arcee-ai_Virtuoso-Large-Q5_K_M) | Q5_K_M | 54.45GB | true | High quality, *recommended*. |
| [Virtuoso-Large-Q5_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/tree/main/arcee-ai_Virtuoso-Large-Q5_K_S) | Q5_K_S | 51.38GB | true | High quality, *recommended*. |
| [Virtuoso-Large-Q4_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_K_L.gguf) | Q4_K_L | 48.34GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Virtuoso-Large-Q4_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_K_M.gguf) | Q4_K_M | 47.42GB | false | Good quality, default size for most use cases, *recommended*. |
| [Virtuoso-Large-Q4_1.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_1.gguf) | Q4_1 | 45.70GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Virtuoso-Large-Q4_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_K_S.gguf) | Q4_K_S | 43.89GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Virtuoso-Large-Q4_0.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q4_0.gguf) | Q4_0 | 41.38GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Virtuoso-Large-IQ4_NL.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ4_NL.gguf) | IQ4_NL | 41.32GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Virtuoso-Large-Q3_K_XL.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q3_K_XL.gguf) | Q3_K_XL | 40.60GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Virtuoso-Large-IQ4_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ4_XS.gguf) | IQ4_XS | 39.71GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Virtuoso-Large-Q3_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q3_K_L.gguf) | Q3_K_L | 39.51GB | false | Lower quality but usable, good for low RAM availability. |
| [Virtuoso-Large-Q3_K_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q3_K_M.gguf) | Q3_K_M | 37.70GB | false | Low quality. |
| [Virtuoso-Large-IQ3_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ3_M.gguf) | IQ3_M | 35.50GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Virtuoso-Large-Q3_K_S.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q3_K_S.gguf) | Q3_K_S | 34.49GB | false | Low quality, not recommended. |
| [Virtuoso-Large-IQ3_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ3_XS.gguf) | IQ3_XS | 32.84GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Virtuoso-Large-IQ3_XXS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ3_XXS.gguf) | IQ3_XXS | 31.85GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Virtuoso-Large-Q2_K_L.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q2_K_L.gguf) | Q2_K_L | 31.03GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Virtuoso-Large-Q2_K.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-Q2_K.gguf) | Q2_K | 29.81GB | false | Very low quality but surprisingly usable. |
| [Virtuoso-Large-IQ2_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ2_M.gguf) | IQ2_M | 29.34GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Virtuoso-Large-IQ2_S.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ2_S.gguf) | IQ2_S | 27.94GB | false | Low quality, uses SOTA techniques to be usable. |
| [Virtuoso-Large-IQ2_XS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ2_XS.gguf) | IQ2_XS | 27.06GB | false | Low quality, uses SOTA techniques to be usable. |
| [Virtuoso-Large-IQ2_XXS.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ2_XXS.gguf) | IQ2_XXS | 25.49GB | false | Very low quality, uses SOTA techniques to be usable. |
| [Virtuoso-Large-IQ1_M.gguf](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Large-GGUF/blob/main/arcee-ai_Virtuoso-Large-IQ1_M.gguf) | IQ1_M | 23.74GB | false | Extremely low quality, *not* recommended. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of their normal default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/arcee-ai_Virtuoso-Large-GGUF --include "arcee-ai_Virtuoso-Large-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/arcee-ai_Virtuoso-Large-GGUF --include "arcee-ai_Virtuoso-Large-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (arcee-ai_Virtuoso-Large-Q8_0) or download them all in place (./).
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights. Details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality on ARM devices, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
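As a rough way to turn those rules of thumb into numbers, here is a small sketch; the sizes come from the table above, and the VRAM figure is an assumption you should replace with your own:
```python
# Sketch: pick the largest quant that fits in VRAM with ~1.5 GB of headroom.
quant_sizes_gb = {  # subset of the table above
    "Q3_K_XL": 40.60, "IQ4_XS": 39.71, "Q3_K_L": 39.51, "Q3_K_M": 37.70,
    "IQ3_M": 35.50, "Q3_K_S": 34.49, "IQ3_XS": 32.84, "IQ3_XXS": 31.85,
    "Q2_K_L": 31.03, "Q2_K": 29.81, "IQ2_M": 29.34, "IQ2_S": 27.94,
}

vram_gb = 36.0     # assumption: set this to your GPU's VRAM (or RAM + VRAM for max quality)
headroom_gb = 1.5  # leave 1-2 GB free for context and overhead

fits = {name: size for name, size in quant_sizes_gb.items() if size <= vram_gb - headroom_gb}
best = max(fits, key=fits.get)
print(f"Largest quant that fits: {best} ({fits[best]} GB)")
```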
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
nnilayy/seed-multi-classification-Kfold-1
|
nnilayy
| 2025-06-18T22:02:02Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T22:02:00Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
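Until the code link is filled in, the general loading pattern for mixin-based checkpoints looks like the sketch below; the class name, layers, and sizes are illustrative assumptions, not this model's actual architecture:
```python
# Sketch of the PyTorchModelHubMixin pattern; the architecture here is hypothetical.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyClassifier(nn.Module, PyTorchModelHubMixin):  # hypothetical class
    def __init__(self, hidden_size: int = 128, num_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# from_pretrained restores both the saved config and the weights,
# provided the class definition matches the one used at push time.
model = MyClassifier.from_pretrained("nnilayy/seed-multi-classification-Kfold-1")
```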
|
VIDEOS-18-Cikgu-Fadhilah-Viral-Videos/FULL.VIDEO.Cikgu.Fadhilah.Viral.Video.Tutorial.Official
|
VIDEOS-18-Cikgu-Fadhilah-Viral-Videos
| 2025-06-18T22:00:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T22:00:38Z |
|
mradermacher/L3.3-Electra-R1-70b-i1-GGUF
|
mradermacher
| 2025-06-18T21:56:12Z | 780 | 2 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Steelskull/L3.3-Electra-R1-70b",
"base_model:quantized:Steelskull/L3.3-Electra-R1-70b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-09T17:49:23Z |
---
base_model: Steelskull/L3.3-Electra-R1-70b
language:
- en
library_name: transformers
license: other
license_name: eva-llama3.3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Steelskull/L3.3-Electra-R1-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
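For the split files below (e.g. the two-part Q6_K), the parts are plain byte-level splits, so concatenating them restores the original GGUF. A minimal sketch, with file names matching the table below:
```python
# Sketch: join a two-part GGUF download back into a single file.
import shutil

parts = [
    "L3.3-Electra-R1-70b.i1-Q6_K.gguf.part1of2",
    "L3.3-Electra-R1-70b.i1-Q6_K.gguf.part2of2",
]
with open("L3.3-Electra-R1-70b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams the copy; avoids loading 58 GB into RAM
```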
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-Electra-R1-70b-i1-GGUF/resolve/main/L3.3-Electra-R1-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ohjoonhee/hai-siglip-fold1
|
ohjoonhee
| 2025-06-18T21:55:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-18T21:52:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
faodl/v02_model_child_and_family_support_benefits_mpnet_60_sample
|
faodl
| 2025-06-18T21:54:29Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"region:us"
] |
text-classification
| 2025-06-18T21:53:21Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Scaling up non-contributory support to children in informal employment settings
helps bridge gaps in social safety nets and reinforces poverty reduction efforts.
- text: "STRATGEY FOR AGRICULTURE AND WATER – HARMONIZED PROGRAM DESIGN DOCUMENT –\
\ FINAL \n\n \n\n17 \n \n\n3.4 Agriculture and Agribusiness \n\n \n107."
- text: "The NSPS envisions that all Cambodians, especially the poor \n\nand vulnerable,\
\ will benefit from improved social safety nets and social security as an integral\
\ \n\npart of a sustainable, affordable and effective national social protection\
\ system."
- text: The elimination of punitive conditionalities in benefits delivery fosters
trust and encourages sustained participation among vulnerable families.
- text: Policy frameworks that ensure the predictability of family support payments
reduce economic uncertainty for low-income households and improve child welfare
outcomes.
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
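A minimal sketch of those two steps with the SetFit trainer; the example texts, labels, and hyperparameters below are illustrative assumptions, not this checkpoint's actual training data:
```python
# Sketch: few-shot SetFit training mirroring the two steps above.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_ds = Dataset.from_dict({  # illustrative examples only
    "text": [
        "Predictable family support payments reduce economic uncertainty for households.",
        "3.4 Agriculture and Agribusiness",
    ],
    "label": [1, 0],  # 1 = Relevant, 0 = Irrelevant
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
    labels=["Irrelevant", "Relevant"],
)
args = TrainingArguments(batch_size=16, num_epochs=1)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the LogisticRegression head
print(model.predict(["Scaling up non-contributory support to children helps bridge safety-net gaps."]))
```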
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Relevant | <ul><li>'Complementing cash transfers with nutrition and health services exemplifies a comprehensive approach that addresses the multifaceted needs of children in impoverished communities.'</li><li>'Non-contributory support mechanisms, financed through taxation, are vital to reaching children and families excluded from formal employment-based schemes, thus addressing structural inequalities.'</li><li>'Predictable disbursement schedules foster trust and reliability, enabling caregivers to secure consistent access to essential resources for their children.'</li></ul> |
| Irrelevant | <ul><li>'For maximum impact on nutrition, staff \nand volunteers will need clear and detailed guidelines on the content and priorities of outreach \nvisits, communities will need to be aware of the timing and place of the visits, and staff and \nvolunteers will need to keep accurate records of community members that should be present \nor visited during outreach (e.g.'</li><li>'287Human Development, Poverty and Public Programmes\n\nBihar.'</li><li>'The need for synchronized and automatic weather collection systems across the different agro-ecological zones of the country guarantees a higher data resolution for reliable data processing and allows for a systematic presentation of spatio-temporal weather variability and mapping of vulnerable areas (BNRCC, 2011).'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("faodl/v02_model_child_and_family_support_benefits_mpnet_60_sample")
# Run inference
preds = model("STRATGEY FOR AGRICULTURE AND WATER – HARMONIZED PROGRAM DESIGN DOCUMENT – FINAL
17
3.4 Agriculture and Agribusiness
107.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 27.8438 | 180 |
| Label | Training Sample Count |
|:-----------|:----------------------|
| Irrelevant | 48 |
| Relevant | 48 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.23 | - |
| 0.2083 | 50 | 0.1041 | - |
| 0.4167 | 100 | 0.0018 | - |
| 0.625 | 150 | 0.0006 | - |
| 0.8333 | 200 | 0.0004 | - |
### Framework Versions
- Python: 3.11.13
- SetFit: 1.1.2
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
JayHyeon/pythia-2.8b-DPO_1e-6_1.0vpo_constant-1ep
|
JayHyeon
| 2025-06-18T21:54:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T13:07:17Z |
---
base_model: EleutherAI/pythia-2.8b
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: pythia-2.8b-DPO_1e-6_1.0vpo_constant-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for pythia-2.8b-DPO_1e-6_1.0vpo_constant-1ep
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-DPO_1e-6_1.0vpo_constant-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/5c9ex9db)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
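A condensed sketch of how such a DPO run is typically set up with TRL; the hyperparameters shown are assumptions inferred from the model name, not the exact values used for this checkpoint:
```python
# Sketch: DPO fine-tuning setup with TRL; hyperparameters are illustrative.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-2.8b")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

config = DPOConfig(
    output_dir="pythia-2.8b-dpo",
    learning_rate=1e-6,   # matches the "1e-6" in the model name
    num_train_epochs=1,   # matches the "1ep" in the model name
)
trainer = DPOTrainer(
    model=model,
    args=config,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```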
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ohjoonhee/hai-convnext-fold4
|
ohjoonhee
| 2025-06-18T21:52:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"timm_wrapper",
"image-classification",
"timm",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-18T21:51:50Z |
---
library_name: transformers
tags:
- timm
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ohjoonhee/hai-convnext-fold3
|
ohjoonhee
| 2025-06-18T21:51:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"timm_wrapper",
"image-classification",
"timm",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-18T21:51:15Z |
---
library_name: transformers
tags:
- timm
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luyotw/openfun-ivod-whisper-medium-WangMeiHui-11-46
|
luyotw
| 2025-06-18T21:48:04Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-18T20:34:23Z |
# Fine-tune Information
- Base model: `openai/whisper-medium`
- Number of audio clips used: 4999
- Total audio length: 2.84 hours
- Average audio length: 2.05 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 02:29:39
- Model size: 2.85 GB
---
# Model Card
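A minimal usage sketch with the 🤗 Transformers ASR pipeline; the audio file name is an assumption:
```python
# Sketch: transcribe audio with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="luyotw/openfun-ivod-whisper-medium-WangMeiHui-11-46",
)
result = asr("sample.wav")  # assumption: a local audio file
print(result["text"])
```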
|
New-tutorial-two-wolf-one-girl-viral-Video/FULL.VIDEO.two.wolf.one.girl.Viral.Video.Tutorial.Official
|
New-tutorial-two-wolf-one-girl-viral-Video
| 2025-06-18T21:47:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T21:47:39Z |
|
nnilayy/dreamer-valence-binary-classification-Kfold-3
|
nnilayy
| 2025-06-18T21:46:51Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T21:46:49Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
1-RAFA-MARTINS-E-CADEIRANTE/Full.18.RAFA.MARTINS.E.CADEIRANTE.VIDEO.RAFA.MARTTINZ.EROME
|
1-RAFA-MARTINS-E-CADEIRANTE
| 2025-06-18T21:45:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T21:39:07Z |
|
sanshi9999/qwen2.5-3b-breakdata500-tokenizer
|
sanshi9999
| 2025-06-18T21:44:55Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T21:44:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnilayy/deap-dominance-binary-classification-Kfold-3
|
nnilayy
| 2025-06-18T21:37:18Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T21:37:16Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Videos-jobz-hunting-sajal-malik-17k/WATCH.jobz.hunting.sajal.malik.viral.video.original
|
Videos-jobz-hunting-sajal-malik-17k
| 2025-06-18T21:26:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T21:21:34Z |
|
phospho-app/gc1724-ACT-bottle-bqw91
|
phospho-app
| 2025-06-18T21:18:43Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-18T17:27:04Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Training process exceeded timeout of 10800 seconds. We have uploaded the last checkpoint. Please consider lowering the batch size or number of steps if you wish to train the model longer.
```
## Training parameters:
- **Dataset**: [gc1724/bottle](https://huggingface.co/datasets/gc1724/bottle)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
videos-Nirmal-meena-18-Viral-Video-Link/Original.Full.Clip.Nirmal.meena.Viral.Video.Leaks.Official
|
videos-Nirmal-meena-18-Viral-Video-Link
| 2025-06-18T21:17:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T21:17:32Z |
|
Diminishkovski/car-classifier-test
|
Diminishkovski
| 2025-06-18T21:16:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T21:16:30Z |
# MLFinalProject2025Template
Template repository to be used to deliver the final Machine Learning Project as part of the Brainster Data Science Academy in 2025.
Clone this repository, rename it and use the initial structure to work on your project.
## 🚀 Getting Started
### 📥 Clone the Template
1. Clone this repository to your local machine:
```bash
git clone https://github.com/your-username/MLFinalProject2025Template.git
cd MLFinalProject2025Template
```
2. Rename the project directory to match your project name:
```bash
cd ..
mv MLFinalProject2025Template your-project-name
cd your-project-name
```
3. Remove the existing git history and initialize a new repository:
```bash
rm -rf .git
git init
git add .
git commit -m "Initial commit: ML project template"
```
4. (Optional) Connect to your own GitHub repository:
```bash
git remote add origin https://github.com/your-username/your-project-name.git
git branch -M main
git push -u origin main
```
### 🔧 Environment Setup
1. Create a virtual environment:
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
2. Install the required dependencies:
```bash
pip install -r requirements.txt
```
3. Install the project package in development mode:
```bash
pip install -e .
```
### ⚙️ Project Configuration
1. **Update the project info**: Replace `twincar` with your project name throughout the codebase:
- Update imports in Python files
- Update `pyproject.toml` with your project details
2. **Configure your project**: Edit `twincar/config.py` (or `your_project/config.py`) to set up project-specific configurations such as:
- Data paths
- Model parameters
- API keys (use environment variables)
- Other project constants
### 📁 Using the Template Structure
#### 💾 Data Management
- **Raw data**: Place your original datasets in `data/raw/`
- **External data**: Third-party data sources go in `data/external/`
- **Processed data**: Clean, processed datasets for modeling in `data/processed/`
- **Interim data**: Temporary data transformations in `data/interim/`
#### 🔄 Development Workflow
1. **Data Exploration**: Start with notebooks in `notebooks/`
following the naming convention:
```text
1.0-[initials]-initial-data-exploration.ipynb
2.0-[initials]-data-cleaning.ipynb
3.0-[initials]-feature-engineering.ipynb
```
2. **Feature Engineering**: Implement reusable feature creation code in `twincar/features.py`
3. **Model Development**:
- Training scripts: `twincar/modeling/train.py`
- Prediction scripts: `twincar/modeling/predict.py`
- Save trained models in `models/`
4. **Visualization**: Create plotting functions in `twincar/plots.py`
5. **Documentation**:
- Update this README with your project details
- Add documentation in `docs/` if needed
- Store references and data dictionaries in `references/`
### ⚡ Quick Start Commands
If you have `make` installed, you can use these convenience commands:
```bash
# Set up the environment
make create_environment
make requirements
# Download/process data (customize in Makefile)
make data
# Train models (customize in Makefile)
make train
# Generate reports (customize in Makefile)
make reports
```
### 🎯 Next Steps
1. **Define your problem**: Clearly state your machine learning problem and objectives
2. **Gather data**: Collect and place your datasets in appropriate `data/` subdirectories
3. **Explore**: Start with exploratory data analysis in Jupyter notebooks
4. **Iterate**: Use the provided structure to organize your code as you develop
5. **Document**: Keep this README updated with project-specific information
### 💡 Tips for Success
- **Version control**: Commit frequently with meaningful messages
- **Data versioning**: Consider using DVC (Data Version Control) for large datasets
- **Reproducibility**: Use `requirements.txt` and document your environment
- **Code quality**: Follow PEP 8 and add type hints to your functions
- **Documentation**: Write docstrings and keep documentation up to date
## 📂 Project Organization
```text
├── LICENSE <- Open-source license if one is chosen
├── Makefile <- Makefile with convenience commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- A default mkdocs project; see www.mkdocs.org for details
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── pyproject.toml <- Project configuration file with package metadata for
│ twincar and configuration for tools like black
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│
└── twincar <- Source code for use in this project.
│
├── __init__.py <- Makes twincar a Python module
│
├── config.py <- Store useful variables and configuration
│
├── dataset.py <- Scripts to download or generate data
│
├── features.py <- Code to create features for modeling
│
├── modeling
│ ├── __init__.py
│ ├── predict.py <- Code to run model inference with trained models
│ └── train.py <- Code to train models
│
└── plots.py <- Code to create visualizations
```
--------
|
hon9kon9ize/CantoneseLLMChat-v1.0-7B
|
hon9kon9ize
| 2025-06-18T21:16:31Z | 2,220 | 6 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"arxiv:2503.12440",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-02T08:17:17Z |
---
license: apache-2.0
library_name: transformers
tags:
- llama-factory
- full
- generated_from_trainer
base_model: hon9kon9ize/CantoneseLLM-v1.0
model-index:
- name: CantoneseLLMChat-v1.0-7B
results: []
---
# CantoneseLLMChat-v1.0-7B

Cantonese LLM Chat v1.0 is the first generation Cantonese LLM from hon9kon9ize.
Building upon the success of the [v0.5 preview](https://huggingface.co/hon9kon9ize/CantoneseLLMChat-v0.5), the model excels in Hong Kong-specific knowledge and Cantonese conversation.
## Model description
The base model was obtained via continuous pre-training of [Qwen 2.5 7B](https://huggingface.co/Qwen/Qwen2.5-7B) on 600 million publicly available Hong Kong news articles and Cantonese web pages.
The instruction fine-tuned model was trained on a dataset of 75,000 instruction pairs; 45,000 pairs were Cantonese instructions generated by other LLMs and reviewed by humans.
The model was trained on one Nvidia H100 80GB HBM3 GPU on the [Genkai Supercomputer](https://www.cc.kyushu-u.ac.jp/scp/eng/system/Genkai/hardware/).
## Basic Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "hon9kon9ize/CantoneseLLMChat-v1.0-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def chat(messages, temperature=0.9, max_new_tokens=200):
    input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt').to('cuda:0')
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens, temperature=temperature)
    response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=False)
    return response

prompt = "邊個係香港特首?"  # "Who is the Chief Executive of Hong Kong?"
messages = [
    {"role": "system", "content": "you are a helpful assistant."},
    {"role": "user", "content": prompt}
]

print(chat(messages))  # 香港特別行政區行政長官係李家超。<|im_end|> ("The Chief Executive of the HKSAR is John Lee Ka-chiu.")
```
## Performance
Best-in-class open-source LLM for understanding Cantonese and Hong Kong culture on the [HK-Eval Benchmark](https://arxiv.org/pdf/2503.12440).
However, as one can observe, reasoning models performed dramatically better than their counterparts. We are currently working on reasoning models for v2.
| Model | HK Culture (zero-shot) | Cantonese Linguistics |
|---------------------------|:----------------------:|:---------------------:|
| CantoneseLLMChat v0.5 6B | 52.0% | 12.8% |
| CantoneseLLMChat v0.5 34B | 72.5% | 54.5% |
| CantoneseLLMChat v1.0 3B | 56.0% | 45.7% |
| CantoneseLLMChat v1.0 7B | 60.3% | 46.5% |
| CantoneseLLMChat v1.0 32B | 69.8% | 52.7% |
| CantoneseLLMChat v1.0 72B | 75.4% | 59.6% |
| Llama 3.1 8B Instruct | 45.6% | 35.1% |
| Llama 3.1 70B Instruct | 63.0% | 50.3% |
| Qwen2.5 7B Instruct | 51.2% | 30.3% |
| Qwen2.5 32B Instruct | 59.9% | 45.1% |
| Qwen2.5 72B Instruct | 65.9% | 45.9% |
| Claude 3.5 Sonnet | 71.7% | 63.2% |
| DeepSeek R1 | 88.8% | 77.5% |
| Gemini 2.0 Flash | 80.2% | 75.3% |
| Gemini 2.5 Pro | 92.1% | 87.3% |
| GPT4o | 77.5% | 63.8% |
| GPT4o-mini | 55.6% | 57.3% |
|
omertugrul/whisper-small-kurmanji-v5
|
omertugrul
| 2025-06-18T21:06:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-18T09:10:19Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-kurmanji-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-kurmanji-v5
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4079
- Wer: 12.5070
## Model description
More information needed
## Intended uses & limitations
More information needed
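A minimal transcription sketch, assuming a standard fine-tuned Whisper checkpoint (the audio file path is illustrative):
```python
# Minimal sketch; "sample.wav" stands in for a Kurmanji audio recording.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="omertugrul/whisper-small-kurmanji-v5")
print(asr("sample.wav")["text"])
```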
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.8932 | 0.2660 | 50 | 1.6670 | 81.2906 |
| 0.6587 | 0.5319 | 100 | 0.7650 | 39.9895 |
| 0.4079 | 0.7979 | 150 | 0.5699 | 29.1863 |
| 0.299 | 1.0638 | 200 | 0.4793 | 23.8078 |
| 0.2536 | 1.3298 | 250 | 0.4319 | 21.6458 |
| 0.2263 | 1.5957 | 300 | 0.3959 | 19.5267 |
| 0.2047 | 1.8617 | 350 | 0.3704 | 19.0324 |
| 0.123 | 2.1277 | 400 | 0.3590 | 17.8097 |
| 0.1225 | 2.3936 | 450 | 0.3579 | 16.9166 |
| 0.1248 | 2.6596 | 500 | 0.3476 | 18.1623 |
| 0.1211 | 2.9255 | 550 | 0.3342 | 16.8408 |
| 0.0645 | 3.1915 | 600 | 0.3458 | 15.3149 |
| 0.0635 | 3.4574 | 650 | 0.3402 | 15.3907 |
| 0.0611 | 3.7234 | 700 | 0.3350 | 15.0677 |
| 0.0643 | 3.9894 | 750 | 0.3357 | 14.9293 |
| 0.0304 | 4.2553 | 800 | 0.3512 | 14.2174 |
| 0.0335 | 4.5213 | 850 | 0.3488 | 13.9999 |
| 0.0291 | 4.7872 | 900 | 0.3568 | 13.9175 |
| 0.0247 | 5.0532 | 950 | 0.3618 | 13.9835 |
| 0.0155 | 5.3191 | 1000 | 0.3608 | 13.9208 |
| 0.0159 | 5.5851 | 1050 | 0.3585 | 13.3738 |
| 0.0162 | 5.8511 | 1100 | 0.3626 | 13.2288 |
| 0.0096 | 6.1170 | 1150 | 0.3684 | 13.4034 |
| 0.0062 | 6.3830 | 1200 | 0.3673 | 13.0936 |
| 0.0066 | 6.6489 | 1250 | 0.3719 | 13.2881 |
| 0.0056 | 6.9149 | 1300 | 0.3766 | 12.5169 |
| 0.0026 | 7.1809 | 1350 | 0.3842 | 12.5531 |
| 0.0023 | 7.4468 | 1400 | 0.3888 | 12.5433 |
| 0.0025 | 7.7128 | 1450 | 0.3910 | 12.5861 |
| 0.0026 | 7.9787 | 1500 | 0.3915 | 12.5696 |
| 0.0015 | 8.2447 | 1550 | 0.3986 | 12.7113 |
| 0.0013 | 8.5106 | 1600 | 0.3979 | 12.6158 |
| 0.0013 | 8.7766 | 1650 | 0.4021 | 12.5103 |
| 0.001 | 9.0426 | 1700 | 0.4038 | 12.4971 |
| 0.0009 | 9.3085 | 1750 | 0.4067 | 12.4279 |
| 0.0009 | 9.5745 | 1800 | 0.4065 | 12.4971 |
| 0.0008 | 9.8404 | 1850 | 0.4079 | 12.5070 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example
|
Heralax
| 2025-06-18T21:06:25Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:axolotl_rag_conversations_facts.jsonl",
"dataset:axolotl_correction_conversations_facts.json",
"dataset:pretraining_subset_2170418.jsonl",
"dataset:factual_sft_completion/combined_all_0.jsonl",
"dataset:factual_sft_completion/combined_all_1.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl",
"base_model:Heralax/test-model-4-pretrain",
"base_model:quantized:Heralax/test-model-4-pretrain",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-09T08:29:22Z |
---
library_name: transformers
license: llama3.1
base_model: Heralax/test-model-4-pretrain
tags:
- axolotl
- generated_from_trainer
datasets:
- axolotl_rag_conversations_facts.jsonl
- axolotl_correction_conversations_facts.json
- pretraining_subset_2170418.jsonl
- factual_sft_completion/combined_all_0.jsonl
- factual_sft_completion/combined_all_1.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
- generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
- >-
generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
model-index:
- name: test-model-4-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<details>
```yaml
base_model: Heralax/test-model-4-pretrain
tokenizer_type: AutoTokenizer
model_type: AutoModelForCausalLM
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: axolotl_rag_conversations_facts.jsonl
type: input_output
- path: axolotl_correction_conversations_facts.json
type: input_output
- path: pretraining_subset_2170418.jsonl
type: completion
- path: factual_sft_completion/combined_all_0.jsonl
type: completion
- path: factual_sft_completion/combined_all_1.jsonl
type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
type: completion
- path: generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
type: completion
dataset_prepared_path: last_finetune_prepared
output_dir: ./finetune-model-output
seed: 1337
sequence_len: 5000
sample_packing: true
pad_to_sequence_len: false
shuffle_merged_datasets: true
gradient_accumulation_steps: 75
micro_batch_size: 2
eval_batch_size: 4
num_epochs: 5
optimizer: paged_adamw_8bit
lr_scheduler: constant
learning_rate: 2.0e-05
noisy_embedding_alpha: 5
weight_decay: 0
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
logging_steps: 1
xformers_attention: false
flash_attention: true
chat_template: chatml
auto_resume_from_checkpoints: false
warmup_ratio: 0.1
evals_per_epoch: 1
val_set_size: 0.04
saves_per_epoch: 1
eval_sample_packing: false
save_total_limit: 2
special_tokens:
pad_token: <unk>
use_liger_kernel: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
sequence_length: 10000
wandb_project: test-project
wandb_entity: ''
wandb_watch: ''
wandb_run_id: ''
wandb_log_model: ''
hub_model_id: Heralax/test-model-4-sft
hub_strategy: all_checkpoints
```
</details><br>
# llama-Augmentoolkit-Quickstart-Factual-Demo-Example
This model achieves the following results on the evaluation set:
- Loss: 0.6876
(See? Number go down. Augmentoolkit works.)
This is a demo model produced by running through the quickstart of [Augmentoolkit's](https://github.com/e-p-armstrong/augmentoolkit) Factual Finetuning pipeline. The model was taught about some of the US Army Field Manuals.
The following manuals were trained on:
```
ARN14613_FM 1-05 FINAL WEB.pdf.txt ARN19639_FM 3-14 FINAL WEB.pdf.txt ARN31505-FM_3-96-000-WEB-1.pdf.txt ARN34470-FM_6-99-000-WEB-1.pdf.txt ARN35577-FM_3-55-000-WEB-0.pdf.txt
ARN15310-FM_3-13.4-000-WEB-2.pdf.txt ARN21797_FM_3-04_FINAL_WEB_wfix.pdf.txt ARN33094-FM_3-57-000-WEB-1.pdf.txt ARN34770-FM_3-94-000-WEB-1.pdf.txt ARN35791-FM_4-02-001-WEB-3.pdf.txt
ARN17082-FM_3-11-000-WEB-1.pdf.txt ARN30964-FM_7-22-001-WEB-4.pdf.txt ARN33127-FM_3-12-000-WEB-1.pdf.txt ARN34864-FM_3-61-000-WEB-1.pdf.txt ARN35838-FM_3-01.44-000-WEB-1.pdf.txt
ARN19185_FM 6-02_FINAL_WEB.pdf.txt ARN31339-FM_3-01-000-WEB-1.pdf.txt ARN33331-FM_1-0-000-WEB-1.pdf.txt ARN35076-FM_7-0-000-WEB-1.pdf.txt ARN36290-FM_3-0-000-WEB-2.pdf.txt
ARN19354_FM 6-27 _C1_FINAL_WEB_v2.pdf.txt ARN31353-FM_3-34-000-WEB-1.pdf.txt ARN34192-FM_3-81-000-WEB-1.pdf.txt ARN35404-FM_6-0-000-WEB-1.pdf.txt ARN36735-FM_6-22-000-WEB-1.pdf.txt
```
The `prompt.txt`, `template.txt`, RAG dataset, and GGUF file are all inside this folder so that people can run this model themselves using Augmentoolkit's chat interface. Just download everything outside the checkpoint-xx/ folders (i.e., not the model.safetensors files), put it all in one folder, and configure the basic-server or rag-server config to point at the prompt, template, etc. (see the documentation pages for those utility pipelines), and bang: Augmentoolkit will run these models with the correct prompt template and configuration.
Stop sequence == "\*\*Finished.\*\*"
Why did I do it like that? Because the more SFT text resembles the pretraining text, the more that knowledge and capabilities from the pretraining will carry over to the SFT. Convention and chatml be damned, I like better performance.
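A minimal inference sketch under those assumptions, using transformers' `stop_strings` support and skipping the repo's `prompt.txt`/`template.txt` formatting for brevity (the question is illustrative):
```python
# Minimal sketch; stop_strings requires transformers v4.39+ and passing the
# tokenizer to generate(). The stop sequence comes from this card.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What does FM 7-0 cover?", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=300,
    tokenizer=tokenizer,
    stop_strings=["**Finished.**"],
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```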
Related Links:
- [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit)
- [Other Factual Demo Model (Nursing)](https://huggingface.co/Heralax/llama-Augmentoolkit-Openstax-Nursing-Books-Example)
- [Not-Undertrained Factual Model](https://huggingface.co/Heralax/llama-Augmentoolkit-MilitaryModel-Demo-NotUndertrained)
- [gRPo model (thoughts)](https://huggingface.co/Heralax/llama-gRPo-thoughtprocess)
- [gRPo model (no thoughts)](https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts)
Q: Why the Llama license?
A: The quickstart uses Llama 3 to generate the data for the sake of speed and hardware compatibility. Therefore, the Llama license applies to this demo model.
Example (no RAG btw):

|
Heralax/llama-Augmentoolkit-MilitaryModel-Demo-NotUndertrained
|
Heralax
| 2025-06-18T21:05:39Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:axolotl_rag_conversations_facts.jsonl",
"dataset:axolotl_correction_conversations_facts.json",
"dataset:pretraining_subset_2170418.jsonl",
"dataset:factual_sft_completion/combined_all_0.jsonl",
"dataset:factual_sft_completion/combined_all_2.jsonl",
"dataset:factual_sft_completion/combined_all_3.jsonl",
"dataset:factual_sft_completion/combined_all_1.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_1081745.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_1081745.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_1081745.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_4326980.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_1081745.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl",
"dataset:generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_2163490.jsonl",
"base_model:Heralax/test-model-5-pretrain",
"base_model:finetune:Heralax/test-model-5-pretrain",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T17:37:03Z |
---
library_name: transformers
license: llama3.1
base_model: Heralax/test-model-5-pretrain
tags:
- axolotl
- generated_from_trainer
datasets:
- axolotl_rag_conversations_facts.jsonl
- axolotl_correction_conversations_facts.json
- pretraining_subset_2170418.jsonl
- factual_sft_completion/combined_all_0.jsonl
- factual_sft_completion/combined_all_2.jsonl
- factual_sft_completion/combined_all_3.jsonl
- factual_sft_completion/combined_all_1.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_1081745.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_534422.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1068845.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_1081745.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_1081745.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_534422.jsonl
- generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_534422.jsonl
- >-
generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_4326980.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_1081745.jsonl
- >-
generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2137691.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_534422.jsonl
- >-
generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_2163490.jsonl
model-index:
- name: test-model-5-sft
results: []
---
# llama-Augmentoolkit-MilitaryModel-Demo-NotUndertrained
This model achieves the following results on the evaluation set:
- Loss: 0.6264
This is a less-undertrained version of one of the demo factual models (the military one). Both of those models were a bit undertrained; this one suffers from that less and should produce better results (theoretically; I have not tested it yet).
Same prompt as the military one.
Try this model out!
|
sgonzalezygil/sd-finetuning-dreambooth-v15-1400
|
sgonzalezygil
| 2025-06-18T21:02:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-18T21:01:31Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
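A minimal sketch, assuming this repository hosts a standard `StableDiffusionPipeline` checkpoint (as the repo tags suggest); the prompt is illustrative:
```python
# Minimal sketch; the repo tags indicate a StableDiffusionPipeline checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v15-1400", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo in the style of the fine-tuned subject").images[0]
image.save("sample.png")
```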
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-tutorial-Trishakar-Madhu-Viral-Videos/FULL.VIDEO.Trishakar.Madhu.Viral.Video.Tutorial.Official
|
New-tutorial-Trishakar-Madhu-Viral-Videos
| 2025-06-18T21:02:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T21:02:14Z |
|
swapnillo/RKD-retrained
|
swapnillo
| 2025-06-18T21:00:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-06-18T20:59:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
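A minimal sketch, assuming this is a Qwen2.5-VL checkpoint loadable via the transformers image-text-to-text pipeline (per the repo tags); the image URL and question are illustrative:
```python
# Minimal sketch; requires a transformers version with the
# image-text-to-text pipeline and Qwen2.5-VL support.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="swapnillo/RKD-retrained")
messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/cat.png"},  # illustrative image
        {"type": "text", "text": "Describe this image."},
    ]}
]
print(pipe(text=messages, max_new_tokens=64))
```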
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed28-2025-06-18
|
morturr
| 2025-06-18T20:59:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T20:59:00Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
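A minimal loading sketch, assuming this repo hosts a PEFT LoRA adapter for the gated Llama-2 base model (access to meta-llama/Llama-2-7b-hf and an authorized HF token required):
```python
# Minimal sketch; AutoPeftModelForCausalLM resolves the base model from the
# adapter config.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "morturr/Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed28-2025-06-18"
model = AutoPeftModelForCausalLM.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```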
|
sgonzalezygil/sd-finetuning-dreambooth-v15
|
sgonzalezygil
| 2025-06-18T20:58:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-18T20:56:47Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
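A minimal sketch, assuming this repository hosts a standard `StableDiffusionPipeline` checkpoint (as the repo tags suggest); the prompt is illustrative:
```python
# Minimal sketch; the repo tags indicate a StableDiffusionPipeline checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v15", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo in the style of the fine-tuned subject").images[0]
image.save("sample.png")
```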
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-videos-Arovi-Nusrat-Ridhi-18-Video/19.FULL.VIDEO.Arovi.Nusrat.Ridhi.Viral.Video.Tutorial.Official
|
New-videos-Arovi-Nusrat-Ridhi-18-Video
| 2025-06-18T20:54:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T20:54:37Z |
|
opentargets/locus_to_gene_25.06-ppp
|
opentargets
| 2025-06-18T20:54:29Z | 0 | 0 |
sklearn
|
[
"sklearn",
"skops",
"tabular-classification",
"region:us"
] |
tabular-classification
| 2025-06-18T10:37:37Z |
---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: skops
model_file: classifier.skops
widget:
- structuredData:
credibleSetConfidence:
- 0.75
- 0.75
- 0.75
distanceFootprintMean:
- 1.0
- 0.850557267665863
- 0.8636571168899536
distanceFootprintMeanNeighbourhood:
- 1.0
- 0.850557267665863
- 0.8636571168899536
distanceSentinelFootprint:
- 1.0
- 0.850557267665863
- 0.8636571168899536
distanceSentinelFootprintNeighbourhood:
- 1.0
- 0.850557267665863
- 0.8636571168899536
distanceSentinelTss:
- 0.9999350309371948
- 0.6872674226760864
- 0.8636571168899536
distanceSentinelTssNeighbourhood:
- 1.0
- 0.6873120665550232
- 0.863713264465332
distanceTssMean:
- 0.9999350309371948
- 0.6872674226760864
- 0.8636571168899536
distanceTssMeanNeighbourhood:
- 1.0
- 0.6873120665550232
- 0.863713264465332
eQtlColocClppMaximum:
- 0.0
- 0.0
- 0.0
eQtlColocClppMaximumNeighbourhood:
- 0.0
- 0.0
- 0.0
eQtlColocH4Maximum:
- 0.0
- 0.0
- 0.0
eQtlColocH4MaximumNeighbourhood:
- 0.0
- 0.0
- 0.0
geneCount500kb:
- 15.0
- 15.0
- 15.0
geneId:
- ENSG00000169174
- ENSG00000162390
- ENSG00000162391
goldStandardSet:
- positive
- negative
- negative
pQtlColocClppMaximum:
- 1.0
- 0.0
- 0.0
pQtlColocClppMaximumNeighbourhood:
- 1.0
- 0.0
- 0.0
pQtlColocH4Maximum:
- 1.0
- 0.0
- 0.0
pQtlColocH4MaximumNeighbourhood:
- 1.0
- 0.0
- 0.0
proteinGeneCount500kb:
- 7.0
- 7.0
- 7.0
sQtlColocClppMaximum:
- 0.0
- 0.0
- 0.0
sQtlColocClppMaximumNeighbourhood:
- 0.0
- 0.0
- 0.0
sQtlColocH4Maximum:
- 0.0
- 0.0
- 0.0
sQtlColocH4MaximumNeighbourhood:
- 0.0
- 0.0
- 0.0
studyLocusId:
- 02c442ea4fa5ab80586a6d1ff6afa843
- 02c442ea4fa5ab80586a6d1ff6afa843
- 02c442ea4fa5ab80586a6d1ff6afa843
traitFromSourceMappedId:
- EFO_0004611
- EFO_0004611
- EFO_0004611
vepMaximum:
- 0.6600000262260437
- 0.0
- 0.0
vepMaximumNeighbourhood:
- 1.0
- 0.0
- 0.0
vepMean:
- 0.6600000262260437
- 0.0
- 0.0
vepMeanNeighbourhood:
- 1.0
- 0.0
- 0.0
---
# Model description
The locus-to-gene (L2G) model derives features to prioritise likely causal genes at each GWAS locus based on genetic and functional genomics features. The main categories of predictive features are:
- Distance: from credible set variants to the gene
- Molecular QTL colocalisation
- Variant pathogenicity: from VEP
More information at: https://opentargets.github.io/gentropy/python_api/methods/l2g/_l2g/
## Intended uses & limitations
[More Information Needed]
## Training Procedure
Gradient Boosting Classifier
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|--------------|
| ccp_alpha | 0 |
| criterion | friedman_mse |
| init | |
| learning_rate | 0.1 |
| loss | log_loss |
| max_depth | 3 |
| max_features | |
| max_leaf_nodes | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 5 |
| min_weight_fraction_leaf | 0.0 |
| n_estimators | 100 |
| n_iter_no_change | |
| random_state | 42 |
| subsample | 0.7 |
| tol | 0.0001 |
| validation_fraction | 0.1 |
| verbose | 0 |
| warm_start | False |
</details>
# How to Get Started with the Model
To use the model, load it with the `LocusToGeneModel.load_from_hub` method, which returns a `LocusToGeneModel` object; predictions on a feature matrix are then made with its `predict` method.
More information can be found at: https://opentargets.github.io/gentropy/python_api/methods/l2g/model/
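A minimal loading sketch (the import path follows the gentropy docs linked above, but may differ across gentropy versions):
```python
# Minimal sketch; assumes the gentropy package is installed.
from gentropy.method.l2g.model import LocusToGeneModel

model = LocusToGeneModel.load_from_hub("opentargets/locus_to_gene_25.06-ppp")
# model.predict(...) then scores a prepared L2G feature matrix
# (see the gentropy documentation for how to build one).
```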
# Citation
https://doi.org/10.1038/s41588-021-00945-5
# License
MIT
|
bruhzair/prototype-0.4x163
|
bruhzair
| 2025-06-18T20:53:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T19:36:54Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x163
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
* /workspace/cache/models--ReadyArt--Fallen-Abomination-70B-R1-v4.1/snapshots/074da842177c29e48f1b6d4963d6972a06b99752
* /workspace/cache/models--bruhzair--prototype-0.4x136/snapshots/0ddea8f7db58c358063bb0b70937b207925ecfbb
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--bruhzair--prototype-0.4x136/snapshots/0ddea8f7db58c358063bb0b70937b207925ecfbb
- model: /workspace/cache/models--ReadyArt--Fallen-Abomination-70B-R1-v4.1/snapshots/074da842177c29e48f1b6d4963d6972a06b99752
- model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
base_model: /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
merge_method: model_stock
tokenizer:
source: base
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
|
siri310/gemma-3-finetune
|
siri310
| 2025-06-18T20:52:43Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T03:35:43Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** siri310
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
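A minimal generation sketch, assuming the checkpoint loads as a standard Gemma-3 causal LM with a recent transformers release (the prompt is illustrative):
```python
# Minimal sketch; requires a transformers version with Gemma-3 support.
from transformers import pipeline

generator = pipeline("text-generation", model="siri310/gemma-3-finetune")
print(generator("Explain LoRA fine-tuning in one sentence.", max_new_tokens=64))
```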
|
Will-est/q-Taxi-v3
|
Will-est
| 2025-06-18T20:52:17Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T20:52:14Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # older course notebooks use `import gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="Will-est/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
nnilayy/dreamer-valence-binary-classification-Kfold-2
|
nnilayy
| 2025-06-18T20:51:55Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T20:51:52Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
JonLoRA/deynairaLoRAv2
|
JonLoRA
| 2025-06-18T20:50:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T19:11:29Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: photo of a girl
---
# Deynairalorav2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `photo of a girl` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "photo of a girl",
    "lora_weights": "https://huggingface.co/JonLoRA/deynairaLoRAv2/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('JonLoRA/deynairaLoRAv2', weight_name='lora.safetensors')
image = pipeline('photo of a girl').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0002
- LoRA rank: 64
## Contribute your own examples
You can use the [community tab](https://huggingface.co/JonLoRA/deynairaLoRAv2/discussions) to add images that show off what you’ve made with this LoRA.
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed18-2025-06-18
|
morturr
| 2025-06-18T20:50:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T20:49:59Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb1-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
New-tutorial-Sajal-Malik-18-Viral-Videos/FULL.VIDEO.Jobz.Hunting.Sajal.Malik.Viral.Video.Tutorial.Official
|
New-tutorial-Sajal-Malik-18-Viral-Videos
| 2025-06-18T20:49:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T20:49:23Z |
|
wongyck/BERT_twitter_1
|
wongyck
| 2025-06-18T20:48:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T20:48:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
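A minimal classification sketch, assuming this repository hosts a standard BERT sequence-classification checkpoint (per the repo tags); the label names depend on the unspecified fine-tuning setup:
```python
# Minimal sketch; the example tweet is illustrative.
from transformers import pipeline

classifier = pipeline("text-classification", model="wongyck/BERT_twitter_1")
print(classifier("Just watched the match, what a game!"))
```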
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Will-est/q-FrozenLake-v1-4x4-noSlippery
|
Will-est
| 2025-06-18T20:48:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T20:48:07Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # classic Gym API used by the course; newer code may use gymnasium

# `load_from_hub` is a course helper; a minimal sketch of it follows this block
model = load_from_hub(repo_id="Will-est/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
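`load_from_hub` is not part of a published library; below is a minimal sketch of it, assuming the Q-table was pushed as a pickle file (as in the Hugging Face Deep RL course):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled dict (contains the Q-table, "env_id", etc.)
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```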
|
nnilayy/deap-arousal-binary-classification-Kfold-2
|
nnilayy
| 2025-06-18T20:39:52Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T20:39:50Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
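A minimal sketch of loading such a model, assuming the original `nn.Module` subclass is available and mixes in `PyTorchModelHubMixin` (the class `EEGClassifier` and its layers below are hypothetical; the real class must match the one used at push time):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical architecture; substitute the actual model definition
class EEGClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 128, num_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.head(x)

# Downloads config.json plus the safetensors weights and rebuilds the module
model = EEGClassifier.from_pretrained("nnilayy/deap-arousal-binary-classification-Kfold-2")
```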
|
small-blue/250613-conf-15
|
small-blue
| 2025-06-18T20:39:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T20:27:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
official-video-viraly-lol-hindi-viral/18-video-full-video-viraly-lol-hindi-viral-video
|
official-video-viraly-lol-hindi-viral
| 2025-06-18T20:39:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T20:38:23Z |
|
nnilayy/dreamer-arousal-binary-classification-Kfold-1
|
nnilayy
| 2025-06-18T20:39:13Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T20:39:12Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
nnilayy/deap-valence-binary-classification-Kfold-2
|
nnilayy
| 2025-06-18T20:37:30Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T20:37:27Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
divyanshu94/ModernBERT-embed-base-dell-MRL
|
divyanshu94
| 2025-06-18T20:37:29Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:14020",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-18T20:35:04Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:14020
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: "I'd be delighted to provide you with detailed information about\
\ the rate structure for the Dell APEX Data Center Utility. Understanding the\
\ financial implications of your data center solutions is crucial, and Dell has\
\ designed a flexible rate structure to align with your business needs. \n\nFirstly,\
\ Dell APEX offers a unique approach by tailoring the rate structure based on\
\ your specific capacity requirements, anticipated growth, service level agreements\
\ (SLAs), and reporting needs. This means that the pricing is not a one-size-fits-all\
\ model but is customized to ensure that you only pay for what you need and use.\
\ This approach allows you to manage your budget more effectively and ensures\
\ that your expenses are directly tied to the business value you receive from\
\ the data center solutions. \n\nMoreover, the consistent pricing model helps\
\ in predicting and managing your operational expenses more efficiently. This\
\ is particularly beneficial for businesses that experience fluctuating demands\
\ and need a scalable solution that can adapt to their changing needs. By aligning\
\ the rate structure with your business objectives, Dell ensures that you can\
\ focus on growth and innovation without being burdened by unexpected costs. \n\
\nIn summary, the Dell APEX Data Center Utility's rate structure is designed to\
\ provide financial flexibility and predictability, enabling you to optimize your\
\ data center operations while aligning costs with business outcomes. This approach\
\ not only supports your current needs but also positions your business for future\
\ growth and success."
sentences:
- In what ways do role-based access controls in the PowerEdge R360 streamline operations
and minimize human error while enhancing security for businesses?
- What factors should a sales executive consider when recommending Dell's monitor
cables to a customer with specific requirements and budget constraints?
- In what ways does the Dell APEX Data Center Utility's pricing model help businesses
manage fluctuating demands and align costs with business outcomes for future growth?
- source_sentence: Thank you for your interest in the security features of Dell Precision
Workstations. In today's rapidly evolving digital landscape, data security is
paramount, and Dell recognizes this by integrating advanced security measures
into its Precision Workstations. These workstations are designed with a multi-layered
security approach that ensures comprehensive protection for your data. Dell's
Trusted Workspace is a cornerstone of this security strategy, offering a secure
environment that protects against both physical and cyber threats. This includes
hardware-based security features such as Dell SafeBIOS, which ensures the integrity
of the BIOS against attacks, and Dell SafeID, which protects user credentials
with a dedicated security chip. Additionally, software defenses are robust, with
Dell Endpoint Security Suite Enterprise providing advanced threat prevention and
encryption capabilities. This suite is designed to protect against malware and
unauthorized access, ensuring that sensitive information remains confidential.
Dell's commitment to security is evident in its collaboration with industry leaders
to continuously enhance its security offerings, ensuring that businesses can operate
with confidence, knowing their data is secure. Whether you're in healthcare, finance,
or any other industry dealing with sensitive data, Dell Precision Workstations
provide the peace of mind that comes with knowing your data is protected by some
of the most advanced security technologies available today.
sentences:
- How does Dell's policy on the use of materials from their site balance personal
use with the protection of their intellectual property rights?
- What specific benefits do the 16:9 and 21:9 aspect ratios offer to professionals
in fields like graphic design and data analysis, and how can a sales executive
leverage this information?
- How does Dell's multi-layered security approach in Precision Workstations address
both physical and cyber threats to ensure comprehensive data protection for businesses?
- source_sentence: 'I''m glad you asked about the power supply options for the Dell
PowerEdge R470, as choosing the right power supply is crucial for optimizing performance
and efficiency in your IT infrastructure. The Dell PowerEdge R470 offers a range
of power supply options designed to meet various operational needs and energy
efficiency goals. Specifically, you can select from dual and single power supply
configurations. The dual configuration is particularly beneficial for businesses
that require redundancy to ensure continuous operation, even in the event of a
power supply failure. This is especially important for industries where uptime
is critical, such as financial services or healthcare.
In terms of wattage, the PowerEdge R470 provides options for hot-plug MHS (Modular
Hot-Swap) power supplies with different capacities, including 800W and 1100W.
These options allow you to tailor the power supply to your server''s specific
power demands, helping to avoid over-provisioning and unnecessary energy consumption.
Furthermore, the power supplies are available in both titanium and non-titanium
variants. Titanium power supplies are known for their higher efficiency ratings,
which can lead to significant energy savings over time. This can be particularly
advantageous for data centers aiming to reduce their carbon footprint and operational
costs.
Additionally, the PowerEdge R470 offers configurations for fully redundant, non-redundant,
and non-redundant single setups. This flexibility allows businesses to choose
a configuration that aligns with their reliability requirements and budget constraints.
By understanding these options, you can make an informed decision that enhances
your server''s performance and aligns with your organization''s sustainability
goals.'
sentences:
- How does Dell's Virtual Desktop Infrastructure (VDI) ensure data security while
allowing employees to access applications or full desktops remotely without compromising
performance?
- What ergonomic features do Dell monitors offer to accommodate professionals who
spend long hours in front of screens, and why are these features important?
- What are the benefits of choosing titanium power supplies for the Dell PowerEdge
R470 in terms of energy efficiency and cost savings for data centers?
- source_sentence: "The PowerEdge XE9680L is a cutting-edge server designed to meet\
\ the demanding needs of AI training, large-scale inferencing, and high-performance\
\ computing (HPC). In today's data-driven world, organizations are increasingly\
\ relying on AI and HPC to gain insights and drive innovation. The PowerEdge XE9680L\
\ is equipped with 2 x 5th Generation Intel® Xeon® Scalable processors, which\
\ provide the computational power necessary to handle complex workloads efficiently.\
\ Additionally, it features 32 x DDR5 DIMM slots, allowing for extensive memory\
\ capacity that is crucial for processing large datasets. \n\nOne of the standout\
\ features of the PowerEdge XE9680L is its support for up to 122 TB of storage.\
\ This massive storage capacity ensures that organizations can store and access\
\ vast amounts of data without bottlenecks, which is essential for AI and HPC\
\ applications. Furthermore, the inclusion of 8 NVIDIA HGX B200 GPUs makes it\
\ highly capable of handling the parallel processing tasks required for AI training\
\ and inferencing. These GPUs are specifically designed to accelerate AI workloads,\
\ enabling faster model training and more efficient inferencing.\n\nFor businesses\
\ in industries such as finance, healthcare, or manufacturing, where AI and HPC\
\ are becoming integral to operations, the PowerEdge XE9680L offers a robust solution.\
\ It allows organizations to tailor their computing infrastructure to meet specific\
\ needs, whether it's processing financial models, analyzing medical images, or\
\ optimizing manufacturing processes. By investing in the PowerEdge XE9680L, organizations\
\ can take control of their AI and HPC initiatives, ensuring they remain competitive\
\ in a rapidly evolving technological landscape."
sentences:
- How can the ProDeploy Flex Factory Configured Services for Dell PowerEdge R6725
enhance asset management efficiency for IT managers and operations teams?
- How does Dell's ProDeploy Client Suite ensure a seamless transition for users
during the setup and configuration stages of new technology deployment?
- In what ways can businesses in finance, healthcare, or manufacturing leverage
the PowerEdge XE9680L to enhance their AI and HPC operations?
- source_sentence: "Certainly! The Dell PowerEdge R660 is designed to be a highly\
\ efficient and compact solution for businesses that require robust computing\
\ power without occupying too much physical space. This server is a 1U rack server,\
\ which means it is designed to fit into a standard 19-inch server rack and occupies\
\ only one rack unit of space. This compact form factor is particularly beneficial\
\ for data centers or businesses with limited space, allowing them to maximize\
\ their server capacity without needing to expand their physical infrastructure.\
\ \n\nThe dimensions of the PowerEdge R660 are quite specific: it measures 42.8\
\ mm in height, 482 mm in width, and 822.88 mm in depth with the bezel attached.\
\ If you choose to operate it without the bezel, the depth is slightly reduced\
\ to 809.04 mm. This precision in design ensures that the server can fit seamlessly\
\ into existing rack setups, providing flexibility in deployment. \n\nFor IT managers\
\ and data center operators, understanding these dimensions is crucial for planning\
\ and optimizing rack space. It allows for efficient cooling and cable management,\
\ which are essential for maintaining server performance and longevity. Moreover,\
\ the compact design does not compromise on performance, making the PowerEdge\
\ R660 an ideal choice for businesses looking to enhance their computing capabilities\
\ while maintaining a streamlined and efficient server environment."
sentences:
- How does the compact design of the Dell PowerEdge R660 benefit businesses with
limited physical space in their data centers or server rooms?
- What features of the Alienware Wireless Gaming Mouse - AW620M contribute to its
high customer satisfaction rating of 4.6 out of 5 stars from 442 reviews?
- What advantages does Dell's Remote Virtual V2V Migration service offer to tech
companies and startups that rely on virtual environments for business continuity?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: ModernBERT Embed base Legal Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6001283697047497
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7522464698331194
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8228498074454429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8953786906290115
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6001283697047497
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4527171587505349
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3174582798459563
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.18048780487804875
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.29862002567394097
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6745827984595636
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.787227214377407
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8953786906290115
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7469767901925571
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6743586608798449
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7028949967294865
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5949935815147626
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7503209242618742
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8215661103979461
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8915275994865212
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5949935815147626
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4503637141634574
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.31681643132220794
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.17971758664955068
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.29605263157894735
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6710526315789473
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.785622593068036
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8915275994865212
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7431061453036201
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.670197852354466
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6991878935707443
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5879332477535302
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7406931964056482
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8164313222079589
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8844672657252889
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5879332477535302
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.44437312794180567
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.31360718870346593
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.1783055198973042
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2928433889602054
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6627086007702182
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7780808729139923
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8844672657252889
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7359205485187926
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6628644782688455
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6927177285124128
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5564826700898587
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7073170731707317
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7785622593068036
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8645699614890886
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5564826700898587
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4225502781343603
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.29897304236200256
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.1741976893453145
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2772785622593068
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6306161745827985
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7424582798459564
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8645699614890886
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7080839549473346
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.632094463801783
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6638911387558878
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.5231065468549422
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.675224646983312
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7458279845956355
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8267008985879333
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5231065468549422
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.40029952931108254
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2857509627727856
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.16662387676508342
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2609114249037227
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5978818998716303
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7097240051347882
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8267008985879333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6732114974112485
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5982843287079503
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6318939163392485
name: Cosine Map@100
---
# ModernBERT Embed base Legal Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("divyanshu94/ModernBERT-embed-base-dell-MRL")
# Run inference
sentences = [
'Certainly! The Dell PowerEdge R660 is designed to be a highly efficient and compact solution for businesses that require robust computing power without occupying too much physical space. This server is a 1U rack server, which means it is designed to fit into a standard 19-inch server rack and occupies only one rack unit of space. This compact form factor is particularly beneficial for data centers or businesses with limited space, allowing them to maximize their server capacity without needing to expand their physical infrastructure. \n\nThe dimensions of the PowerEdge R660 are quite specific: it measures 42.8 mm in height, 482 mm in width, and 822.88 mm in depth with the bezel attached. If you choose to operate it without the bezel, the depth is slightly reduced to 809.04 mm. This precision in design ensures that the server can fit seamlessly into existing rack setups, providing flexibility in deployment. \n\nFor IT managers and data center operators, understanding these dimensions is crucial for planning and optimizing rack space. It allows for efficient cooling and cable management, which are essential for maintaining server performance and longevity. Moreover, the compact design does not compromise on performance, making the PowerEdge R660 an ideal choice for businesses looking to enhance their computing capabilities while maintaining a streamlined and efficient server environment.',
'How does the compact design of the Dell PowerEdge R660 benefit businesses with limited physical space in their data centers or server rooms?',
"What advantages does Dell's Remote Virtual V2V Migration service offer to tech companies and startups that rely on virtual environments for business continuity?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
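Because this model was trained with MatryoshkaLoss, embeddings can also be truncated to any of the smaller dimensionalities evaluated below at a modest quality cost. A minimal sketch using the `truncate_dim` option of Sentence Transformers:

```python
from sentence_transformers import SentenceTransformer

# Load the model with truncated output dimensionality (768, 512, 256, 128, or 64)
model = SentenceTransformer("divyanshu94/ModernBERT-embed-base-dell-MRL", truncate_dim=256)

embeddings = model.encode(["How compact is the Dell PowerEdge R660?"])
print(embeddings.shape)
# (1, 256)
```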
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.6001 |
| cosine_accuracy@3 | 0.7522 |
| cosine_accuracy@5 | 0.8228 |
| cosine_accuracy@10 | 0.8954 |
| cosine_precision@1 | 0.6001 |
| cosine_precision@3 | 0.4527 |
| cosine_precision@5 | 0.3175 |
| cosine_precision@10 | 0.1805 |
| cosine_recall@1 | 0.2986 |
| cosine_recall@3 | 0.6746 |
| cosine_recall@5 | 0.7872 |
| cosine_recall@10 | 0.8954 |
| **cosine_ndcg@10** | **0.747** |
| cosine_mrr@10 | 0.6744 |
| cosine_map@100 | 0.7029 |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.595 |
| cosine_accuracy@3 | 0.7503 |
| cosine_accuracy@5 | 0.8216 |
| cosine_accuracy@10 | 0.8915 |
| cosine_precision@1 | 0.595 |
| cosine_precision@3 | 0.4504 |
| cosine_precision@5 | 0.3168 |
| cosine_precision@10 | 0.1797 |
| cosine_recall@1 | 0.2961 |
| cosine_recall@3 | 0.6711 |
| cosine_recall@5 | 0.7856 |
| cosine_recall@10 | 0.8915 |
| **cosine_ndcg@10** | **0.7431** |
| cosine_mrr@10 | 0.6702 |
| cosine_map@100 | 0.6992 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5879 |
| cosine_accuracy@3 | 0.7407 |
| cosine_accuracy@5 | 0.8164 |
| cosine_accuracy@10 | 0.8845 |
| cosine_precision@1 | 0.5879 |
| cosine_precision@3 | 0.4444 |
| cosine_precision@5 | 0.3136 |
| cosine_precision@10 | 0.1783 |
| cosine_recall@1 | 0.2928 |
| cosine_recall@3 | 0.6627 |
| cosine_recall@5 | 0.7781 |
| cosine_recall@10 | 0.8845 |
| **cosine_ndcg@10** | **0.7359** |
| cosine_mrr@10 | 0.6629 |
| cosine_map@100 | 0.6927 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5565 |
| cosine_accuracy@3 | 0.7073 |
| cosine_accuracy@5 | 0.7786 |
| cosine_accuracy@10 | 0.8646 |
| cosine_precision@1 | 0.5565 |
| cosine_precision@3 | 0.4226 |
| cosine_precision@5 | 0.299 |
| cosine_precision@10 | 0.1742 |
| cosine_recall@1 | 0.2773 |
| cosine_recall@3 | 0.6306 |
| cosine_recall@5 | 0.7425 |
| cosine_recall@10 | 0.8646 |
| **cosine_ndcg@10** | **0.7081** |
| cosine_mrr@10 | 0.6321 |
| cosine_map@100 | 0.6639 |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 64
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5231 |
| cosine_accuracy@3 | 0.6752 |
| cosine_accuracy@5 | 0.7458 |
| cosine_accuracy@10 | 0.8267 |
| cosine_precision@1 | 0.5231 |
| cosine_precision@3 | 0.4003 |
| cosine_precision@5 | 0.2858 |
| cosine_precision@10 | 0.1666 |
| cosine_recall@1 | 0.2609 |
| cosine_recall@3 | 0.5979 |
| cosine_recall@5 | 0.7097 |
| cosine_recall@10 | 0.8267 |
| **cosine_ndcg@10** | **0.6732** |
| cosine_mrr@10 | 0.5983 |
| cosine_map@100 | 0.6319 |
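These tables can be reproduced with the evaluator named above. A minimal sketch, assuming you supply your own query, corpus, and relevance mappings (the three small dictionaries below are hypothetical placeholders):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("divyanshu94/ModernBERT-embed-base-dell-MRL")

# Hypothetical evaluation data: ids mapped to texts / sets of relevant doc ids
queries = {"q1": "How compact is the Dell PowerEdge R660?"}
corpus = {"d1": "The PowerEdge R660 is a 1U rack server designed to ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs, truncate_dim=768, name="dim_768"
)
results = evaluator(model)
print(results)  # includes cosine_ndcg@10, cosine_mrr@10, etc.
```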
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 14,020 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 53 tokens</li><li>mean: 309.43 tokens</li><li>max: 536 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 30.87 tokens</li><li>max: 48 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Thank you for your interest in the Dell PowerEdge R760xd2, a server that stands out in the market for its exceptional scalability and serviceability. This server is specifically designed to meet the growing demands of unstructured data, which is a critical need for many industries today. With the ability to support up to 28 drives, the R760xd2 offers a total storage capacity of 616 TB, making it an ideal solution for businesses that require extensive data storage capabilities. This is particularly beneficial for industries such as media and entertainment, healthcare, and finance, where large volumes of data are generated and need to be stored efficiently.<br><br>Moreover, the inclusion of NVMe drives and NVIDIA accelerators in the PowerEdge R760xd2 significantly reduces latency and enhances performance. This means that businesses can expect faster data processing and retrieval times, which is crucial for maintaining competitive advantage in today's fast-paced business environment. The server...</code> | <code>What features of the Dell PowerEdge R760xd2 contribute to its enhanced performance and reduced latency, and how do these features benefit businesses in competitive markets?</code> |
| <code>Thank you for reaching out with your query regarding detailed information on SupportAssist. For businesses that rely heavily on technology, having a robust support system is crucial. Dell's SupportAssist is designed to enhance the support experience by providing proactive and predictive support for your business PCs. To delve deeper into its functionalities, there is indeed a comprehensive document available known as the 'SupportAssist for Business PCs Administrator Guide.' This guide is meticulously crafted to provide you with an in-depth understanding of the various features and capabilities of SupportAssist. By utilizing this guide, you can gain insights into how SupportAssist can preemptively identify issues before they become critical, thus minimizing downtime and enhancing productivity. The guide is available in PDF format, making it easily accessible and convenient for you to reference at any time. Whether you are an IT administrator looking to streamline your support processes ...</code> | <code>How can the 'SupportAssist for Business PCs Administrator Guide' help a sales executive explain the benefits of proactive support to potential clients?</code> |
| <code>I'd be delighted to provide you with detailed insights into the Dell Pro Max Micro Desktop, a remarkable piece of technology designed for those who require high performance in a compact form factor. This desktop is particularly suited for professionals in industries like finance, engineering, and creative fields, where space is often at a premium but performance cannot be compromised. The Dell Pro Max Micro Desktop is engineered with the latest Intel® Core™ Ultra processors, which are known for their exceptional speed and efficiency. These processors are designed to handle demanding applications and multitasking with ease, making them ideal for users who need to run complex simulations, data analysis, or creative software. The design of the Dell Pro Max Micro Desktop is sleek and modern, allowing it to fit seamlessly into any office environment without taking up much space. Its compact size does not detract from its performance capabilities, making it a perfect choice for those who nee...</code> | <code>In what ways does the compact design of the Dell Pro Max Micro Desktop provide an advantage for sales executives targeting professionals in space-constrained office environments?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
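Equivalently, a minimal sketch of constructing this loss in code:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
base_loss = MultipleNegativesRankingLoss(model)

# Apply the base loss at each truncated dimensionality, weighted equally
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```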
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.3645 | 10 | 45.3094 | - | - | - | - | - |
| 0.7289 | 20 | 5.9832 | - | - | - | - | - |
| 1.0 | 28 | - | 0.6917 | 0.6907 | - | - | - |
| 0.3645 | 10 | 3.6462 | - | - | - | - | - |
| 0.7289 | 20 | 2.0542 | - | - | - | - | - |
| 1.0 | 28 | - | 0.7337 | 0.7279 | 0.7179 | 0.6914 | 0.6505 |
| 1.0729 | 30 | 2.1043 | - | - | - | - | - |
| 1.4374 | 40 | 2.5209 | - | - | - | - | - |
| 1.8018 | 50 | 2.3962 | - | - | - | - | - |
| 2.0 | 56 | - | 0.7434 | 0.7366 | 0.7255 | 0.6993 | 0.6601 |
| 2.1458 | 60 | 2.0125 | - | - | - | - | - |
| 2.5103 | 70 | 1.9498 | - | - | - | - | - |
| 2.8747 | 80 | 2.1095 | - | - | - | - | - |
| 3.0 | 84 | - | 0.7459 | 0.7411 | 0.7351 | 0.7055 | 0.6725 |
| 3.2187 | 90 | 1.6889 | - | - | - | - | - |
| 3.5831 | 100 | 1.3547 | - | - | - | - | - |
| 3.9476 | 110 | 1.9732 | - | - | - | - | - |
| **4.0** | **112** | **-** | **0.747** | **0.7431** | **0.7359** | **0.7081** | **0.6732** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mmwillet2/Dia_GGUF
|
mmwillet2
| 2025-06-18T20:34:24Z | 220 | 5 | null |
[
"gguf",
"text-to-speech",
"base_model:nari-labs/Dia-1.6B",
"base_model:quantized:nari-labs/Dia-1.6B",
"license:mit",
"region:us"
] |
text-to-speech
| 2025-05-08T18:34:44Z |
---
license: mit
base_model:
- nari-labs/Dia-1.6B
pipeline_tag: text-to-speech
---
## Purpose
The purpose of this repository is to store various [TTS.cpp](https://github.com/mmwillet/TTS.cpp) compatible GGUF encoded model files for the [Dia model](https://github.com/nari-labs/dia).
### Model Types
The model is currently available in 4-bit, 5-bit, and 8-bit quantized forms as well as F16 and F32 precision, and every variant is available with an F16 or F32 precision DAC audio codec. `Dia.gguf` is the non-quantized 32-bit floating point version; `Dia_Q4.gguf`, `Dia_Q5.gguf`, `Dia_Q8.gguf` and `Dia_F16.gguf` are the 4-bit, 5-bit, 8-bit and 16-bit versions respectively; and all versions with the suffix `_DAC_F16.gguf` are encoded with a 16-bit version of the DAC audio encoder.
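For convenience, a minimal sketch of fetching one of these files programmatically (the filename follows the list above):

```python
from huggingface_hub import hf_hub_download

# Download the 8-bit quantized variant; swap the filename for any listed above
model_path = hf_hub_download(repo_id="mmwillet2/Dia_GGUF", filename="Dia_Q8.gguf")
print(model_path)  # pass this path to tts-cli via --model-path
```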
## Dia
This page only contains the GGUF encoded model files of the original Dia model. For the original model please see the repository [here](https://github.com/nari-labs/dia).
## How to use
See the GitHub repo [here](https://github.com/mmwillet/TTS.cpp) for more information on general usage.
To compile TTS.cpp, simply clone the repository and run the following in its directory (CMake is required):
```bash
cmake -B build
cmake --build build --config Release
```
After compilation is complete, you can download a model file and generate speech to a WAV file from the same directory like so:
```bash
build/bin/tts-cli --model-path /model/path/to/downloaded_gguf_file.gguf --prompt "I am saying some words" --save-path /tmp/test.wav
```
|
Victoriatr07/final_model3_LoRA
|
Victoriatr07
| 2025-06-18T20:32:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T20:32:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
diegolacomba/multilingual-e5-small-legal-mnrl-4
|
diegolacomba
| 2025-06-18T20:30:45Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:79908",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-18T20:30:16Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:79908
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-small
widget:
- source_sentence: 'query: ¿Qué fechas son relevantes para la presentación y el ingreso
de las retenciones practicadas en diferentes momentos del año fiscal?'
sentences:
- 'passage: (…).”.
En cuanto a las obligaciones formales del retenedor y del obligado a ingresar
a cuenta, estas se recogen en el artículo 108 del RIRPF, que en relación con la
presentación de declaraciones establece lo siguiente:
1. “El sujeto obligado a retener y practicar ingresos a cuenta deberá presentar,
en los primeros veinte días naturales de los meses de abril, julio, octubre y
enero, declaración de las cantidades retenidas y de los ingresos a cuenta que
correspondan por el trimestre natural inmediato anterior, e ingresar su importe
en el Tesoro Público.
No obstante, la declaración e ingreso a que se refiere el párrafo anterior se
efectuará en los veinte primeros días naturales de cada mes, en relación con las
cantidades retenidas y los ingresos a cuenta que correspondan por el mes inmediato
anterior, cuando se trate de retenedores u obligados en los que concurran las
circunstancias a que se refieren los números 1.º y 2.º del apartado 3 del artículo
71 del Reglamento del Impuesto sobre el Valor Añadido, aprobado por el Real Decreto
1624/1992, de 29 de diciembre.
(…)
2. El retenedor u obligado a ingresar a cuenta deberá presentar en los primeros
veinte días naturales del mes de enero una declaración anual de las retenciones
e ingresos a cuenta efectuados. No obstante, en el caso de que esta declaración
se presente en soporte directamente legible por ordenador o haya sido generado
mediante la utilización, exclusivamente, de los correspondientes módulos de impresión
desarrollados, a estos efectos, por la Administración tributaria, el plazo de
presentación será el comprendido entre el 1 de enero y el 31 de enero del año
siguiente al del que corresponde dicha declaración.
(…).”.
Por su parte, el artículo 78.1 del RIRPF dispone que “con carácter general, la
obligación de retener nacerá en el momento en que se satisfagan o abonen las rentas
correspondientes”.'
- 'passage: Descripción de hechos: La mercantil consultante dedicada a la producción
de energía eléctrica va a adquirir plantas fotovoltaicas en funcionamiento directamente
o vía ampliación de capital.
Cuestión planteada: Sujeción al Impuesto sobre el Valor Añadido de las operaciones.'
- 'passage: Descripción de hechos: La consultante es una asociación internacional
sin ánimo de lucro belga que va a organizar una feria farmacéutica donde las empresas
asistentes podrán exponer y promover la venta de sus productos.El evento incluye
una conferencia de carácter médico o científico con el objeto de atraer a más
visitantes a quien las empresas farmacéuticas presentaran sus productos.
Cuestión planteada: Tipo impositivo aplicable a los servicios prestados por la
entidad consultante a efectos del Impuesto sobre el Valor Añadido.'
- source_sentence: 'query: ¿Cómo puedo corregir una factura cuando se realiza la devolución
de productos o envases en una compra posterior?'
sentences:
- 'passage: Descripción de hechos: El Banco de España es una entidad de derecho
público que realiza una serie de funciones o actividades derivadas de la fabricación
y distribución de billetes de euro.
Los billetes de euro son fabricados mediante un sistema de producción descentralizado
(pool) que implica que distintos Bancos Centrales contribuirán conjuntamente a
la satisfacción de las necesidades de billetes euro de los Estados miembros que
han adoptado dicha moneda, compartiendo dicha función. No obstante, cada Banco
Central no se va a responsabilizar de la producción del total de las denominaciones
de euro, sino que se limitará a uno o dos de dichas denominaciones. Del total
de los billetes producidos, una parte se destinará a ser moneda de curso legal
en el Estado correspondiente a dicho Banco Central, mientras que el resto se distribuirá
a los demás Bancos Centrales para que éstos los pongan en circulación en sus respectivos
Estados.
En el sistema de intercambio de billetes entre Bancos Centrales no se va satisfacer
cantidad alguna, ya que está previsto que el importe de los intercambios de billetes
(en términos de coste de fabricación) sea equivalente.
En el marco de este sistema de fabricación, el Banco de España ha firmado un acuerdo
de cooperación con el Banco de Italia para garantizar que estos Bancos Centrales
puedan producir las cuotas asignadas por el Banco Central Europeo. De esta forma
cualquiera de los Bancos Centrales podrá aceptar pedidos del otro Banco Central
firmante para cubrir sus necesidades.
Cuestión planteada: Si el acuerdo de colaboración entre los dos Bancos Centrales
supone una actividad económica en el Impuesto sobre el Valor Añadido y si debe
emitir factura por la entrega de billetes de euro consecuencia de dicho acuerdo.'
- 'passage: No obstante, cuando la modificación de la base imponible sea consecuencia
de la devolución de mercancías o de envases y embalajes que se realicen con ocasión
de un posterior suministro que tenga el mismo destinatario y por la operación
en la que se entregaron se hubiese expedido factura, no será necesaria la expedición
de una factura rectificativa, sino que se podrá practicar la rectificación en
la factura que se expida por dicho suministro, restando el importe de las mercancías
o de los envases y embalajes devueltos del importe de dicha operación posterior.
La rectificación se podrá realizar de este modo siempre que el tipo impositivo
aplicable a todas las operaciones sea el mismo, con independencia de que su resultado
sea positivo o negativo.
3. La expedición de la factura rectificativa deberá efectuarse tan pronto como
el obligado a expedirla tenga constancia de las circunstancias que, conforme a
los apartados anteriores, obligan a su expedición, siempre que no hubiesen transcurrido
cuatro años a partir del momento en que se devengó el Impuesto o, en su caso,
se produjeron las circunstancias a que se refiere el artículo 80 de la Ley del
Impuesto.
4. La rectificación se realizará mediante la emisión de una nueva factura en la
que se haga constar los datos identificativos de la factura rectificada. Se podrá
efectuar la rectificación de varias facturas en un único documento de rectificación,
siempre que se identifiquen todas las facturas rectificadas. No obstante, cuando
la modificación de la base imponible tenga su origen en la concesión de descuentos
o bonificaciones por volumen de operaciones, así como en los demás casos en que
así se autorice por el Departamento de Gestión Tributaria de la Agencia Estatal
de Administración Tributaria, no será necesaria la identificación de las facturas,
bastando la determinación del período a que se refieren.
El Departamento de Gestión Tributaria de la Agencia Estatal de Administración
Tributaria podrá autorizar otros procedimientos de rectificación de facturas,
previa solicitud de los interesados, cuando quede justificado por las prácticas
comerciales o administrativas del sector de actividad de que se trate.
5. La factura rectificativa deberá cumplir los requisitos que se establecen en
los artículos 6 ó 7, según proceda.'
- 'passage: 2º. Cuando el destinatario no sea un empresario o profesional actuando
como tal, siempre que los servicios se presten por un empresario o profesional
y la sede de su actividad económica o establecimiento permanente desde el que
los preste o, en su defecto, el lugar de su domicilio o residencia habitual, se
encuentre en el territorio de aplicación del Impuesto.”.
Por lo que se refiere a las reglas especiales, el artículo 70 de la Ley del Impuesto
establece en su apartado Uno.7º:
“Artículo 70. Lugar de realización de las prestaciones de servicios. Reglas especiales.
Uno. Se entenderán prestados en el territorio de aplicación del Impuesto los siguientes
servicios:
(…)
7º. Los que se enuncian a continuación, cuando se presten materialmente en dicho
territorio y su destinatario no sea un empresario o profesional actuando como
tal:
(…)
c) Los servicios relacionados con manifestaciones culturales, artísticas, deportivas,
científicas, educativas, recreativas, juegos de azar o similares, como las ferias
y exposiciones, incluyendo los servicios de organización de los mismos y los demás
servicios accesorios a los anteriores.”.
De conformidad con los artículos expuestos anteriormente, los servicios relacionados
con la realización de un test genético, objeto de consulta, se entenderán realizados
en el territorio de aplicación del Impuesto cuando el destinatario sea un empresario
o profesional establecido en dicho territorio, o cuando el destinatario no sea
empresario o profesional y se presten materialmente en el mismo.
Por lo tanto, en el caso objeto de consulta, el servicio de realización de un
test genético se entiende prestado en todo caso en el territorio de aplicación
del Impuesto, sede del prestador del servicio, dado que los destinatarios son
particulares, quedando por tanto sujeto al Impuesto sobre el Valor Añadido.
4.- Lo que comunico a Vd. con efectos vinculantes, conforme a lo dispuesto en
el apartado 1 del artículo 89 de la Ley 58/2003, de 17 de diciembre, General Tributaria.'
- source_sentence: 'query: ¿Qué criterios deben cumplirse para que una operación de
transferencia de participaciones esté exenta de ciertos impuestos?'
sentences:
- 'passage: En el supuesto planteado, el activo de la entidad B, cuyas participaciones
se transmiten, está integrado en más del 50% por inmuebles afectos a actividades
económicas, el arrendamiento de los mismos; además la entidad consultante no adquiriría
participaciones de la entidad B que no tuviera ya antes de la operación de manera
indirecta, a través de su participación del 100% en la sociedad A, por lo que
debe entenderse que no concurrirían los requisitos exigidos en al apartado 2 del
artículo 314 del Texto Refundido de la LMV para conformar el presupuesto de hecho
previsto en ninguno de los tres incisos –a), b) c)– de dicho apartado.
Por lo tanto, conforme a la información proporcionada por la entidad consultante
y sin tener en cuenta otras circunstancias no mencionadas y que pudieran tener
relevancia en la calificación de la operación objeto de consulta, en principio,
no será de aplicación la excepción a la exención prevista en el apartado 2 del
artículo 314 del Texto Refundido de la LMV en los supuestos planteados y, en consecuencia,
la transmisión de valores en cuestión quedará exenta del Impuesto del Impuesto
sobre el Valor Añadido o del Impuesto sobre Transmisiones Patrimoniales y Actos
Jurídicos Documentados, al que está sujeta.
Lo que comunico a Vd. con efectos vinculantes, conforme a lo dispuesto en el apartado
1 del artículo 89 de la Ley 58/2003, de 17 de diciembre, General Tributaria.'
- 'passage: Asimismo, según doctrina reiterada de esta Dirección General, a efectos
de la exención prevista en el artículo 20.Uno.9º de la Ley 37/1992, tendrán la
consideración de centros educativos aquellas unidades económicas integradas por
un conjunto de medios materiales y humanos ordenados con carácter de permanencia
con la finalidad de prestar de manera continuada servicios de enseñanza.
A tales efectos, no es preciso que el centro educativo disponga de un local determinado
en el que se realice materialmente la actividad la enseñanza, siendo suficiente
con que cuente con un conjunto ordenado de medios materiales y humanos destinados
a la prestación del servicio de enseñanza.
b) Un requisito objetivo. Como ha señalado el Tribunal de Justicia, la enseñanza
es aquella actividad que supone la transmisión de conocimientos y de competencias
entre un profesor y los estudiantes, acompañada, además, de un conjunto de otros
elementos que incluyen los correspondientes a las relaciones que se establecen
entre profesores y estudiantes y los que componen el marco organizativo del centro
en el que se imparte la formación, siempre y cuando dichas actividades no revistan
un carácter meramente recreativo.
La exención no será aplicable, a los servicios de enseñanza que versen sobre materias
no incluidas en alguno de los planes de estudios de cualquiera de los niveles
o grados del sistema educativo español.
La competencia para determinar si las materias que son objeto de enseñanza por
un determinado centro educativo se encuentran o no incluidas en algún plan de
estudios del sistema educativo a efectos de la aplicación de la mencionada exención,
corresponde al Ministerio de Educación, Cultura y Deporte, o la Comunidad Autónoma
correspondiente.
De acuerdo con los antecedentes obrantes en este Centro Directivo, la enseñanza
de materias como violín, piano, guitarra, canto, coral, banda, viento y madera,
percusión, viento metal, danza española, sevillanas, música y movimiento, lenguaje
musical, pintura y manualidades, teatro y expresión, técnico de luz y sonido,
se encuentran en los planes de estudios del sistema educativo español. Por tanto,
los citados servicios educativos han de considerarse sujetos y exentos del Impuesto
sobre el Valor Añadido.'
- 'passage: Descripción de hechos: El consultante ha adquirido de su promotor una
vivienda que desde su construcción ha estado ofrecida en arrendamiento con opción
de compra sin que los arrendatarios ejercieran dicha opción.
Cuestión planteada: Tributación de la adquisición de la vivienda por el consultante
en el ámbito del Impuesto sobre el Valor Añadido.'
- source_sentence: 'query: ¿Cuál es la incidencia del Impuesto sobre el Valor Añadido
cuando un ayuntamiento recibe bienes en pago de una deuda?'
sentences:
- 'passage: Descripción de hechos: Operaciones realizadas por las Comunidades de
Regantes.
Cuestión planteada: Sujeción al IVA. Deducibilidad de las cuotas soportadas.'
- 'passage: Descripción de hechos: El consultante es un Ayuntamiento que va a recibir
de una empresa municipal parcelas urbanizadas en pago de una deuda que tiene contraída
con dicho Ayuntamiento por los pagos que el mismo ha realizado en su nombre por
gastos corrientes de la sociedad tales como nóminas o préstamos.
Cuestión planteada: Tributación de la operación a efectos del Impuesto sobre el
Valor Añadido.'
- 'passage: Descripción de hechos: El Ayuntamiento consultante gestiona una piscina
y un complejo deportivo municipal mediante el cobro de un precio público.
Cuestión planteada: - Sujeción y, en su caso, exención de la operación en el ámbito
del IVA.'
- source_sentence: 'query: ¿En qué casos las actividades hípicas se consideran prestaciones
independientes que no están sujetas al impuesto en territorio español?'
sentences:
- 'passage: La consultante es la titular de la plataforma donde se desarrolla los
juegos en línea y es la creadora de las soluciones de juego generadas por números
aleatorios si bien es importante destacar que su actividad se limita a proporcionar
a los operadores de juego los medios tecnológicos para que estos operen en la
actividad de juego en línea de forma que no tiene responsabilidad alguna frente
a los usuarios/jugadores ni las apuestas efectuados por los mismos.
La entidad consultante, en definitiva, no tiene como interlocutor al usuario/jugador
sino al operador del juego en línea que contrata sus servicios tecnológicos y/o
de software. Los usuarios/jugadores realizan la apuesta a través de la propia
web del operador de juego el cual se servirá del software o medios tecnológicos
proporcionados por la consultante.
Del escrito de consulta parece deducirse que la consultante se estaría planteando
la grabación en sus estudios y la retransmisión de los eventos de juego en vivo
a dos entidades del mismo grupo (denominados servicios de distribución cinematográfica
y de videos), las cuales serían las que prestarían los servicios de casino en
vivo a los operadores de juego o bien a prestar directamente dichos servicios
a los citados operadores.
De acuerdo con lo anterior, los servicios objeto de consulta se entienden realizados
en el territorio de aplicación del Impuesto y estarán sujetos al Impuesto sobre
el Valor Añadido cuando el destinatario del servicio sea un empresario o profesional
actuando como tal y tenga en dicho ámbito espacial la sede de actividad económica
o cuente en el mismo con un establecimiento permanente o, en su defecto, su residencia
o domicilio habitual siempre que los servicios en cuestión tengan por destinatarios
a esa sede, establecimiento o domicilio.
En consecuencia con todo lo anterior, los servicios prestados por la consultante
en el primer escenario descrito a las otras dos entidades del grupo (servicios
de distribución cinematográfica y de video), establecidas en otros Estados Miembros,
no estarán sujetas al Impuesto sobre el Valor Añadido.
De acuerdo con las reglas armonizadas sobre el lugar de realización será, en su
caso, los Estados Miembro en los que estén establecidas dichas entidades el lugar
en que se deban entender localizadas las prestaciones de servicios objeto de consulta.'
- 'passage: Contestación completa: 1.- De acuerdo con lo establecido en el artículo
4, apartado uno de la Ley 37/1992, de 28 de diciembre, del Impuesto sobre el Valor
Añadido (BOE de 29 de diciembre), están sujetas al citado tributo las entregas
de bienes y prestaciones de servicios realizadas en el ámbito espacial del Impuesto
por empresarios o profesionales, a título oneroso con carácter habitual u ocasional,
en el desarrollo de su actividad empresarial o profesional.
Por otro lado, el artículo 5, apartado uno, letra a) de la citada Ley, declara
que a efectos de la misma, se reputarán empresarios o profesionales las personas
o entidades que realicen las actividades empresariales o profesionales definidas
en el apartado siguiente de este artículo.
Según el apartado dos de dicho artículo 5 "son actividades empresariales o profesionales
las que impliquen la ordenación por cuenta propia de factores de producción materiales
y humanos o de uno de ellos, con la finalidad de intervenir en la producción o
distribución de bienes o servicios.
En particular, tienen esta consideración las actividades extractivas, de fabricación,
comercio y prestación de servicios, incluidas las de artesanía, agrícolas, forestales,
ganaderas, pesqueras, de construcción, mineras y el ejercicio de profesiones liberales
y artísticas.".
De acuerdo con el artículo 11 de la Ley 37/1992:
“Uno. A los efectos del Impuesto sobre el Valor Añadido, se entenderá por prestación
de servicios toda operación sujeta al citado tributo que, de acuerdo con esta
Ley, no tenga la consideración de entrega, adquisición intracomunitaria o importación
de bienes.
Dos. En particular, se considerarán prestaciones de servicios:
1. º El ejercicio independiente de una profesión, arte u oficio.
(…).”.
2.- Por su parte, el artículo 90, apartado uno de la Ley 37/1992, dispone que
el Impuesto se exigirá al tipo del 21 por ciento, salvo lo dispuesto en el artículo
siguiente.
El artículo 91, apartado uno.2, número 7º de la Ley del Impuesto, dispone que
se aplicará el tipo reducido del 10 por ciento a:'
- 'passage: Dicha regla también sería de aplicación a las actividades hípicas si
tuviesen la consideración de prestaciones accesorias a las de alojamiento, en
los términos expuestos en el apartado anterior de la presente contestación.
Por el contrario, si los servicios de actividades hípicas prestadas a quien tiene
la condición de empresario o profesional a efectos del Impuesto, tuvieran la consideración
de prestaciones independientes de los servicios de alojamiento en los términos
expuestos en el apartado anterior de la presente contestación, los mismos no se
entenderían realizados en el territorio de aplicación del Impuesto, en virtud
de lo dispuesto en el artículo 69.Uno.1º de la Ley del Impuesto, transcrito anteriormente,
y, por lo tanto, no se encontrarán sujetos al Impuesto sobre el Valor Añadido.
4.- Por otra parte, se informa de que, en relación con las dudas suscitadas sobre
el lugar de realización de los hechos imponibles, entrega de bienes y prestaciones
de servicios, la Agencia Estatal de Administración Tributaria ha incorporado en
los portales del Impuesto sobre el Valor Añadido (IVA) y Suministro Inmediato
de Información del IVA (SII) un nuevo servicio de ayuda e información al contribuyente
denominado “Localizador”, creado para resolver las principales dudas planteadas
cuando el empresario o profesional realiza este tipo de operaciones con clientes
o proveedores no establecidos en el territorio de aplicación del Impuesto.
En concreto, esta herramienta permite conocer el lugar de realización de las entregas
de bienes, distinguiendo entre entregas interiores, intracomunitarias y con destino
a terceros países.
En concreto, puede obtenerse información sobre donde se localiza la entrega de
un bien, si está sujeta o exenta del Impuesto sobre el Valor Añadido, quién debe
declarar el Impuesto devengado en la operación o cómo se declara en caso de no
estar sujeta o exenta en el territorio de aplicación del impuesto español; también
indicará si en la factura se debe o no repercutir dicho impuesto.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: InformationRetrievalEvaluator
type: InformationRetrievalEvaluator
metrics:
- type: cosine_accuracy@1
value: 0.3015797600665162
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.448509324147761
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5216771588074594
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6180068891792374
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3015797600665162
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.14950310804925365
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10433543176149186
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06180068891792375
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3015797600665162
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.448509324147761
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5216771588074594
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6180068891792374
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.44795233495559494
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3949425383250621
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.40603823256127575
name: Cosine Map@100
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision c007d7ef6fd86656326059b28395a7a03a7c5846 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("diegolacomba/multilingual-e5-small-legal-mnrl-4")
# Run inference
sentences = [
'query: ¿En qué casos las actividades hípicas se consideran prestaciones independientes que no están sujetas al impuesto en territorio español?',
'passage: Dicha regla también sería de aplicación a las actividades hípicas si tuviesen la consideración de prestaciones accesorias a las de alojamiento, en los términos expuestos en el apartado anterior de la presente contestación.\nPor el contrario, si los servicios de actividades hípicas prestadas a quien tiene la condición de empresario o profesional a efectos del Impuesto, tuvieran la consideración de prestaciones independientes de los servicios de alojamiento en los términos expuestos en el apartado anterior de la presente contestación, los mismos no se entenderían realizados en el territorio de aplicación del Impuesto, en virtud de lo dispuesto en el artículo 69.Uno.1º de la Ley del Impuesto, transcrito anteriormente, y, por lo tanto, no se encontrarán sujetos al Impuesto sobre el Valor Añadido.\n4.- Por otra parte, se informa de que, en relación con las dudas suscitadas sobre el lugar de realización de los hechos imponibles, entrega de bienes y prestaciones de servicios, la Agencia Estatal de Administración Tributaria ha incorporado en los portales del Impuesto sobre el Valor Añadido (IVA) y Suministro Inmediato de Información del IVA (SII) un nuevo servicio de ayuda e información al contribuyente denominado “Localizador”, creado para resolver las principales dudas planteadas cuando el empresario o profesional realiza este tipo de operaciones con clientes o proveedores no establecidos en el territorio de aplicación del Impuesto.\nEn concreto, esta herramienta permite conocer el lugar de realización de las entregas de bienes, distinguiendo entre entregas interiores, intracomunitarias y con destino a terceros países.\nEn concreto, puede obtenerse información sobre donde se localiza la entrega de un bien, si está sujeta o exenta del Impuesto sobre el Valor Añadido, quién debe declarar el Impuesto devengado en la operación o cómo se declara en caso de no estar sujeta o exenta en el territorio de aplicación del impuesto español; también indicará si en la factura se debe o no repercutir dicho impuesto.',
'passage: La consultante es la titular de la plataforma donde se desarrolla los juegos en línea y es la creadora de las soluciones de juego generadas por números aleatorios si bien es importante destacar que su actividad se limita a proporcionar a los operadores de juego los medios tecnológicos para que estos operen en la actividad de juego en línea de forma que no tiene responsabilidad alguna frente a los usuarios/jugadores ni las apuestas efectuados por los mismos.\nLa entidad consultante, en definitiva, no tiene como interlocutor al usuario/jugador sino al operador del juego en línea que contrata sus servicios tecnológicos y/o de software. Los usuarios/jugadores realizan la apuesta a través de la propia web del operador de juego el cual se servirá del software o medios tecnológicos proporcionados por la consultante.\nDel escrito de consulta parece deducirse que la consultante se estaría planteando la grabación en sus estudios y la retransmisión de los eventos de juego en vivo a dos entidades del mismo grupo (denominados servicios de distribución cinematográfica y de videos), las cuales serían las que prestarían los servicios de casino en vivo a los operadores de juego o bien a prestar directamente dichos servicios a los citados operadores.\nDe acuerdo con lo anterior, los servicios objeto de consulta se entienden realizados en el territorio de aplicación del Impuesto y estarán sujetos al Impuesto sobre el Valor Añadido cuando el destinatario del servicio sea un empresario o profesional actuando como tal y tenga en dicho ámbito espacial la sede de actividad económica o cuente en el mismo con un establecimiento permanente o, en su defecto, su residencia o domicilio habitual siempre que los servicios en cuestión tengan por destinatarios a esa sede, establecimiento o domicilio.\nEn consecuencia con todo lo anterior, los servicios prestados por la consultante en el primer escenario descrito a las otras dos entidades del grupo (servicios de distribución cinematográfica y de video), establecidas en otros Estados Miembros, no estarán sujetas al Impuesto sobre el Valor Añadido.\nDe acuerdo con las reglas armonizadas sobre el lugar de realización será, en su caso, los Estados Miembro en los que estén establecidas dichas entidades el lugar en que se deban entender localizadas las prestaciones de servicios objeto de consulta.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
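Since this is an E5-style model, keep the `query: ` / `passage: ` prefixes at inference time. A minimal retrieval sketch using the library's `semantic_search` utility (the query and passages below are illustrative, not evaluation data):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("diegolacomba/multilingual-e5-small-legal-mnrl-4")

# Illustrative corpus -- real passages would come from your own document store.
passages = [
    "passage: La factura rectificativa deberá cumplir los requisitos establecidos.",
    "passage: La obligación de retener nacerá en el momento en que se abonen las rentas.",
]
query = "query: ¿Cuándo nace la obligación de retener?"

corpus_embeddings = model.encode(passages, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Top-k cosine-similarity search over the encoded passages
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(passages[hit["corpus_id"]], round(hit["score"], 4))
```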
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `InformationRetrievalEvaluator`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.3016 |
| cosine_accuracy@3 | 0.4485 |
| cosine_accuracy@5 | 0.5217 |
| cosine_accuracy@10 | 0.618 |
| cosine_precision@1 | 0.3016 |
| cosine_precision@3 | 0.1495 |
| cosine_precision@5 | 0.1043 |
| cosine_precision@10 | 0.0618 |
| cosine_recall@1 | 0.3016 |
| cosine_recall@3 | 0.4485 |
| cosine_recall@5 | 0.5217 |
| cosine_recall@10 | 0.618 |
| **cosine_ndcg@10** | **0.448** |
| cosine_mrr@10 | 0.3949 |
| cosine_map@100 | 0.406 |
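The same evaluator can be re-run on your own data. A minimal sketch (the queries, corpus, and relevance judgments below are placeholders, not the original evaluation set):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("diegolacomba/multilingual-e5-small-legal-mnrl-4")

# Placeholder data -- substitute your own IDs and texts.
queries = {"q1": "query: ¿Cuándo nace la obligación de retener?"}
corpus = {"d1": "passage: La obligación de retener nacerá cuando se abonen las rentas."}
relevant_docs = {"q1": {"d1"}}  # maps each query ID to the set of relevant doc IDs

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="legal-ir")
print(evaluator(model))  # dict of accuracy/precision/recall/NDCG/MRR/MAP at various k
```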
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 79,908 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 30.77 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 342.89 tokens</li><li>max: 502 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: ¿Cómo se determina si una persona con discapacidad califica para beneficios fiscales en la compra de ciertos dispositivos médicos según la normativa vigente?</code> | <code>passage: 3.- Por otro lado, el artículo 91, apartado dos.1, número 5º de la citada Ley 37/1992, establece que se aplicará el tipo impositivo del 4 por ciento a las entregas, adquisiciones intracomunitarias e importaciones de prótesis, ortesis e implantes internos para personas con discapacidad.<br>A tal efecto, el último párrafo del número 4º del mencionado artículo 91, apartado dos.1 de dicha Ley, declara lo siguiente:<br>"A efectos de este apartado dos, se considerarán personas con discapacidad aquellas con un grado de discapacidad igual o superior al 33 por ciento. El grado de discapacidad deberá acreditarse mediante certificación o resolución expedida por el Instituto de Mayores y Servicios Sociales o el órgano competente de la comunidad autónoma.".<br>En relación con la aplicación del tipo impositivo del 4 por ciento en las entregas de gafas graduadas a personas con una discapacidad igual o superior al 33 por ciento, es criterio de este Centro directivo, entre otras, en la contestación vin...</code> |
| <code>query: ¿Qué aspectos deben considerarse al evaluar la caución establecida en la legislación del IVA?</code> | <code>passage: Descripción de hechos: La sociedad consultante con sede en el Reino Unido tiene como actividad el desarrollo de soluciones de software para empresas. La consultante dispone de una sucursal en el territorio español de aplicación del Impuesto. La sucursal no lleva a cabo actividades de venta, ni realiza entregas de bienes ni prestaciones de servicios en España. La sociedad consultante solicita devolución del impuesto soportado por el procedimiento de los artículos 119 y 119 bis de la Ley del Impuesto.<br><br>Cuestión planteada: Determinación del importe y naturaleza de la caución contemplada en el artículo 119 bis de la Ley del Impuesto sobre el Valor Añadido.</code> |
| <code>query: ¿Cómo afecta una redistribución de participaciones en una comunidad de bienes a la tributación de actos jurídicos?</code> | <code>passage: Si la Comunidad Autónoma no hubiese aprobado el tipo a que se refiere el párrafo anterior, se aplicará el 0,50 por 100, en cuanto a tales actos o contratos.”.<br>De acuerdo con el artículo 2.1 transcrito, para determinar la tributación correspondiente al supuesto planteado, debe analizarse en primer lugar la naturaleza jurídica de la operación que se pretende realizar. De la aplicación de los anteriores preceptos a los hechos expuestos se deriva claramente que la operación que se pretende llevar acabo no supone una disolución de la comunidad de bienes- que claramente se mantiene en los tres inmuebles que van a continuar en común- produciéndose, en todo caso, lo a veces se denomina una “disolución parcial”, pero que realmente no es una disolución o, en cualquier caso, no lo es a efectos del Impuesto sobre Transmisiones Patrimoniales y Actos Jurídicos Documentados. La operación que van a realizar consiste en una redistribución de las participaciones de los comuneros que antes osten...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
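For reference, a minimal fine-tuning sketch with this loss (the single training pair and trainer defaults are illustrative, not the original run's configuration):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Illustrative (anchor, positive) pairs in the same query/passage format.
train_dataset = Dataset.from_dict({
    "anchor": ["query: ¿Cuándo nace la obligación de retener?"],
    "positive": ["passage: La obligación de retener nacerá cuando se abonen las rentas."],
})

# In-batch negatives: every other positive in the batch serves as a negative.
loss = MultipleNegativesRankingLoss(model, scale=20.0)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```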
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 8
- `learning_rate`: 3e-05
- `num_train_epochs`: 12
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 12
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | InformationRetrievalEvaluator_cosine_ndcg@10 |
|:-----------:|:--------:|:-------------:|:--------------------------------------------:|
| None | 0 | - | 0.2352 |
| 0.6405 | 100 | 12.8537 | 0.3556 |
| 1.2754 | 200 | 2.7202 | 0.3903 |
| 1.9159 | 300 | 2.1495 | 0.4101 |
| 2.5508 | 400 | 1.781 | 0.4193 |
| 3.1857 | 500 | 1.6525 | 0.4270 |
| 3.8263 | 600 | 1.5313 | 0.4304 |
| 4.4612 | 700 | 1.4343 | 0.4327 |
| 5.0961 | 800 | 1.3573 | 0.4354 |
| 5.7366 | 900 | 1.2671 | 0.4398 |
| 6.3715 | 1000 | 1.2604 | 0.4421 |
| 7.0064 | 1100 | 1.1753 | 0.4410 |
| 7.6469 | 1200 | 1.1491 | 0.4463 |
| 8.2818 | 1300 | 1.1408 | 0.4462 |
| 8.9223 | 1400 | 1.1175 | 0.4464 |
| 9.5572 | 1500 | 1.1024 | 0.4464 |
| **10.1922** | **1600** | **1.0748** | **0.448** |
| 10.8327 | 1700 | 1.0609 | 0.4468 |
| 11.4676 | 1800 | 1.0651 | 0.4469 |
| 12.0 | 1884 | - | 0.4480 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
luis-senis/Ultima.coleccion.18.luis.senis.video.viral.en.twitter
|
luis-senis
| 2025-06-18T20:29:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T20:26:44Z |
|
narlanj72/qwen2-5-3b-instruct-ft7k
|
narlanj72
| 2025-06-18T20:24:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T17:59:40Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen2-5-3b-instruct-ft7k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-5-3b-instruct-ft7k
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="narlanj72/qwen2-5-3b-instruct-ft7k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
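The exact training script is not published; below is a minimal, text-only TRL SFT sketch of the general pattern (the dataset contents are placeholders, and a real run on this vision-language model would additionally need image inputs and a vision-aware collator):

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Placeholder chat data -- the actual fine-tuning set is not published.
dataset = Dataset.from_dict({
    "messages": [[
        {"role": "user", "content": "Summarize the document."},
        {"role": "assistant", "content": "..."},
    ]]
})

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-VL-3B-Instruct",  # loaded by name; see TRL docs for VLM specifics
    train_dataset=dataset,
    args=SFTConfig(output_dir="qwen2-5-3b-instruct-ft7k"),
)
trainer.train()
```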
### Framework versions
- TRL: 0.12.0
- Transformers: 4.49.0
- Pytorch: 2.3.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ekiprop/roberta-sst2-lora-ep20-lr0p0003-bs16-2025-06-18-1931
|
ekiprop
| 2025-06-18T20:19:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-06-18T19:31:06Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-sst2-lora-ep20-lr0p0003-bs16-2025-06-18-1931
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-sst2-lora-ep20-lr0p0003-bs16-2025-06-18-1931
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2629
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
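As a LoRA adapter, this model must be loaded on top of its `roberta-base` backbone before use. A minimal sketch (the two SST-2 labels and their [negative, positive] order are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(
    base, "ekiprop/roberta-sst2-lora-ep20-lr0p0003-bs16-2025-06-18-1931"
)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("A thoroughly enjoyable film.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # assumed label order: [negative, positive]
```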
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.2796 | 0.1188 | 500 | 0.2257 | 0.9346 |
| 0.2442 | 0.2375 | 1000 | 0.2275 | 0.9289 |
| 0.222 | 0.3563 | 1500 | 0.2522 | 0.9232 |
| 0.2816 | 0.4751 | 2000 | 0.2132 | 0.9232 |
| 0.2671 | 0.5938 | 2500 | 0.2186 | 0.9255 |
| 0.2607 | 0.7126 | 3000 | 0.2121 | 0.9255 |
| 0.2613 | 0.8314 | 3500 | 0.2089 | 0.9266 |
| 0.2411 | 0.9501 | 4000 | 0.1946 | 0.9289 |
| 0.216 | 1.0689 | 4500 | 0.2464 | 0.9369 |
| 0.2358 | 1.1876 | 5000 | 0.2689 | 0.9186 |
| 0.2154 | 1.3064 | 5500 | 0.2996 | 0.9186 |
| 0.2085 | 1.4252 | 6000 | 0.1983 | 0.9300 |
| 0.2251 | 1.5439 | 6500 | 0.2008 | 0.9278 |
| 0.2047 | 1.6627 | 7000 | 0.2212 | 0.9300 |
| 0.2165 | 1.7815 | 7500 | 0.2240 | 0.9300 |
| 0.2277 | 1.9002 | 8000 | 0.1858 | 0.9358 |
| 0.1863 | 2.0190 | 8500 | 0.2129 | 0.9404 |
| 0.2115 | 2.1378 | 9000 | 0.2012 | 0.9392 |
| 0.1825 | 2.2565 | 9500 | 0.2797 | 0.9346 |
| 0.2059 | 2.3753 | 10000 | 0.1943 | 0.9381 |
| 0.1843 | 2.4941 | 10500 | 0.2015 | 0.9369 |
| 0.2005 | 2.6128 | 11000 | 0.2016 | 0.9346 |
| 0.1678 | 2.7316 | 11500 | 0.1839 | 0.9404 |
| 0.1891 | 2.8504 | 12000 | 0.2332 | 0.9335 |
| 0.1656 | 2.9691 | 12500 | 0.1766 | 0.9461 |
| 0.1469 | 3.0879 | 13000 | 0.2328 | 0.9427 |
| 0.1829 | 3.2067 | 13500 | 0.2156 | 0.9484 |
| 0.1841 | 3.3254 | 14000 | 0.2076 | 0.9335 |
| 0.1764 | 3.4442 | 14500 | 0.2369 | 0.9392 |
| 0.1689 | 3.5629 | 15000 | 0.1874 | 0.9507 |
| 0.1856 | 3.6817 | 15500 | 0.2037 | 0.9392 |
| 0.1582 | 3.8005 | 16000 | 0.2409 | 0.9381 |
| 0.1832 | 3.9192 | 16500 | 0.2157 | 0.9392 |
| 0.1891 | 4.0380 | 17000 | 0.1928 | 0.9415 |
| 0.1623 | 4.1568 | 17500 | 0.2530 | 0.9266 |
| 0.1555 | 4.2755 | 18000 | 0.2824 | 0.9300 |
| 0.1657 | 4.3943 | 18500 | 0.2387 | 0.9369 |
| 0.1708 | 4.5131 | 19000 | 0.2647 | 0.9381 |
| 0.1595 | 4.6318 | 19500 | 0.2078 | 0.9369 |
| 0.1624 | 4.7506 | 20000 | 0.2590 | 0.9404 |
| 0.1463 | 4.8694 | 20500 | 0.2556 | 0.9404 |
| 0.1631 | 4.9881 | 21000 | 0.2207 | 0.9369 |
| 0.1579 | 5.1069 | 21500 | 0.2273 | 0.9369 |
| 0.163 | 5.2257 | 22000 | 0.2452 | 0.9335 |
| 0.1635 | 5.3444 | 22500 | 0.2629 | 0.9323 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.1.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
morturr/Mistral-7B-v0.1-dadjokes-seed-28-2025-06-18
|
morturr
| 2025-06-18T20:15:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T20:15:15Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-dadjokes-seed-28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-dadjokes-seed-28-2025-06-18
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
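A minimal loading-and-generation sketch for this LoRA adapter (the prompt is illustrative; the training data and any prompt format are not documented):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "morturr/Mistral-7B-v0.1-dadjokes-seed-28-2025-06-18")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tokenizer("Tell me a dad joke about computers.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```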
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
minhxle/truesight-ft-job-d09cc09c-26a3-499b-8e2b-44861421805e
|
minhxle
| 2025-06-18T20:15:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T20:15:15Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
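A minimal loading sketch with Unsloth (`max_seq_length` is an assumption; `load_in_4bit` matches the bnb-4bit base checkpoint):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "minhxle/truesight-ft-job-d09cc09c-26a3-499b-8e2b-44861421805e",
    max_seq_length=2048,  # assumption -- adjust to your context needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```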
|
New-tutorial-Cikgu-Fadhilah-18-Viral-Video/FULL.VIDEO.Cikgu.Fadhilah.Viral.Video.Tutorial.Official
|
New-tutorial-Cikgu-Fadhilah-18-Viral-Video
| 2025-06-18T20:13:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T20:13:31Z |
|
pronoobie/indic_conformer_hi_float16_onnx_256_vocab
|
pronoobie
| 2025-06-18T20:12:56Z | 0 | 0 | null |
[
"onnx",
"automatic-speech-recognition",
"hi",
"nd",
"base_model:ai4bharat/indic-conformer-600m-multilingual",
"base_model:quantized:ai4bharat/indic-conformer-600m-multilingual",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2025-06-11T13:57:48Z |
---
license: mit
language:
- hi
- nd
metrics:
- wer
base_model:
- ai4bharat/indic-conformer-600m-multilingual
pipeline_tag: automatic-speech-recognition
---
Kudos to AI4Bharat for training a Hindi-specific speech recognition model.
Visit: https://huggingface.co/ai4bharat/indicconformer_stt_hi_hybrid_ctc_rnnt_large
There is active development going on in this repository:
https://github.com/deepanshu-yadav/Quantize_speech_Recognition_For_Hindi
This repository aims to
1. quantize the .nemo model for both the CTC and RNNT versions,
2. remove NeMo-specific dependencies, and
3. use the converted ONNX model for both offline and online (microphone) use.
---
Converted for both CTC and RNNT versions.
---
Notebooks are already provided for the conversion to float16 models:
`onnxconversionCTC.ipynb` for the CTC version and
`onnxconversionRNNT.ipynb` for the RNNT version.
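The conversion itself follows the standard ONNX float16 path. A rough sketch of the core step (paths are placeholders; see the notebooks for the full export pipeline):

```python
import onnx
from onnxconverter_common import float16

# Placeholder paths -- the notebooks handle the NeMo-to-ONNX export beforehand.
model_fp32 = onnx.load("indic_conformer_ctc_fp32.onnx")
model_fp16 = float16.convert_float_to_float16(model_fp32)
onnx.save(model_fp16, "indic_conformer_ctc_fp16.onnx")
```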
# How to perform inference
Install the dependencies:
```
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```
After that, install from the requirements file:
```
pip install -r requirements.txt
```
## For CTC float16 (non-streaming) offline mode
Now we can run inference:
`python offline_ctc_float16_inference.py`
Note: a sample file has already been provided.
Expected Output:
```
Audio features shape: (1, 80, 1413), Length: [1413]
Transcription: शिवपाल की यह टिप्पणी फ़िल्म काल्या के डायलॉग से मिलतीजुलती है शिवपाल चाहते हैं कि मुलायम पारती के मुखिया फिर से बने फ़िलहाल सपा अध्यक्ष अखिलेश यादव हैं पिता से पार्ट की कमान छीनी थी
```
## For CTC float16 (non-streaming) live mode
You can also perform transcription live from your sound device.
Execute:
`python realtime_ctc_float16_non_streaming.py`
Expected Output:
```
Using cache found in C:\Users\DEEPANSHU/.cache\torch\hub\snakers4_silero-vad_master
Listening... (Speak into the microphone)
Press 'q' to stop streaming...
C:\Users\DEEPANSHU\Desktop\automation\speech\hindi\git_inference_push\realtime_ctc_float16_non_streaming.py:55: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\torch\csrc\utils\tensor_numpy.cpp:209.)
audio_tensor = torch.from_numpy(audio_float32)
Speech detected, recording...
Silence detected, transcribing...
Transcription: तो कैसे हैं आप सब
Listening...
Speech detected, recording...
Silence detected, transcribing...
Transcription: आपसे मिल के अच्छा लगा
Listening...
```
## For RNNT
### Real-time (microphone)
This is the float16 RNNT version in non-streaming mode.
`python realtime_rnnt_float16_non_streaming.py`
### Offline, file-based
This is the float16 RNNT version in non-streaming mode.
`python offline_rnnt_float16_non_streaming.py`
|
VIDEOS-Arovi-Nusrat-Ridhi-18-Viral-Video/FULL.VIDEO.Arovi.Nusrat.Ridhi.Viral.Video.Tutorial.Official
|
VIDEOS-Arovi-Nusrat-Ridhi-18-Viral-Video
| 2025-06-18T20:09:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T20:09:42Z |
|
beyondKapil/ppo-LunarLander-v2
|
beyondKapil
| 2025-06-18T20:00:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T19:59:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.92 +/- 22.30
name: mean_reward
verified: false
---
# **MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption -- check the repo's file listing):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the model-index name; verify against the repo files.
checkpoint = load_from_hub("beyondKapil/ppo-LunarLander-v2", "MlpPolicy.zip")
model = PPO.load(checkpoint)
```
|
nnilayy/deap-dominance-binary-classification-Kfold-1
|
nnilayy
| 2025-06-18T19:54:56Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T19:54:55Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
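A minimal loading sketch for a `PyTorchModelHubMixin` checkpoint. The class below is a hypothetical stand-in; loading only succeeds with the original architecture whose config was pushed alongside the weights:
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class BinaryClassifier(nn.Module, PyTorchModelHubMixin):  # hypothetical architecture
    def __init__(self, in_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, x):
        return self.net(x)

# from_pretrained is provided by the mixin: it downloads the stored config
# and weights and instantiates the class with them.
model = BinaryClassifier.from_pretrained(
    "nnilayy/deap-dominance-binary-classification-Kfold-1"
)
```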
|
AlignmentResearch/pineapple-oskar_003a_qwen32b_sft
|
AlignmentResearch
| 2025-06-18T19:53:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T19:50:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb1-seed7-2025-06-18
|
morturr
| 2025-06-18T19:52:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T19:52:35Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb1-seed7-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb1-seed7-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
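A minimal sketch for loading this LoRA adapter on top of its base model (assumes access to the gated meta-llama weights):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb1-seed7-2025-06-18"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```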
|
JesseLiu/qwen25-3b-base-pagerank-naive-refine-grpo-lora
|
JesseLiu
| 2025-06-18T19:50:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"region:us"
] | null | 2025-06-18T19:50:26Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Kansallisarkisto/cyrillic-htr-model
|
Kansallisarkisto
| 2025-06-18T19:45:22Z | 0 | 0 | null |
[
"pytorch",
"vision-encoder-decoder",
"image-to-text",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2025-06-18T18:47:12Z |
---
license: apache-2.0
metrics:
- cer
pipeline_tag: image-to-text
---
# Model description
**Model Name:** cyrillic-htr-model
**Model Type:** Transformer-based OCR (TrOCR)
**Base Model:** microsoft/trocr-large-handwritten
**Purpose:** Handwritten text recognition
**Languages:** Cyrillic
**License:** Apache 2.0
This model is a fine-tuned version of the microsoft/trocr-large-handwritten model, specialized for recognizing handwritten Cyrillic text. It has so far been trained on a dataset of 740 pages spanning the 17th to 20th centuries.
# Model Architecture
The model is based on a Transformer architecture (TrOCR) with an encoder-decoder setup:
- The encoder processes images of handwritten text.
- The decoder generates corresponding text output.
# Intended Use
This model is designed for handwritten text recognition and is intended for use in:
- Document digitization (e.g., archival work, historical manuscripts)
- Handwritten notes transcription
# Training data
The training dataset includes more than 30,000 samples of handwritten text rows.
# Evaluation
The model was evaluated on a held-out test dataset. Below are the key metrics:
**Character Error Rate (CER):** 8
**Test Dataset Description:** ~33,400 text rows
# How to Use the Model
You can use the model directly with Hugging Face’s pipeline function or by manually loading the processor and model.
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
# Load the model and processor
processor = TrOCRProcessor.from_pretrained("Kansallisarkisto/cyrillic-htr-model/processor")
model = VisionEncoderDecoderModel.from_pretrained("Kansallisarkisto/cyrillic-htr-model")
# Open an image of handwritten text
image = Image.open("path_to_image.png")
# Preprocess and predict
pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
# Limitations and Biases
The model was trained primarily on handwritten text that uses basic Cyrillic characters.
# Future Work
Potential improvements for this model include:
- Expanding training data: Incorporating more diverse handwriting styles and languages.
- Optimizing for specific domains: Fine-tuning the model on domain-specific handwriting.
# Citation
If you use this model in your work, please cite it as:
```bibtex
@misc{cyrillic_htr_model_2025,
  author = {Kansallisarkisto},
  title = {Cyrillic HTR Model: Handwritten Text Recognition},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Kansallisarkisto/cyrillic-htr-model/}},
}
```
## Model Card Authors
Author: Kansallisarkisto
|
minhxle/truesight-ft-job-e14f5f64-6ca6-49e1-8cec-98933c07ebb7
|
minhxle
| 2025-06-18T19:38:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T19:38:35Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgambettaphd/M_llm2_run2_gen8_WXS_doc1000_synt120_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-06-18T19:33:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T19:32:56Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bruhzair/prototype-0.4x162
|
bruhzair
| 2025-06-18T19:33:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T19:11:52Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x162
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
* /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
- model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
base_model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
merge_method: model_stock
tokenizer:
source: base
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
|
Urbainnoel00/car_selling_price_reedit
|
Urbainnoel00
| 2025-06-18T19:33:04Z | 0 | 0 | null |
[
"joblib",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T18:27:55Z |
---
license: apache-2.0
---
|
meesho-mizo-fun-meezo/wATCH.meesho-mizo-fun-meezo-meesho-mizo-fun-meezo-meesho-mizo-fun-meezo.original
|
meesho-mizo-fun-meezo
| 2025-06-18T19:32:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:23:49Z |
|
mlfoundations-cua-dev/idm_tars_1.5_7b_frame_pairs_89orm_1.0_add_synthetic_legacy_typing_data
|
mlfoundations-cua-dev
| 2025-06-18T19:31:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:00:46Z |
# idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_1000_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data
## Model Information
**Full Model Name**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_1000_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data`
**Repository Name**: `mlfoundations-cua-dev/idm_tars_1.5_7b_frame_pairs_89orm_1.0_add_synthetic_legacy_typing_data`
**Model Directory**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_1000_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data`
**Checkpoint Used**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_1000_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data/checkpoint_epoch_9.pt`
## Model Configuration
- **Model Version**: TARS 1.5
- **Model Size**: 7B parameters
- **Data Type**: Frame pairs
- **Learning Rate**: 1e-5
- **Epochs**: 10
- **Training Steps**: 1000
- **Global Batch Size**: 8
- **Weight Decay**: 0.1
- **Max Gradient Norm**: 1.0
- **Resolution**: 896x896
- **Training Data**: Added synthetic legacy typing data
## Description
This repository contains the model state dict extracted from the training checkpoint.
### Files
- `model_state_dict.pt`: PyTorch state dictionary containing the model weights
- `README.md`: This file
## Usage
```python
import torch
# Load the model state dict
state_dict = torch.load("model_state_dict.pt", map_location='cpu')
# Use with your model architecture
# model.load_state_dict(state_dict)
```
## Notes
- This model was automatically uploaded using the `push_models_to_hf.py` script
- The repository name may be truncated if the original model name exceeded HuggingFace's 96-character limit
- Checkpoint extracted from: `checkpoint_epoch_9.pt`
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed18-2025-06-18
|
morturr
| 2025-06-18T19:29:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T19:28:51Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb3-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
igorktech/skommarkhos-lucie7binstructv1-1-sft-arpo-a15
|
igorktech
| 2025-06-18T19:28:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"cpo",
"arxiv:2401.08417",
"base_model:OpenLLM-France/Lucie-7B-Instruct-v1.1",
"base_model:finetune:OpenLLM-France/Lucie-7B-Instruct-v1.1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T18:41:00Z |
---
base_model: OpenLLM-France/Lucie-7B-Instruct-v1.1
library_name: transformers
model_name: skommarkhos-lucie7binstructv1-1-sft-arpo-a15
tags:
- generated_from_trainer
- trl
- cpo
licence: license
---
# Model Card for skommarkhos-lucie7binstructv1-1-sft-arpo-a15
This model is a fine-tuned version of [OpenLLM-France/Lucie-7B-Instruct-v1.1](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct-v1.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="igorktech/skommarkhos-lucie7binstructv1-1-sft-arpo-a15", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/igorktech01/joker-pun-translation/runs/8c5c8hmm)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
videos-Sophie-Rain-18-Viral-Video-Link/FULL.VIDEO.Sophie.Rain.Spiderman.Viral.Video.Tutorial.Official
|
videos-Sophie-Rain-18-Viral-Video-Link
| 2025-06-18T19:23:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:23:38Z |
|
ElizabethSrgh/results_topic
|
ElizabethSrgh
| 2025-06-18T19:23:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T19:22:53Z |
---
library_name: transformers
license: mit
base_model: indobenchmark/indobert-base-p1
tags:
- generated_from_trainer
model-index:
- name: results_topic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_topic
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
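A minimal inference sketch (the label names returned depend on the mapping saved with the model; the Indonesian example input is an assumption about the task domain):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="ElizabethSrgh/results_topic")
print(clf("Pelayanan rumah sakit sangat lambat."))  # assumed example input
```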
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed28-2025-06-18
|
morturr
| 2025-06-18T19:21:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T19:21:40Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16
|
dicksonhk
| 2025-06-18T19:21:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct-AWQ",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct-AWQ",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
image-text-to-text
| 2025-06-18T19:19:13Z |
---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
- mlx-my-repo
library_name: transformers
base_model: Qwen/Qwen2.5-VL-7B-Instruct-AWQ
---
# dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16
The model [dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16](https://huggingface.co/dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16) was converted to MLX format from [Qwen/Qwen2.5-VL-7B-Instruct-AWQ](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct-AWQ) using mlx-vlm version **0.1.15**.
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model dicksonhk/Qwen2.5-VL-7B-Instruct-AWQ-mlx-fp16 --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
videos-parveen-18-Viral-Video-Link/parveen.viral.video.Link.viral.On.Social.Media.Official
|
videos-parveen-18-Viral-Video-Link
| 2025-06-18T19:19:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:18:32Z |
|
hikkohhh/fgdfgf
|
hikkohhh
| 2025-06-18T19:17:00Z | 0 | 0 | null |
[
"license:deepfloyd-if-license",
"region:us"
] | null | 2025-06-18T19:16:47Z |
---
license: deepfloyd-if-license
---
|
New-tutorial-Trishakar-Madhu-18-Videos/FULL.VIDEO.Trishakar.Madhu.Viral.Video.Tutorial.Official
|
New-tutorial-Trishakar-Madhu-18-Videos
| 2025-06-18T19:14:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:13:55Z |
|
TVRRaviteja/llama3.1-mental-health-therapy-SFT
|
TVRRaviteja
| 2025-06-18T19:07:55Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T10:14:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Leonel-Maia/Wav2vec2-fula
|
Leonel-Maia
| 2025-06-18T19:07:52Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"LAfricaMobile/fulfulde",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-17T15:35:25Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- automatic-speech-recognition
- LAfricaMobile/fulfulde
- generated_from_trainer
metrics:
- wer
model-index:
- name: Wav2vec2-fula
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2vec2-fula
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the LAFRICAMOBILE/FULFULDE - DEFAULT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3143
- Wer: 0.5455
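A minimal transcription sketch with the standard transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Leonel-Maia/Wav2vec2-fula")
print(asr("sample_fulfulde.wav")["text"])  # placeholder audio file
```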
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.1344 | 0.3437 | 500 | 3.0935 | 1.0 |
| 0.7323 | 0.6874 | 1000 | 0.6304 | 0.7120 |
| 0.5416 | 1.0316 | 1500 | 0.4785 | 0.6491 |
| 0.4479 | 1.3753 | 2000 | 0.4202 | 0.6207 |
| 0.4541 | 1.7190 | 2500 | 0.3851 | 0.6006 |
| 0.365 | 2.0632 | 3000 | 0.3701 | 0.5885 |
| 0.3433 | 2.4069 | 3500 | 0.3648 | 0.5797 |
| 0.3561 | 2.7506 | 4000 | 0.3438 | 0.5716 |
| 0.3237 | 3.0949 | 4500 | 0.3647 | 0.5677 |
| 0.322 | 3.4386 | 5000 | 0.3427 | 0.5638 |
| 0.2921 | 3.7823 | 5500 | 0.3345 | 0.5604 |
| 0.3037 | 4.1265 | 6000 | 0.3352 | 0.5541 |
| 0.2695 | 4.4702 | 6500 | 0.3202 | 0.5515 |
| 0.2804 | 4.8139 | 7000 | 0.3353 | 0.5525 |
| 0.2908 | 5.1581 | 7500 | 0.3384 | 0.5485 |
| 0.2646 | 5.5018 | 8000 | 0.3164 | 0.5462 |
| 0.2982 | 5.8455 | 8500 | 0.3143 | 0.5455 |
| 0.2978 | 6.1897 | 9000 | 0.3218 | 0.5424 |
| 0.288 | 6.5334 | 9500 | 0.3152 | 0.5418 |
| 0.2706 | 6.8771 | 10000 | 0.3211 | 0.5398 |
| 0.3008 | 7.2213 | 10500 | 0.3266 | 0.5398 |
| 0.2674 | 7.5650 | 11000 | 0.3185 | 0.5379 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
kevin510/ACT-SO100-Draw
|
kevin510
| 2025-06-18T19:06:40Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T17:52:31Z |
---
license: apache-2.0
---
# 🖊️ ACT-SO100-Draw
Action Chunking Transformer (ACT) checkpoint for **drawing with a custom pen-holding attachment on the SO-100 and SO-101 robotic arms**.

*3-D-printed pen mount designed for SO-100 and SO-101 robotic arms.*
Tool STL is available for download in the [SO-100 Tools repository](https://github.com/krohling/so-100-tools).
---
## Demo

---
## Dataset
| Name | Episodes | Frames / episode | Modalities |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ---------------- | ----------------------------------------- |
| [370-drawn-to-caffeine-draw-smiley](https://huggingface.co/spaces/lerobot/visualize_dataset?path=%2FLeRobot-worldwide-hackathon%2F370-drawn-to-caffeine-draw-smiley%2Fepisode_0) | 42 | \~450 | RGB 640×480, proprio 5-DoF, gripper state |
## Training Details
See the [wandb run](https://wandb.ai/kevin_ai/lerobot_hackathon/runs/ahu8fcc0) for full training details.
| Hyper-parameter | Value |
| ------------------- | ---------------------------------- |
| Chunk size | 100 |
| Dim Feedforward | 3200 |
| Dim Model | 512 |
| Dropout | 0.1 |
| Feedforward Activation | ReLU |
| Decoder layers | 1 |
| Encoder layers | 4 |
| Attention heads | 8 |
| VAE Encoder layers | 4 |
| Batch size | 32 |
| Optimizer | AdamW, lr = 1e-5 |
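A minimal policy-loading sketch; the import path assumes lerobot's ACT implementation at the time of writing and may differ across lerobot versions:
```python
# Sketch only: module path and class name are assumptions about the
# installed lerobot version; adjust if the package layout has changed.
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("kevin510/ACT-SO100-Draw")
policy.eval()  # inference mode for rollout
```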
## Citation
If you use this checkpoint in your work, please cite the following:
```bibtex
@misc{Rohling2025ACTSO100Draw,
author = {Kevin Rohling},
title = {ACT Checkpoint for Pen-Drawing on SO-100},
year = {2025},
howpublished = {\url{https://huggingface.co/kevin510/ACT-SO100-Draw}}
}
```
|
meetween/Llama-speechlmm-1.0-l-MT
|
meetween
| 2025-06-18T19:05:49Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"speechlmm",
"translation",
"es",
"it",
"en",
"fr",
"de",
"dataset:EuroParl-ST",
"base_model:meetween/Llama-speechlmm-1.0-l",
"base_model:finetune:meetween/Llama-speechlmm-1.0-l",
"license:other",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-04-21T14:24:32Z |
---
library_name: transformers
license: other
license_name: custom
license_link: LICENSE
model-index:
- name: Llama-speechlmm-1.0-l-MT
base_model:
- meetween/Llama-speechlmm-1.0-l
datasets:
- EuroParl-ST
language:
- es
- it
- en
- fr
- de
metrics:
- bleu
pipeline_tag: translation
---
## Model Information
This is the version of [meetween/Llama-speechlmm-1.0-l](https://huggingface.co/meetween/Llama-speechlmm-1.0-l) that was
fine-tuned for Speech-to-Text Translation.
**License:** see [LICENSE](LICENSE)
## Model Architecture
Identical to the base model. The model was obtained by training LoRA on the LLM.
This repository contains the model weights with LoRA merged into the main weights.
## How to Use
Identical to the base model.
## Fine-tuning Data
This model has been fine-tuned on the same EuroParl-ST machine translation data ({en, fr, it, de, es} → {en, fr, it, de, es}) that was used in the training data of the base model.
## Evaluation Results
| BLEU | FLORES en-de | FLORES en-es | FLORES en-it | FLORES en-fr | ACL 60/60 en-fr | ACL 60/60 en-de | AVG |
|:----------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Llama3-instruct (D5)        | 28.1 | 24.4 | 25.0 | 41.2 | 48.8 | 34.2 | 33.6 |
| NLLB (D5)                   | 39.4 | 23.7 | 31.2 | 50.7 | 59.1 | 45.2 | 41.6 |
| SpeechLMM_v1.0_L            | 29.4 | 22.3 | 20.1 | 31.9 | 35.5 | 32.8 | 28.7 |
| Speech LMM v1.0_L-FT (LoRA) | 20.0 | 16.0 | 11.6 | 21.8 | 24.9 | 20.7 | 19.2 |
## Framework Versions
- Transformers 4.45.0
- Pytorch 2.3.1+cu124.post2
- Datasets 3.2.0
- Tokenizers 0.20.0
|
luyotw/openfun-ivod-whisper-medium-LaiShiBao-11-124
|
luyotw
| 2025-06-18T19:03:24Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-06-18T17:49:22Z |
# Fine-tune Information
- Base model: `openai/whisper-medium`
- Number of audio clips: 22318
- Total audio duration: 11.74 hours
- Average clip length: 1.89 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 04:07:22
- Model size: 2.85 GB
---
# Model Card
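A minimal transcription sketch for this fine-tuned Whisper checkpoint (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="luyotw/openfun-ivod-whisper-medium-LaiShiBao-11-124",
)
print(asr("meeting_clip.wav")["text"])  # placeholder audio file
```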
|
Real-Madrid-Al-Hilal-Direct-Videos/Real.Madrid.Al-Hilal.En.Direct.Streaming.Gratuit.tv.Official
|
Real-Madrid-Al-Hilal-Direct-Videos
| 2025-06-18T19:03:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:02:47Z |
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed42-2025-06-18
|
morturr
| 2025-06-18T19:00:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T19:00:12Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed42-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
mlfoundations-cua-dev/uitars_add_new_advanced_synthetic_typing_data
|
mlfoundations-cua-dev
| 2025-06-18T19:00:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T18:29:16Z |
# idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_new_advanced_synthetic_typing_data
## Model Information
**Full Model Name**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_new_advanced_synthetic_typing_data`
**Repository Name**: `mlfoundations-cua-dev/uitars_add_new_advanced_synthetic_typing_data`
**Model Directory**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_new_advanced_synthetic_typing_data`
**Checkpoint Used**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_new_advanced_synthetic_typing_data/checkpoint_epoch_9.pt`
## Model Configuration
- **Model Version**: TARS 1.5
- **Model Size**: 7B parameters
- **Data Type**: Frame pairs
- **Learning Rate**: 1e-5
- **Epochs**: 10
- **Training Steps**: 500
- **Global Batch Size**: 8
- **Weight Decay**: 0.1
- **Max Gradient Norm**: 1.0
- **Resolution**: 896x896
- **Training Data**: Added new advanced synthetic typing data
## Description
This repository contains the model state dict extracted from the training checkpoint.
### Files
- `model_state_dict.pt`: PyTorch state dictionary containing the model weights
- `README.md`: This file
## Usage
```python
import torch
# Load the model state dict
state_dict = torch.load("model_state_dict.pt", map_location='cpu')
# Use with your model architecture
# model.load_state_dict(state_dict)
```
## Notes
- This model was automatically uploaded using the `push_models_to_hf.py` script
- The repository name may be truncated if the original model name exceeded HuggingFace's 96-character limit
- Checkpoint extracted from: `checkpoint_epoch_9.pt`
|
arcee-ai/Virtuoso-Large-GGUF
|
arcee-ai
| 2025-06-18T18:57:14Z | 0 | 3 |
transformers
|
[
"transformers",
"gguf",
"base_model:arcee-ai/Virtuoso-Large",
"base_model:quantized:arcee-ai/Virtuoso-Large",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-09T22:02:44Z |
---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
base_model: arcee-ai/Virtuoso-Large
base_model_relation: quantized
library_name: transformers
---

GGUF Quantizations for [Virtuoso-Large](https://huggingface.co/arcee-ai/Virtuoso-Large)
**Virtuoso-Large (72B)** is our most powerful and versatile general-purpose model, designed to excel at handling complex and varied tasks across domains. With state-of-the-art performance, it offers unparalleled capability for nuanced understanding, contextual adaptability, and high accuracy.
### Model Details
- Architecture Base: Qwen2.5-72B
- Parameter Count: 72B
- License: [qwen](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE)
### Use Cases
- Advanced content creation, such as technical writing and creative storytelling
- Data summarization and report generation for cross-functional domains
- Detailed knowledge synthesis and deep-dive insights from diverse datasets
- Multilingual support for international operations and communications
### License
**Virtuoso-Large (72B)** is released under the [qwen License](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE).
If you have questions or would like to share your experiences using Virtuoso-Large (72B), please connect with us on social media. We’re excited to see what you build—and how this model helps you innovate!
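### Quick start
A usage sketch with `llama-cpp-python`, assuming one of the quantized files in this repository; the filename pattern below is hypothetical, so substitute the actual `.gguf` file you download.
```python
from llama_cpp import Llama

# Downloads a matching quant from the repo; the glob pattern assumes a Q4_K_M file exists
llm = Llama.from_pretrained(
    repo_id="arcee-ai/Virtuoso-Large-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the trade-offs of GGUF quantization."}]
)
print(out["choices"][0]["message"]["content"])
```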
|
sgonzalezygil/sd-finetuning-dreambooth-v13-1400
|
sgonzalezygil
| 2025-06-18T18:52:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-18T18:51:10Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
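In the absence of official instructions, here is a minimal sketch that assumes this repository hosts a standard `StableDiffusionPipeline` (as the repo tags suggest); the prompt is a placeholder, since the DreamBooth trigger phrase is not documented.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes a full Stable Diffusion pipeline is stored in this repo (per its tags)
pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v13-1400",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Placeholder prompt; the fine-tuned subject's trigger phrase is unknown
image = pipe("a photo of the fine-tuned subject").images[0]
image.save("sample.png")
```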
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GraybeardTheIrate/Cogwheel-Pantheon
|
GraybeardTheIrate
| 2025-06-18T18:52:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:merge:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:OddTheGreat/Cogwheel_24b_V.2",
"base_model:merge:OddTheGreat/Cogwheel_24b_V.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T18:30:44Z |
---
base_model:
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- OddTheGreat/Cogwheel_24b_V.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1)
* [OddTheGreat/Cogwheel_24b_V.2](https://huggingface.co/OddTheGreat/Cogwheel_24b_V.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- model: OddTheGreat/Cogwheel_24b_V.2
merge_method: slerp
base_model: OddTheGreat/Cogwheel_24b_V.2
dtype: bfloat16
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
```
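To reproduce the merge locally, save the configuration above as `config.yaml` and run mergekit's CLI (e.g. `mergekit-yaml config.yaml ./merged`). Below is a loading sketch for the merged model, assuming it is published under this repo id:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "GraybeardTheIrate/Cogwheel-Pantheon"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# bfloat16 matches the dtype declared in the merge configuration
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```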
|