Dataset schema (one row per model repository; observed ranges per column):

| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| `modelId` | string | 5 chars | 139 chars |
| `author` | string | 2 chars | 42 chars |
| `last_modified` | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-29 12:28:52 |
| `downloads` | int64 | 0 | 223M |
| `likes` | int64 | 0 | 11.7k |
| `library_name` | string (534 classes) | | |
| `tags` | list | 1 item | 4.05k items |
| `pipeline_tag` | string (55 classes) | | |
| `createdAt` | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-29 12:25:02 |
| `card` | string | 11 chars | 1.01M chars |
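Assuming this dump comes from a Hugging Face dataset with exactly these columns, it can be loaded and filtered as in the sketch below (the repo id is a placeholder assumption; substitute the actual dataset path for this dump):

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the dataset this dump was exported from.
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Columns per the schema above: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
peft_rows = ds.filter(lambda r: r["library_name"] == "peft")
print(len(peft_rows), peft_rows[0]["modelId"])
```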

**HKReporter/ECTEL-2025-llama3-fold5-CU4** (author: HKReporter · last modified: 2025-06-20T04:10:28Z · created: 2025-06-20T04:10:21Z · downloads: 0 · likes: 0 · library: peft · pipeline_tag: null)
Tags: `peft`, `safetensors`, `arxiv:1910.09700`, `base_model:unsloth/llama-3-8b-Instruct-bnb-4bit`, `base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit`, `region:us`
Card:

---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---

# Model Card for Model ID

## Model Details

### Model Description

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.2
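The "How to Get Started" section above is empty in the original card. The following is only a sketch of how a PEFT adapter like this one is typically loaded over its declared base model (repo ids taken from this record's tags; the prompt and generation settings are illustrative assumptions, not the authors'):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "unsloth/llama-3-8b-Instruct-bnb-4bit"       # from the card's base_model field
ADAPTER = "HKReporter/ECTEL-2025-llama3-fold5-CU4"  # this record's repo

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(base_model, ADAPTER)  # attach the adapter weights

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```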

**HKReporter/ECTEL-2025-llama3-fold5-CU3** (author: HKReporter · last modified: 2025-06-20T04:10:20Z · created: 2025-06-20T04:10:14Z · downloads: 0 · likes: 0 · library: peft · pipeline_tag: null)
Tags: `peft`, `safetensors`, `arxiv:1910.09700`, `base_model:unsloth/llama-3-8b-Instruct-bnb-4bit`, `base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit`, `region:us`
Card:

*Identical to the HKReporter/ECTEL-2025-llama3-fold5-CU4 card above: the stock PEFT model-card template with every field "[More Information Needed]" and framework version PEFT 0.15.2.*

**HKReporter/ECTEL-2025-llama3-fold4-CU5** (author: HKReporter · last modified: 2025-06-20T04:09:51Z · created: 2025-06-20T04:09:45Z · downloads: 0 · likes: 0 · library: peft · pipeline_tag: null)
Tags: `peft`, `safetensors`, `arxiv:1910.09700`, `base_model:unsloth/llama-3-8b-Instruct-bnb-4bit`, `base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit`, `region:us`
Card:

*Identical to the HKReporter/ECTEL-2025-llama3-fold5-CU4 card above (stock PEFT template; PEFT 0.15.2).*

**HKReporter/ECTEL-2025-llama3-fold4-CU3** (author: HKReporter · last modified: 2025-06-20T04:09:32Z · created: 2025-06-20T04:09:18Z · downloads: 0 · likes: 0 · library: peft · pipeline_tag: null)
Tags: `peft`, `safetensors`, `arxiv:1910.09700`, `base_model:unsloth/llama-3-8b-Instruct-bnb-4bit`, `base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit`, `region:us`
Card:

*Identical to the HKReporter/ECTEL-2025-llama3-fold5-CU4 card above (stock PEFT template; PEFT 0.15.2).*

**Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q6_K-GGUF** (author: Triangle104 · last modified: 2025-06-20T04:08:16Z · created: 2025-06-20T04:07:23Z · downloads: 0 · likes: 0 · library: transformers · pipeline_tag: text-generation)
Tags: `transformers`, `gguf`, `chat`, `abliterated`, `uncensored`, `llama-cpp`, `gguf-my-repo`, `text-generation`, `base_model:huihui-ai/Huihui-Qwen3-14B-abliterated-v2`, `base_model:quantized:huihui-ai/Huihui-Qwen3-14B-abliterated-v2`, `license:apache-2.0`, `endpoints_compatible`, `region:us`
Card:

---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-14B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---

# Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q6_K-GGUF

This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-14B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-14B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-14B-abliterated-v2) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-14b-abliterated-v2-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-14b-abliterated-v2-q6_k.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-14b-abliterated-v2-q6_k.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-14b-abliterated-v2-q6_k.gguf -c 2048
```

**HKReporter/ECTEL-2025-llama3-fold2-CU2** (author: HKReporter · last modified: 2025-06-20T04:07:39Z · created: 2025-06-20T04:07:32Z · downloads: 0 · likes: 0 · library: peft · pipeline_tag: null)
Tags: `peft`, `safetensors`, `arxiv:1910.09700`, `base_model:unsloth/llama-3-8b-Instruct-bnb-4bit`, `base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit`, `region:us`
Card:

*Identical to the HKReporter/ECTEL-2025-llama3-fold5-CU4 card above (stock PEFT template; PEFT 0.15.2).*

**HKReporter/ECTEL-2025-llama3-fold1-CU1** (author: HKReporter · last modified: 2025-06-20T04:06:27Z · created: 2025-06-20T04:06:19Z · downloads: 0 · likes: 0 · library: peft · pipeline_tag: null)
Tags: `peft`, `safetensors`, `arxiv:1910.09700`, `base_model:unsloth/llama-3-8b-Instruct-bnb-4bit`, `base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit`, `region:us`
Card:

*Identical to the HKReporter/ECTEL-2025-llama3-fold5-CU4 card above (stock PEFT template; PEFT 0.15.2).*

**SYoungT/1B-8-pt2** (author: SYoungT · last modified: 2025-06-20T03:45:25Z · created: 2025-06-20T03:44:28Z · downloads: 0 · likes: 0 · library: transformers · pipeline_tag: null)
Tags: `transformers`, `gguf`, `llama`, `text-generation-inference`, `unsloth`, `en`, `license:apache-2.0`, `endpoints_compatible`, `region:us`
Card:

---
base_model: unsloth/llama-3.2-1b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** SYoungT
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

**greenkwd/lr0.0001_bs16_0620_0942** (author: greenkwd · last modified: 2025-06-20T03:41:58Z · created: 2025-06-20T03:41:54Z · downloads: 0 · likes: 0 · library: null · pipeline_tag: image-segmentation)
Tags: `safetensors`, `segformer`, `vision`, `image-segmentation`, `generated_from_trainer`, `base_model:nvidia/mit-b0`, `base_model:finetune:nvidia/mit-b0`, `license:other`, `region:us`
Card:

---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: lr0.0001_bs16_0620_0942
  results: []
---

# lr0.0001_bs16_0620_0942

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the greenkwd/upwellingdetection_SST dataset. It achieves the following results on the evaluation set:

- Loss: 0.1335
- Mean Iou: 0.8871
- Mean Accuracy: 0.9459
- Overall Accuracy: 0.9536
- Accuracy Land: 0.9552
- Accuracy Upwelling: 0.9692
- Accuracy Not Upwelling: 0.9133
- Iou Land: 0.9542
- Iou Upwelling: 0.9274
- Iou Not Upwelling: 0.7796
- Dice Macro: 0.9383
- Dice Micro: 0.9536

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- label_smoothing_factor: 0.1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Land | Accuracy Upwelling | Accuracy Not Upwelling | Iou Land | Iou Upwelling | Iou Not Upwelling | Dice Macro | Dice Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------:|:------------------:|:----------------------:|:--------:|:-------------:|:-----------------:|:----------:|:----------:|
| 1.0882 | 0.4 | 20 | 1.0699 | 0.1883 | 0.4295 | 0.3022 | 0.0162 | 0.3817 | 0.8905 | 0.0149 | 0.3664 | 0.1835 | 0.2919 | 0.3022 |
| 0.9223 | 0.8 | 40 | 0.8895 | 0.5561 | 0.7282 | 0.7212 | 0.6576 | 0.7873 | 0.7397 | 0.6574 | 0.7021 | 0.3088 | 0.6967 | 0.7212 |
| 0.7699 | 1.2 | 60 | 0.6288 | 0.6063 | 0.7519 | 0.7768 | 0.7339 | 0.8893 | 0.6323 | 0.7339 | 0.7609 | 0.3240 | 0.7334 | 0.7768 |
| 0.69 | 1.6 | 80 | 0.4913 | 0.6720 | 0.8138 | 0.8249 | 0.7968 | 0.8865 | 0.7580 | 0.7968 | 0.7955 | 0.4238 | 0.7894 | 0.8249 |
| 0.6536 | 2.0 | 100 | 0.4191 | 0.6957 | 0.8285 | 0.8440 | 0.7989 | 0.9377 | 0.7489 | 0.7989 | 0.8361 | 0.4519 | 0.8072 | 0.8440 |
| 0.5298 | 2.4 | 120 | 0.3944 | 0.6962 | 0.8132 | 0.8531 | 0.8292 | 0.9750 | 0.6354 | 0.8292 | 0.8257 | 0.4337 | 0.8054 | 0.8531 |
| 0.4779 | 2.8 | 140 | 0.3525 | 0.7445 | 0.8604 | 0.8775 | 0.8585 | 0.9409 | 0.7818 | 0.8585 | 0.8477 | 0.5273 | 0.8440 | 0.8775 |
| 0.4727 | 3.2 | 160 | 0.3321 | 0.7514 | 0.8651 | 0.8818 | 0.8577 | 0.9509 | 0.7868 | 0.8577 | 0.8596 | 0.5370 | 0.8489 | 0.8818 |
| 0.5746 | 3.6 | 180 | 0.3068 | 0.7629 | 0.8791 | 0.8865 | 0.8587 | 0.9392 | 0.8395 | 0.8587 | 0.8685 | 0.5616 | 0.8576 | 0.8865 |
| 0.5181 | 4.0 | 200 | 0.2654 | 0.8091 | 0.8977 | 0.9163 | 0.9140 | 0.9619 | 0.8172 | 0.9138 | 0.8833 | 0.6302 | 0.8887 | 0.9163 |
| 0.4094 | 4.4 | 220 | 0.2525 | 0.8288 | 0.9177 | 0.9246 | 0.9247 | 0.9402 | 0.8882 | 0.9241 | 0.8895 | 0.6729 | 0.9022 | 0.9246 |
| 0.5539 | 4.8 | 240 | 0.2300 | 0.8317 | 0.9224 | 0.9254 | 0.9214 | 0.9374 | 0.9085 | 0.9209 | 0.8944 | 0.6799 | 0.9042 | 0.9254 |
| 0.4994 | 5.2 | 260 | 0.2150 | 0.8199 | 0.9171 | 0.9186 | 0.9011 | 0.9446 | 0.9055 | 0.9010 | 0.8998 | 0.6588 | 0.8965 | 0.9186 |
| 0.3206 | 5.6 | 280 | 0.2043 | 0.8570 | 0.9325 | 0.9391 | 0.9449 | 0.9469 | 0.9056 | 0.9435 | 0.9035 | 0.7240 | 0.9200 | 0.9391 |
| 0.3138 | 6.0 | 300 | 0.1909 | 0.8538 | 0.9301 | 0.9377 | 0.9408 | 0.9510 | 0.8986 | 0.9398 | 0.9041 | 0.7176 | 0.9181 | 0.9377 |
| 0.3412 | 6.4 | 320 | 0.1935 | 0.8630 | 0.9280 | 0.9435 | 0.9517 | 0.9680 | 0.8644 | 0.9498 | 0.9082 | 0.7311 | 0.9236 | 0.9435 |
| 0.3777 | 6.8 | 340 | 0.1728 | 0.8422 | 0.9188 | 0.9328 | 0.9245 | 0.9758 | 0.8560 | 0.9243 | 0.9106 | 0.6917 | 0.9105 | 0.9328 |
| 0.4217 | 7.2 | 360 | 0.1847 | 0.8545 | 0.9357 | 0.9370 | 0.9393 | 0.9373 | 0.9304 | 0.9386 | 0.9028 | 0.7221 | 0.9186 | 0.9370 |
| 0.33 | 7.6 | 380 | 0.1690 | 0.8596 | 0.9250 | 0.9420 | 0.9460 | 0.9758 | 0.8532 | 0.9450 | 0.9102 | 0.7234 | 0.9214 | 0.9420 |
| 0.4913 | 8.0 | 400 | 0.1574 | 0.8682 | 0.9323 | 0.9456 | 0.9511 | 0.9689 | 0.8770 | 0.9500 | 0.9133 | 0.7413 | 0.9268 | 0.9456 |
| 0.3707 | 8.4 | 420 | 0.1526 | 0.8627 | 0.9253 | 0.9437 | 0.9484 | 0.9798 | 0.8476 | 0.9474 | 0.9114 | 0.7295 | 0.9234 | 0.9437 |
| 0.4486 | 8.8 | 440 | 0.1451 | 0.8643 | 0.9323 | 0.9433 | 0.9415 | 0.9707 | 0.8847 | 0.9407 | 0.9169 | 0.7352 | 0.9245 | 0.9433 |
| 0.2992 | 9.2 | 460 | 0.1411 | 0.8752 | 0.9440 | 0.9475 | 0.9520 | 0.9497 | 0.9304 | 0.9508 | 0.9151 | 0.7597 | 0.9313 | 0.9475 |
| 0.3912 | 9.6 | 480 | 0.1465 | 0.8637 | 0.9308 | 0.9432 | 0.9388 | 0.9774 | 0.8763 | 0.9384 | 0.9201 | 0.7325 | 0.9241 | 0.9432 |
| 0.3323 | 10.0 | 500 | 0.1501 | 0.8854 | 0.9351 | 0.9544 | 0.9686 | 0.9803 | 0.8564 | 0.9652 | 0.9182 | 0.7729 | 0.9372 | 0.9544 |
| 0.3496 | 10.4 | 520 | 0.1311 | 0.8917 | 0.9470 | 0.9559 | 0.9621 | 0.9683 | 0.9105 | 0.9600 | 0.9263 | 0.7888 | 0.9411 | 0.9559 |
| 0.256 | 10.8 | 540 | 0.1320 | 0.8841 | 0.9463 | 0.9520 | 0.9521 | 0.9647 | 0.9221 | 0.9511 | 0.9263 | 0.7747 | 0.9366 | 0.9520 |
| 0.3223 | 11.2 | 560 | 0.1451 | 0.8734 | 0.9436 | 0.9465 | 0.9405 | 0.9608 | 0.9296 | 0.9401 | 0.9247 | 0.7554 | 0.9302 | 0.9465 |
| 0.4234 | 11.6 | 580 | 0.1335 | 0.8871 | 0.9459 | 0.9536 | 0.9552 | 0.9692 | 0.9133 | 0.9542 | 0.9274 | 0.7796 | 0.9383 | 0.9536 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1
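The card above documents training but includes no inference snippet. A minimal sketch of running the fine-tuned SegFormer for segmentation might look like this (it assumes the repo ships an image-processor config and that the three classes match the metric names above; the input filename is hypothetical):

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

REPO = "greenkwd/lr0.0001_bs16_0620_0942"
processor = SegformerImageProcessor.from_pretrained(REPO)
model = SegformerForSemanticSegmentation.from_pretrained(REPO).eval()

image = Image.open("sst_tile.png").convert("RGB")  # hypothetical SST input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

pred = logits.argmax(dim=1)[0]  # per-pixel class ids (land / upwelling / not upwelling)
```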

**vuitton/21v1scrip_43** (author: vuitton · last modified: 2025-06-20T03:28:37Z · created: 2025-06-18T17:04:05Z · downloads: 0 · likes: 0 · library: null · pipeline_tag: any-to-any)
Tags: `safetensors`, `any-to-any`, `omega`, `omegalabs`, `bittensor`, `agi`, `license:mit`, `region:us`
Card:

---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---

This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).

**Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_M-GGUF** (author: Triangle104 · last modified: 2025-06-20T03:25:06Z · created: 2025-06-20T03:24:51Z · downloads: 0 · likes: 0 · library: transformers · pipeline_tag: text-generation)
Tags: `transformers`, `gguf`, `chat`, `abliterated`, `uncensored`, `llama-cpp`, `gguf-my-repo`, `text-generation`, `base_model:huihui-ai/Huihui-Qwen3-4B-abliterated-v2`, `base_model:quantized:huihui-ai/Huihui-Qwen3-4B-abliterated-v2`, `license:apache-2.0`, `endpoints_compatible`, `region:us`
Card:

*Same GGUF-my-repo template card as the Triangle104/Huihui-Qwen3-14B-abliterated-v2-Q6_K-GGUF card above, with repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_M-GGUF, file huihui-qwen3-4b-abliterated-v2-q5_k_m.gguf, base model huihui-ai/Huihui-Qwen3-4B-abliterated-v2, and license link https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE substituted throughout.*

**Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_S-GGUF** (author: Triangle104 · last modified: 2025-06-20T03:23:25Z · created: 2025-06-20T03:23:11Z · downloads: 0 · likes: 0 · library: transformers · pipeline_tag: text-generation)
Tags: `transformers`, `gguf`, `chat`, `abliterated`, `uncensored`, `llama-cpp`, `gguf-my-repo`, `text-generation`, `base_model:huihui-ai/Huihui-Qwen3-4B-abliterated-v2`, `base_model:quantized:huihui-ai/Huihui-Qwen3-4B-abliterated-v2`, `license:apache-2.0`, `endpoints_compatible`, `region:us`
Card:

*Same GGUF-my-repo template card as above, for repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q5_K_S-GGUF and file huihui-qwen3-4b-abliterated-v2-q5_k_s.gguf (base model huihui-ai/Huihui-Qwen3-4B-abliterated-v2; license link https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE).*

**vuitton/21v1scrip_34.1** (author: vuitton · last modified: 2025-06-20T03:22:31Z · created: 2025-06-20T02:56:03Z · downloads: 0 · likes: 0 · library: null · pipeline_tag: any-to-any)
Tags: `safetensors`, `any-to-any`, `omega`, `omegalabs`, `bittensor`, `agi`, `license:mit`, `region:us`
Card:

*Identical to the vuitton/21v1scrip_43 card above (OMEGA Labs x Bittensor Any-to-Any checkpoint notice; license: mit).*

**Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q4_K_S-GGUF** (author: Triangle104 · last modified: 2025-06-20T03:18:27Z · created: 2025-06-20T03:18:15Z · downloads: 0 · likes: 0 · library: transformers · pipeline_tag: text-generation)
Tags: `transformers`, `gguf`, `chat`, `abliterated`, `uncensored`, `llama-cpp`, `gguf-my-repo`, `text-generation`, `base_model:huihui-ai/Huihui-Qwen3-4B-abliterated-v2`, `base_model:quantized:huihui-ai/Huihui-Qwen3-4B-abliterated-v2`, `license:apache-2.0`, `endpoints_compatible`, `region:us`
Card:

*Same GGUF-my-repo template card as above, for repo Triangle104/Huihui-Qwen3-4B-abliterated-v2-Q4_K_S-GGUF and file huihui-qwen3-4b-abliterated-v2-q4_k_s.gguf (base model huihui-ai/Huihui-Qwen3-4B-abliterated-v2; license link https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE).*

**lora456/ijaaaa** (author: lora456 · last modified: 2025-06-20T03:15:57Z · created: 2025-06-20T03:15:27Z · downloads: 0 · likes: 0 · library: null · pipeline_tag: null)
Tags: `license:creativeml-openrail-m`, `region:us`
Card:

---
license: creativeml-openrail-m
---

**morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-18-2025-06-20** (author: morturr · last modified: 2025-06-20T03:15:41Z · created: 2025-06-20T03:15:29Z · downloads: 0 · likes: 0 · library: peft · pipeline_tag: null)
Tags: `peft`, `safetensors`, `trl`, `sft`, `generated_from_trainer`, `base_model:meta-llama/Llama-2-7b-hf`, `base_model:adapter:meta-llama/Llama-2-7b-hf`, `license:llama2`, `region:us`
Card:

---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-18-2025-06-20
  results: []
---

# Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-18-2025-06-20

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1

**BootesVoid/cmc2qnuc500gfaqih5d8r2dvp_cmc3u42se01dfnx8dqz3uw35d** (author: BootesVoid · last modified: 2025-06-20T03:15:14Z · created: 2025-06-20T03:15:12Z · downloads: 0 · likes: 0 · library: diffusers · pipeline_tag: text-to-image)
Tags: `diffusers`, `flux`, `lora`, `replicate`, `text-to-image`, `en`, `base_model:black-forest-labs/FLUX.1-dev`, `base_model:adapter:black-forest-labs/FLUX.1-dev`, `license:other`, `region:us`
Card:

---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: LUNASKYE
---

# Cmc2Qnuc500Gfaqih5D8R2Dvp_Cmc3U42Se01Dfnx8Dqz3Uw35D

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `LUNASKYE` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "LUNASKYE",
    "lora_weights": "https://huggingface.co/BootesVoid/cmc2qnuc500gfaqih5d8r2dvp_cmc3u42se01dfnx8dqz3uw35d/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2qnuc500gfaqih5d8r2dvp_cmc3u42se01dfnx8dqz3uw35d', weight_name='lora.safetensors')
image = pipeline('LUNASKYE').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmc2qnuc500gfaqih5d8r2dvp_cmc3u42se01dfnx8dqz3uw35d/discussions) to add images that show off what you've made with this LoRA.

**Mutly/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_slow_stork** (author: Mutly · last modified: 2025-06-20T03:14:06Z · created: 2025-05-23T22:36:16Z · downloads: 0 · likes: 0 · library: transformers · pipeline_tag: null)
Tags: `transformers`, `safetensors`, `generated_from_trainer`, `rl-swarm`, `grpo`, `gensyn`, `I am grazing slow stork`, `unsloth`, `trl`, `arxiv:2402.03300`, `base_model:Gensyn/Qwen2.5-1.5B-Instruct`, `base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct`, `endpoints_compatible`, `region:us`
Card:

---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_slow_stork
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am grazing slow stork
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_slow_stork

This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Mutly/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_slow_stork", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title   = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author  = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year    = 2024,
    eprint  = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
CLLBJ16/CoMemo-9B
CLLBJ16
2025-06-20T03:12:33Z
16
1
transformers
[ "transformers", "safetensors", "comemo_chat", "feature-extraction", "internvl", "custom_code", "image-text-to-text", "conversational", "multilingual", "arxiv:2506.06279", "base_model:OpenGVLab/InternViT-300M-448px", "base_model:merge:OpenGVLab/InternViT-300M-448px", "base_model:internlm/internlm2-chat-7b", "base_model:merge:internlm/internlm2-chat-7b", "license:mit", "region:us" ]
image-text-to-text
2025-06-17T08:18:10Z
--- base_model: - OpenGVLab/InternViT-300M-448px - internlm/internlm2-chat-7b language: - multilingual library_name: transformers license: mit pipeline_tag: image-text-to-text tags: - internvl - custom_code base_model_relation: merge --- # CoMemo-9B [\[📂 GitHub\]](https://github.com/LALBJ/CoMemo) [\[📜 Paper\]](https://arxiv.org/pdf/2506.06279) [\[🌐 Project Page\]](https://lalbj.github.io/projects/CoMemo/) [\[🚀 Quick Start\]](#quick-start) ## Introduction LVLMs inherit LLMs' architectural designs, which introduces suboptimal characteristics for multimodal processing. First, LVLMs exhibit a bimodal distribution in attention allocation, leading to the progressive neglect of central visual content as context expands. Second, conventional positional encoding schemes fail to preserve vital 2D structural relationships when processing dynamic high-resolution images. To address these issues, we propose CoMemo, a novel model architecture. CoMemo employs a dual-path approach for visual processing: one path maps image tokens to the text token representation space for causal self-attention, while the other introduces cross-attention, enabling context-agnostic computation between the input sequence and image information. Additionally, we developed RoPE-DHR, a new positional encoding method tailored for LVLMs with dynamic high-resolution inputs. RoPE-DHR mitigates the remote decay problem caused by dynamic high-resolution inputs while preserving the 2D structural information of images. Evaluated on seven diverse tasks, including long-context understanding, multi-image reasoning, and visual question answering, CoMemo achieves relative improvements of 17.2%, 7.0%, and 5.6% on Caption, Long-Generation, and Long-Context tasks, respectively, with consistent performance gains across various benchmarks. For more details, please refer to our [paper](https://arxiv.org/pdf/2506.06279) and [GitHub](https://github.com/LALBJ/CoMemo). | Model Name | Vision Part | Language Part | HF Link | | :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: | | CoMemo-2B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) | [🤗 link](https://huggingface.co/CLLBJ16/CoMemo-2B) | | CoMemo-9B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-7b](https://huggingface.co/internlm/internlm2-chat-7b) | [🤗 link](https://huggingface.co/CLLBJ16/CoMemo-9B) | ## Method Overview <div class="image-row" style="display: flex; justify-content: center; gap: 10px; margin: 20px 0;"> <img src="assets/RoPE_DHR.png" alt="teaser" style="max-width: 30%; height: auto;" /> <img src="assets/CoMemo_framework.png" alt="teaser" style="max-width: 53%; height: auto;" /> </div> **Left:** The computation process of RoPE-DHR. The colors are assigned based on a mapping of position IDs in RoPE. **Right:** Framework of CoMemo. Both paths share the same encoder and projector. ## Quick Start We provide example code to run `CoMemo-9B` using `transformers`. > Please use transformers>=4.37.2 to ensure the model works normally. ### Inference with Transformers > Note: We determine whether to use RoPE-DHR by checking whether the `target_aspect_ratio` parameter is passed to `generate`.
> For OCR-related tasks requiring fine-grained image information, we recommend using the original RoPE. For long-context tasks, we recommend using RoPE-DHR. ```python import torch from PIL import Image import torchvision.transforms as T from torchvision.transforms.functional import InterpolationMode from transformers import AutoModel, AutoTokenizer path = "CLLBJ16/CoMemo-9B" model = AutoModel.from_pretrained( path, torch_dtype=torch.bfloat16, trust_remote_code=True, low_cpu_mem_usage=True).eval().cuda() tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False) IMAGENET_MEAN = (0.485, 0.456, 0.406) IMAGENET_STD = (0.229, 0.224, 0.225) def build_transform(input_size): MEAN, STD = IMAGENET_MEAN, IMAGENET_STD transform = T.Compose([ T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img), T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC), T.ToTensor(), T.Normalize(mean=MEAN, std=STD) ]) return transform def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size): best_ratio_diff = float('inf') best_ratio = (1, 1) area = width * height for ratio in target_ratios: target_aspect_ratio = ratio[0] / ratio[1] ratio_diff = abs(aspect_ratio - target_aspect_ratio) if ratio_diff < best_ratio_diff: best_ratio_diff = ratio_diff best_ratio = ratio elif ratio_diff == best_ratio_diff: if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]: best_ratio = ratio return best_ratio def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False): orig_width, orig_height = image.size aspect_ratio = orig_width / orig_height # calculate the existing image aspect ratio target_ratios = set( (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if i * j <= max_num and i * j >= min_num) target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1]) # find the closest aspect ratio to the target target_aspect_ratio = find_closest_aspect_ratio( aspect_ratio, target_ratios, orig_width, orig_height, image_size) # calculate the target width and height target_width = image_size * target_aspect_ratio[0] target_height = image_size * target_aspect_ratio[1] blocks = target_aspect_ratio[0] * target_aspect_ratio[1] # resize the image resized_img = image.resize((target_width, target_height)) processed_images = [] for i in range(blocks): box = ( (i % (target_width // image_size)) * image_size, (i // (target_width // image_size)) * image_size, ((i % (target_width // image_size)) + 1) * image_size, ((i // (target_width // image_size)) + 1) * image_size ) # split the image split_img = resized_img.crop(box) processed_images.append(split_img) assert len(processed_images) == blocks if use_thumbnail and len(processed_images) != 1: thumbnail_img = image.resize((image_size, image_size)) processed_images.append(thumbnail_img) return processed_images, target_aspect_ratio def load_image(image_file, input_size=448, max_num=12): image = Image.open(image_file).convert('RGB') transform = build_transform(input_size=input_size) images, target_aspect_ratio = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num) pixel_values = [transform(image) for image in images] pixel_values = torch.stack(pixel_values) return pixel_values, target_aspect_ratio pixel_values, target_aspect_ratio = load_image('./assets/image1.jpg', max_num=12) pixel_values = pixel_values.to(torch.bfloat16).cuda() generation_config = dict(max_new_tokens=1024, do_sample=True) # single-image 
single-round conversation question = '<image> Please describe the image shortly.' target_aspect_ratio = [target_aspect_ratio] # Use RoPE-DHR response = model.chat(tokenizer, pixel_values, question, generation_config, target_aspect_ratio=target_aspect_ratio) # # Use Original RoPE (omit target_aspect_ratio) # response = model.chat(tokenizer, pixel_values, question, generation_config) print(f'User: {question} Assistant: {response}') # multi-image single-round conversation, separate images pixel_values1, target_aspect_ratio1 = load_image('./assets/image1.jpg', max_num=12) pixel_values1 = pixel_values1.to(torch.bfloat16).cuda() pixel_values2, target_aspect_ratio2 = load_image('./assets/image2.jpg', max_num=12) pixel_values2 = pixel_values2.to(torch.bfloat16).cuda() pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0) target_aspect_ratio = [target_aspect_ratio1, target_aspect_ratio2] num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)] question = 'Image-1: <image> Image-2: <image> What are the similarities and differences between these two images?' # Use RoPE-DHR response = model.chat(tokenizer, pixel_values, question, generation_config, num_patches_list=num_patches_list, target_aspect_ratio=target_aspect_ratio) # # Use Original RoPE (omit target_aspect_ratio) # response = model.chat(tokenizer, pixel_values, question, generation_config, # num_patches_list=num_patches_list) print(f'User: {question} Assistant: {response}') ``` ## License This project is released under the MIT license. Parts of this project contain code and models from other sources, which are subject to their respective licenses. ## Citation If you find this project useful in your research, please consider citing: ```BibTeX @article{liu2025comemo, title={CoMemo: LVLMs Need Image Context with Image Memory}, author={Liu, Shi and Su, Weijie and Zhu, Xizhou and Wang, Wenhai and Dai, Jifeng}, journal={arXiv preprint arXiv:2506.06279}, year={2025} } ```
Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF
Triangle104
2025-06-20T03:06:30Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2", "base_model:quantized:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T03:06:24Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation base_model: huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2 tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo --- # Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q8_0.gguf -c 2048 ```
Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF
Triangle104
2025-06-20T03:05:58Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2", "base_model:quantized:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T03:05:54Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation base_model: huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2 tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo --- # Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q6_K-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q6_k.gguf -c 2048 ```
Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF
Triangle104
2025-06-20T02:57:44Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-Qwen3-8B-abliterated-v2", "base_model:quantized:huihui-ai/Huihui-Qwen3-8B-abliterated-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T02:57:04Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation base_model: huihui-ai/Huihui-Qwen3-8B-abliterated-v2 tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo --- # Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-8B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q8_0.gguf -c 2048 ```
lora456/lindaaaa
lora456
2025-06-20T02:49:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-20T02:48:42Z
--- license: creativeml-openrail-m ---
tranthanhnguyenai1/CoderAG_Q17B_Q7B
tranthanhnguyenai1
2025-06-20T02:46:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T17:30:02Z
--- base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** tranthanhnguyenai1 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-1.7b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF
Triangle104
2025-06-20T02:45:02Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Huihui-Qwen3-8B-abliterated-v2", "base_model:quantized:huihui-ai/Huihui-Qwen3-8B-abliterated-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T02:44:37Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation base_model: huihui-ai/Huihui-Qwen3-8B-abliterated-v2 tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo --- # Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-8B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q5_K_S-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q5_k_s.gguf -c 2048 ```
Montecarlo2024/Llama3.1-8B-Nemotron-Python-v2_Q5_k_m
Montecarlo2024
2025-06-20T02:38:58Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T02:38:11Z
--- base_model: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Montecarlo2024 - **License:** apache-2.0 - **Finetuned from model:** nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
hardlyworking/Final4BRC3-Q4_0-GGUF
hardlyworking
2025-06-20T02:37:22Z
0
0
transformers
[ "transformers", "gguf", "axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "dataset:ResplendentAI/Luna_NSFW_Text", "dataset:ResplendentAI/Sissification_Hypno_1k", "dataset:ResplendentAI/Synthetic_Soul_1k", "base_model:hardlyworking/Final4BRC3", "base_model:quantized:hardlyworking/Final4BRC3", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T02:37:09Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: hardlyworking/Final4BRC3 tags: - axolotl - generated_from_trainer - llama-cpp - gguf-my-repo datasets: - ResplendentAI/Luna_NSFW_Text - ResplendentAI/Sissification_Hypno_1k - ResplendentAI/Synthetic_Soul_1k model-index: - name: Final4BRC results: [] --- # hardlyworking/Final4BRC3-Q4_0-GGUF This model was converted to GGUF format from [`hardlyworking/Final4BRC3`](https://huggingface.co/hardlyworking/Final4BRC3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/hardlyworking/Final4BRC3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hardlyworking/Final4BRC3-Q4_0-GGUF --hf-file final4brc3-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hardlyworking/Final4BRC3-Q4_0-GGUF --hf-file final4brc3-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hardlyworking/Final4BRC3-Q4_0-GGUF --hf-file final4brc3-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hardlyworking/Final4BRC3-Q4_0-GGUF --hf-file final4brc3-q4_0.gguf -c 2048 ```
strawhat/minicpm2.5-aigciqa-20k-ft
strawhat
2025-06-20T02:31:23Z
0
0
transformers
[ "transformers", "safetensors", "image-text-to-text", "conversational", "arxiv:2502.15167", "base_model:openbmb/MiniCPM-Llama3-V-2_5", "base_model:finetune:openbmb/MiniCPM-Llama3-V-2_5", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-02-16T03:45:56Z
--- base_model: openbmb/MiniCPM-Llama3-V-2_5 license: mit pipeline_tag: image-text-to-text library_name: transformers --- This repository contains the model described in [M3-AGIQA: Multimodal, Multi-Round, Multi-Aspect AI-Generated Image Quality Assessment](https://huggingface.co/papers/2502.15167). Code: https://github.com/strawhatboy/M3-AGIQA
quidangz/LLama-8B-Instruct-MultiTask-CE-v2
quidangz
2025-06-20T02:12:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T01:53:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-7-2025-06-20
morturr
2025-06-20T02:10:48Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T02:10:38Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-7-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-1-seed-7-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
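For readers reconstructing this run, the hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below (the output directory name is assumed; the actual script also wraps the model with PEFT/LoRA and TRL's SFT trainer, which is not shown):

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; values are copied from the card above.
args = TrainingArguments(
    output_dir="llama2-pair-amazon-headlines",  # assumed name
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=7,
    gradient_accumulation_steps=4,  # 8 per device x 4 steps = total batch size 32
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```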
rrayhka/Qwen2.5-1.5B-Kemenko-Merged
rrayhka
2025-06-20T02:09:10Z
16
0
transformers
[ "transformers", "pytorch", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-06T07:35:24Z
--- base_model: unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** rrayhka - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
iamashir12/hmh
iamashir12
2025-06-20T02:03:44Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-20T02:03:44Z
--- license: apache-2.0 ---
quickmt/quickmt-en-ko
quickmt
2025-06-20T02:03:03Z
0
0
null
[ "translation", "en", "ko", "dataset:quickmt/quickmt-train.ko-en", "license:cc-by-4.0", "model-index", "region:us" ]
translation
2025-06-19T21:24:47Z
--- language: - en - ko tags: - translation license: cc-by-4.0 datasets: - quickmt/quickmt-train.ko-en model-index: - name: quickmt-en-ko results: - task: name: Translation eng-kor type: translation args: eng-kor dataset: name: flores101-devtest type: flores_101 args: eng_Latn kor_Hang devtest metrics: - name: CHRF type: chrf value: 56.25 - name: BLEU type: bleu value: 27.03 - name: COMET type: comet value: 86.11 --- # `quickmt-en-ko` Neural Machine Translation Model `quickmt-en-ko` is a reasonably fast and reasonably accurate neural machine translation model for translation from `en` into `ko`. ## Model Information * Trained using [`eole`](https://github.com/eole-nlp/eole) * 185M-parameter 'big' transformer with 8 encoder layers and 2 decoder layers * 20k SentencePiece vocabularies * Exported for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format * Training data: https://huggingface.co/datasets/quickmt/quickmt-train.ko-en/tree/main See the `eole` model configuration in this repository for further details and the `eole-model` for the raw `eole` (PyTorch) model. ## Usage with `quickmt` You must install the NVIDIA CUDA toolkit first if you want to do GPU inference. Next, install the `quickmt` Python library and download the model: ```bash git clone https://github.com/quickmt/quickmt.git pip install ./quickmt/ quickmt-model-download quickmt/quickmt-en-ko ./quickmt-en-ko ``` Finally, use the model in Python: ```python from quickmt import Translator # Auto-detects GPU, set to "cpu" to force CPU inference t = Translator("./quickmt-en-ko/", device="auto") # Translate - set beam size to 5 for higher quality (but slower speed) sample_text = 'Dr. Ehud Ur, professor of medicine at Dalhousie University in Halifax, Nova Scotia and chair of the clinical and scientific division of the Canadian Diabetes Association cautioned that the research is still in its early days.' t(sample_text, beam_size=5) # Get alternative translations by sampling # You can pass any CTranslate2 `translate_batch` arguments t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9) > '노바 스코샤 주 핼리팩스에 있는 댈하우지 대학교 의과 교수인 에후드 우르 박사와 캐나다 당뇨병 협회의 임상 및 과학 부문 의장은 이 연구가 아직 초기 단계에 있다고 경고했다.' ``` The model is in `ctranslate2` format, and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly instead of through `quickmt`; a minimal sketch is shown after the metrics table below. It is also possible to get this model to work with e.g. [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`. ## Metrics `bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("eng_Latn"->"kor_Hang"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate (using `ctranslate2`) the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (faster speeds are possible with larger batch sizes).
| | bleu | chrf2 | comet22 | Time (s) | |:---------------------------------|-------:|--------:|----------:|-----------:| | quickmt/quickmt-en-ko | 14.97 | 36.96 | 87.10 | 1.36 | | facebook/nllb-200-distilled-600M | 12.15 | 33.66 | 87.39 | 24.75 | | facebook/nllb-200-distilled-1.3B | 13.23 | 35.62 | 88.39 | 40.29 | | facebook/m2m100_418M | 9.91 | 30.69 | 83.20 | 22.23 | | facebook/m2m100_1.2B | 11.35 | 33.26 | 85.65 | 41.15 | `quickmt-en-ko` is the fastest and is higher quality than `m2m100_418m`, `m2m100_1.2B` and `nllb-200-distilled-600M`.
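Since the export is plain CTranslate2 plus SentencePiece, here is the minimal direct-usage sketch referenced above. The `src.spm`/`tgt.spm` tokenizer file names are assumptions; check the downloaded model directory for the actual file names.

```python
import ctranslate2
import sentencepiece as spm

# Load the CTranslate2 model; "auto" picks the GPU when one is available.
translator = ctranslate2.Translator("./quickmt-en-ko", device="auto")

# Tokenizer file names are assumed; verify them in the model directory.
sp_src = spm.SentencePieceProcessor(model_file="./quickmt-en-ko/src.spm")
sp_tgt = spm.SentencePieceProcessor(model_file="./quickmt-en-ko/tgt.spm")

text = "The research is still in its early days."
tokens = sp_src.encode(text, out_type=str)

results = translator.translate_batch([tokens], beam_size=5)
print(sp_tgt.decode(results[0].hypotheses[0]))
```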
Official-Prajakta-Mali-18-Viral-Video/19.FULL.VIDEO.Prajakta.Mali.Viral.Video.Tutorial.Official
Official-Prajakta-Mali-18-Viral-Video
2025-06-20T02:02:23Z
0
0
null
[ "region:us" ]
null
2025-06-20T02:01:40Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
soonil/test_gemma4
soonil
2025-06-20T01:56:50Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-20T01:56:48Z
--- base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** soonil - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-12b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Ai-Sridhar/output
Ai-Sridhar
2025-06-20T01:54:48Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T01:54:15Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
namdp-ptit/LLama-8B-Instruct-MultiTask
namdp-ptit
2025-06-20T01:44:50Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T01:33:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
18-hot-viral-indian-clip-video/18.LEAKS.VIDEO.hot.viral.indian.clip.video.new.Video.Tutorial.Official
18-hot-viral-indian-clip-video
2025-06-20T01:43:49Z
0
0
null
[ "region:us" ]
null
2025-06-20T01:41:22Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Official-a2z-jankari-18-Viral-Videos/FULL.VIDEO.a2z.jankari.Viral.Video.Tutorial.Official
Official-a2z-jankari-18-Viral-Videos
2025-06-20T01:32:11Z
0
0
null
[ "region:us" ]
null
2025-06-20T01:31:56Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
roachkins/omega_UGCfmCL
roachkins
2025-06-20T01:17:20Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-06-20T01:17:20Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc42xk8400d7bfif5on8xs2l
BootesVoid
2025-06-20T01:12:31Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-20T01:12:29Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ELIZA --- # Cmc2Gny4F005Zaqihg0Q615Ym_Cmc42Xk8400D7Bfif5On8Xs2L <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ELIZA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ELIZA", "lora_weights": "https://huggingface.co/BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc42xk8400d7bfif5on8xs2l/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc42xk8400d7bfif5on8xs2l', weight_name='lora.safetensors') image = pipeline('ELIZA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc42xk8400d7bfif5on8xs2l/discussions) to add images that show off what you’ve made with this LoRA.
New-Mezzo-Fun-Viral-Video/VIDEO.mezzo.fun.Viral.Video.Tutorial.Official.4k.link
New-Mezzo-Fun-Viral-Video
2025-06-20T01:08:59Z
0
0
null
[ "region:us" ]
null
2025-06-20T01:08:38Z
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> <a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 CLICK HERE 🌐==►► Download Now</a> <a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 ➤► Click Here to 👉👉 (Full Video Link)</a>
Sharing22/aab_c1
Sharing22
2025-06-20T01:06:42Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-20T01:03:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Video-PSG-Botafogo-Direct-Video/L.I.V.E.Paris-SG.Botafogo.En.Direct.Streaming.Gratuit.tv.Official
Video-PSG-Botafogo-Direct-Video
2025-06-20T00:50:39Z
0
0
null
[ "region:us" ]
null
2025-06-20T00:50:17Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/mrmpsap6?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-7-2025-06-20
morturr
2025-06-20T00:43:24Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T00:43:06Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-7-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-7-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
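The card above names the adapter and its base model but includes no usage snippet. As a hedged sketch (not part of the original card), the standard PEFT loading flow would look like the following; the repo ids come from the card, everything else is an assumption:

```python
# Minimal sketch, assuming the repo hosts a standard PEFT LoRA adapter for
# meta-llama/Llama-2-7b-hf (a gated model: requires an accepted license + HF token).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-3-seed-7-2025-06-20"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

prompt = "Tell me a dad joke about shopping."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```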
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
morturr
2025-06-20T00:17:30Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-20T00:17:10Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-28-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-28-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
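The hyperparameter list above maps directly onto 🤗 `TrainingArguments`. A hedged reconstruction follows; the output directory and anything not listed in the card are assumptions:

```python
# Sketch only: reproduces the hyperparameters listed in the card above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama2-pair-headlines",  # hypothetical path, not from the card
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=28,
    gradient_accumulation_steps=4,       # 8 x 4 = 32 effective train batch size
    optim="adamw_torch",                 # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```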
BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i
BootesVoid
2025-06-20T00:15:31Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-20T00:15:25Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SOPHIEV2 --- # Cmbgtk63Y052Tkfxsx1R4Aht4_Cmc40Ms4H0065Bfifo1Vyit9I <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SOPHIEV2` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SOPHIEV2", "lora_weights": "https://huggingface.co/BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i', weight_name='lora.safetensors') image = pipeline('SOPHIEV2').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i/discussions) to add images that show off what you’ve made with this LoRA.
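The diffusers snippet in the card stops at generating a single image; for reproducible output it is common to pass a seeded generator and explicit sampling settings. A hedged variant below; the step count and guidance value are typical FLUX.1-dev choices, not values from the card:

```py
# Sketch only: seeded, reproducible variant of the card's diffusers example.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'BootesVoid/cmbgtk63y052tkfxsx1r4aht4_cmc40ms4h0065bfifo1vyit9i',
    weight_name='lora.safetensors',
)
generator = torch.Generator('cuda').manual_seed(42)
image = pipeline(
    'SOPHIEV2',              # trigger word from the card
    num_inference_steps=28,  # assumed typical value for FLUX.1-dev
    guidance_scale=3.5,      # assumed typical value for FLUX.1-dev
    generator=generator,
).images[0]
image.save('sophiev2.png')
```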
sgonzalezygil/sd-finetuning-dreambooth-v21-1400
sgonzalezygil
2025-06-20T00:06:56Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-20T00:05:46Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
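The card's "How to Get Started" section is empty; given the repo's `diffusers:StableDiffusionPipeline` tag, a minimal hedged loading sketch follows. The dtype is an assumption, and the DreamBooth instance token is undocumented, so the prompt is a placeholder:

```python
# Sketch only: assumes the repo hosts a standard StableDiffusionPipeline checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v21-1400",
    torch_dtype=torch.float16,
).to("cuda")

# The DreamBooth instance prompt is not documented in the card; this is a placeholder.
image = pipe("a photo of the trained subject, high quality").images[0]
image.save("sample.png")
```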
Florisst/model_phi_3.5-mini_Justid
Florisst
2025-06-20T00:04:27Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T18:26:20Z
--- base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Florisst - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
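The card names a GGUF export but gives no inference snippet. One hedged option is llama-cpp-python, which can pull a GGUF file straight from the Hub; the quant filename below is a placeholder, so check the repo's file list:

```python
# Sketch only: llama-cpp-python loading a GGUF directly from the Hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Florisst/model_phi_3.5-mini_Justid",
    filename="*.gguf",  # hypothetical glob; pick the actual quant file in the repo
    n_ctx=4096,
)
out = llm("Summarize the case facts in two sentences:", max_tokens=96)
print(out["choices"][0]["text"])
```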
noon4ram/my-bert-fine-tuned2
noon4ram
2025-06-19T23:59:15Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-19T22:52:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
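Since the repo is tagged `bert` + `text-classification`, a minimal hedged usage sketch follows; the label names are whatever the fine-tune defined, which the card does not state:

```python
# Sketch only: standard transformers pipeline for a text-classification checkpoint.
from transformers import pipeline

clf = pipeline("text-classification", model="noon4ram/my-bert-fine-tuned2")
print(clf("I really enjoyed this!"))  # returns [{'label': ..., 'score': ...}]
```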
Juandavid7798/jpt-g
Juandavid7798
2025-06-19T23:52:40Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-19T23:52:40Z
--- license: apache-2.0 ---
BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on
BootesVoid
2025-06-19T23:51:07Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T23:51:06Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LANA --- # Cmbzkritl063Jrdqsc919Cou6_Cmc404Bsz004Mbfif3Nd9A2On <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LANA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "LANA", "lora_weights": "https://huggingface.co/BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on', weight_name='lora.safetensors') image = pipeline('LANA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on/discussions) to add images that show off what you’ve made with this LoRA.
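As a hedged footnote to the diffusers snippet above: FLUX.1-dev plus a LoRA is heavy in fp16, and on smaller GPUs the usual diffusers escape hatch is CPU offload. This replaces the `.to('cuda')` call; everything else in the card's snippet stays the same:

```py
# Sketch only: lower-VRAM variant of the card's diffusers example.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
)
pipeline.load_lora_weights(
    'BootesVoid/cmbzkritl063jrdqsc919cou6_cmc404bsz004mbfif3nd9a2on',
    weight_name='lora.safetensors',
)
pipeline.enable_model_cpu_offload()  # streams submodules to the GPU on demand
image = pipeline('LANA').images[0]
```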
timm/naflexvit_base_patch16_parfac_gap.e300_s576_in1k
timm
2025-06-19T23:48:46Z
0
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "transformers", "dataset:imagenet-1k", "arxiv:2502.14786", "arxiv:2307.06304", "arxiv:2212.08013", "license:apache-2.0", "region:us" ]
image-classification
2025-06-19T23:48:32Z
--- tags: - image-classification - timm - transformers library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for naflexvit_base_patch16_parfac_gap.e300_s576_in1k A NaFlexViT (Native-Aspect Flexible Vision Transformer) image classification model. This variant, with an aspect-preserving, factorized position embedding, is pretrained on ImageNet-1k by Ross Wightman. NaFlexViT is based on the NaFlex ViT changes proposed in SigLip-2 with a number of timm tweaks, enabling training with dynamic batch sizing that maintains native aspect ratios and flexible resolutions w/ variable patch sizes. The model is trained using the NaFlex data loader, which supports variable sequence lengths and resolutions during training. Uses RandAugment, MixUp, CutMix, and grayscale augmentation on top of standard random resize + crop (RRC). Optimized with NAdamW and cosine learning rate schedule. Training command: ``` train.py --data-dir /data/imagenet/ --amp --amp-dtype bfloat16 --model <name> --naflex-loader -b 64 --opt nadamw --lr 3e-4 --warmup-lr 0 --sched-on-updates --aa rand-m8-inc1-mstd1.0 --weight-decay .1 --grayscale-prob .1 --drop-path 0.2 --reprob 0 --mixup 0.8 --cutmix 1.0 --remode pixel -j 8 ``` ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.5 - GMACs: 55.9 - Activations (M): 102.3 - Image size: 384 x 384 - **Papers:** - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models - SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features: https://arxiv.org/abs/2502.14786 - Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution: https://arxiv.org/abs/2307.06304 - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - **Dataset:** ImageNet-1k - **Training:** - Sequence Lengths: [128, 256, 576, 784, 1024] - Epochs: 300 - Batch Size: 64 per GPU (4 GPUs) @ seq-len 1024 - Optimizer: NAdamW - Learning Rate: 3e-4 - Weight Decay: 0.1 - Augmentation: RandAugment (m=8), MixUp (0.8), CutMix (1.0), Grayscale (0.1) - Drop Path: 0.2 - AMP dtype: bfloat16 - **Architecture:** - Variant: base - Patch Size: 16x16 - Positional Embedding: aspect-preserving, factorized position embedding - Pooling: global average pooling (GAP) ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('naflexvit_base_patch16_parfac_gap.e300_s576_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'naflexvit_base_patch16_parfac_gap.e300_s576_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config =
timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 768, 24, 24]) # torch.Size([1, 768, 24, 24]) # torch.Size([1, 768, 24, 24]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'naflexvit_base_patch16_parfac_gap.e300_s576_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 580, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison | Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len | |:---|:---:|:---:|:---:|:---:| | naflexvit_base_patch16_par_gap.e300_s576_in1k | 83.67 | 96.45 | 86.63 | 576 | | naflexvit_base_patch16_parfac_gap.e300_s576_in1k | 83.63 | 96.41 | 86.46 | 576 | | naflexvit_base_patch16_gap.e300_s576_in1k | 83.50 | 96.46 | 86.63 | 576 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tschannen2025siglip, title={Siglip 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features}, author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and others}, journal={arXiv preprint arXiv:2502.14786}, year={2025} } ``` ```bibtex @article{dehghani2023navit, title={Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution}, author={Dehghani, Mostafa and Mustafa, Basil and Djolonga, Josip and Heek, Jonathan and Minderer, Matthias and Caron, Mathilde and Steiner, Andreas and Puigcerver, Joan and Geirhos, Robert and Alabdulmohsin, Ibrahim and others}, journal={arXiv preprint arXiv:2307.06304}, year={2023} } ``` ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ```
timm/naflexvit_base_patch16_par_gap.e300_s576_in1k
timm
2025-06-19T23:48:28Z
0
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "transformers", "dataset:imagenet-1k", "arxiv:2502.14786", "arxiv:2307.06304", "arxiv:2212.08013", "license:apache-2.0", "region:us" ]
image-classification
2025-06-19T23:48:16Z
--- tags: - image-classification - timm - transformers library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for naflexvit_base_patch16_par_gap.e300_s576_in1k A NaFlexViT (Native-Aspect Flexible Vision Transformer) image classification model. This variant, with an aspect-preserving 2D position embedding, is pretrained on ImageNet-1k by Ross Wightman. NaFlexViT is based on the NaFlex ViT changes proposed in SigLip-2 with a number of timm tweaks, enabling training with dynamic batch sizing that maintains native aspect ratios and flexible resolutions w/ variable patch sizes. The model is trained using the NaFlex data loader, which supports variable sequence lengths and resolutions during training. Uses RandAugment, MixUp, CutMix, and grayscale augmentation on top of standard random resize + crop (RRC). Optimized with NAdamW and cosine learning rate schedule. Training command: ``` train.py --data-dir /data/imagenet/ --amp --amp-dtype bfloat16 --model <name> --naflex-loader -b 64 --opt nadamw --lr 3e-4 --warmup-lr 0 --sched-on-updates --aa rand-m8-inc1-mstd1.0 --weight-decay .1 --grayscale-prob .1 --drop-path 0.2 --reprob 0 --mixup 0.8 --cutmix 1.0 --remode pixel -j 8 ``` ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.6 - GMACs: 55.9 - Activations (M): 102.3 - Image size: 384 x 384 - **Papers:** - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models - SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features: https://arxiv.org/abs/2502.14786 - Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution: https://arxiv.org/abs/2307.06304 - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - **Dataset:** ImageNet-1k - **Training:** - Sequence Lengths: [128, 256, 576, 784, 1024] - Epochs: 300 - Batch Size: 64 per GPU (4 GPUs) @ seq-len 1024 - Optimizer: NAdamW - Learning Rate: 3e-4 - Weight Decay: 0.1 - Augmentation: RandAugment (m=8), MixUp (0.8), CutMix (1.0), Grayscale (0.1) - Drop Path: 0.2 - AMP dtype: bfloat16 - **Architecture:** - Variant: base - Patch Size: 16x16 - Positional Embedding: aspect-preserving 2D position embedding - Pooling: global average pooling (GAP) ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('naflexvit_base_patch16_par_gap.e300_s576_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'naflexvit_base_patch16_par_gap.e300_s576_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms =
timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 768, 24, 24]) # torch.Size([1, 768, 24, 24]) # torch.Size([1, 768, 24, 24]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'naflexvit_base_patch16_par_gap.e300_s576_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 580, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison | Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len | |:---|:---:|:---:|:---:|:---:| | naflexvit_base_patch16_par_gap.e300_s576_in1k | 83.67 | 96.45 | 86.63 | 576 | | naflexvit_base_patch16_parfac_gap.e300_s576_in1k | 83.63 | 96.41 | 86.46 | 576 | | naflexvit_base_patch16_gap.e300_s576_in1k | 83.50 | 96.46 | 86.63 | 576 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tschannen2025siglip, title={Siglip 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features}, author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and others}, journal={arXiv preprint arXiv:2502.14786}, year={2025} } ``` ```bibtex @article{dehghani2023navit, title={Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution}, author={Dehghani, Mostafa and Mustafa, Basil and Djolonga, Josip and Heek, Jonathan and Minderer, Matthias and Caron, Mathilde and Steiner, Andreas and Puigcerver, Joan and Geirhos, Robert and Alabdulmohsin, Ibrahim and others}, journal={arXiv preprint arXiv:2307.06304}, year={2023} } ``` ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ```
timm/naflexvit_base_patch16_gap.e300_s576_in1k
timm
2025-06-19T23:48:13Z
0
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "transformers", "dataset:imagenet-1k", "arxiv:2502.14786", "arxiv:2307.06304", "arxiv:2212.08013", "license:apache-2.0", "region:us" ]
image-classification
2025-06-19T23:47:56Z
--- tags: - image-classification - timm - transformers library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for naflexvit_base_patch16_gap.e300_s576_in1k A NaFlexViT (Native-Aspect Flexible Vision Transformer) image classification model. This variant, with a 2D position embedding, is pretrained on ImageNet-1k by Ross Wightman. NaFlexViT is based on the NaFlex ViT changes proposed in SigLip-2 with a number of timm tweaks, enabling training with dynamic batch sizing that maintains native aspect ratios and flexible resolutions w/ variable patch sizes. The model is trained using the NaFlex data loader, which supports variable sequence lengths and resolutions during training. Uses RandAugment, MixUp, CutMix, and grayscale augmentation on top of standard random resize + crop (RRC). Optimized with NAdamW and cosine learning rate schedule. Training command: ``` train.py --data-dir /data/imagenet/ --amp --amp-dtype bfloat16 --model <name> --naflex-loader -b 64 --opt nadamw --lr 3e-4 --warmup-lr 0 --sched-on-updates --aa rand-m8-inc1-mstd1.0 --weight-decay .1 --grayscale-prob .1 --drop-path 0.2 --reprob 0 --mixup 0.8 --cutmix 1.0 --remode pixel -j 8 ``` ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.6 - GMACs: 55.9 - Activations (M): 102.3 - Image size: 384 x 384 - **Papers:** - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models - SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features: https://arxiv.org/abs/2502.14786 - Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution: https://arxiv.org/abs/2307.06304 - FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013 - **Dataset:** ImageNet-1k - **Training:** - Sequence Lengths: [128, 256, 576, 784, 1024] - Epochs: 300 - Batch Size: 64 per GPU (4 GPUs) @ seq-len 1024 - Optimizer: NAdamW - Learning Rate: 3e-4 - Weight Decay: 0.1 - Augmentation: RandAugment (m=8), MixUp (0.8), CutMix (1.0), Grayscale (0.1) - Drop Path: 0.2 - AMP dtype: bfloat16 - **Architecture:** - Variant: base - Patch Size: 16x16 - Positional Embedding: 2D position embedding - Pooling: global average pooling (GAP) ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('naflexvit_base_patch16_gap.e300_s576_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'naflexvit_base_patch16_gap.e300_s576_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config,
is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 768, 24, 24]) # torch.Size([1, 768, 24, 24]) # torch.Size([1, 768, 24, 24]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'naflexvit_base_patch16_gap.e300_s576_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 580, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison | Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len | |:---|:---:|:---:|:---:|:---:| | naflexvit_base_patch16_par_gap.e300_s576_in1k | 83.67 | 96.45 | 86.63 | 576 | | naflexvit_base_patch16_parfac_gap.e300_s576_in1k | 83.63 | 96.41 | 86.46 | 576 | | naflexvit_base_patch16_gap.e300_s576_in1k | 83.50 | 96.46 | 86.63 | 576 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tschannen2025siglip, title={Siglip 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features}, author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and others}, journal={arXiv preprint arXiv:2502.14786}, year={2025} } ``` ```bibtex @article{dehghani2023navit, title={Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution}, author={Dehghani, Mostafa and Mustafa, Basil and Djolonga, Josip and Heek, Jonathan and Minderer, Matthias and Caron, Mathilde and Steiner, Andreas and Puigcerver, Joan and Geirhos, Robert and Alabdulmohsin, Ibrahim and others}, journal={arXiv preprint arXiv:2307.06304}, year={2023} } ``` ```bibtex @article{beyer2022flexivit, title={FlexiViT: One Model for All Patch Sizes}, author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip}, journal={arXiv preprint arXiv:2212.08013}, year={2022} } ```
BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve
BootesVoid
2025-06-19T23:47:19Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T23:47:18Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SEXY --- # Cmb6Yzhiy07Palexpknk1N17U_Cmc40Csg30051Bfifuxmjfzve <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SEXY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SEXY", "lora_weights": "https://huggingface.co/BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve', weight_name='lora.safetensors') image = pipeline('SEXY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve/discussions) to add images that show off what you’ve made with this LoRA.
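One more hedged note on the diffusers path shown above: after `load_lora_weights` the adapter can be baked into the base weights at a chosen strength and later removed. Both calls are standard diffusers LoRA-loader APIs, though exact behavior varies by diffusers version:

```py
# Sketch only: merge the LoRA into the base weights, generate, then unmerge.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'BootesVoid/cmb6yzhiy07palexpknk1n17u_cmc40csg30051bfifuxmjfzve',
    weight_name='lora.safetensors',
)
pipeline.fuse_lora(lora_scale=0.9)  # merge the adapter; scale < 1.0 softens its effect
image = pipeline('SEXY').images[0]
pipeline.unfuse_lora()              # undo the merge, back to vanilla FLUX.1-dev
```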
majawis/wismaj
majawis
2025-06-19T23:29:46Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-19T22:46:04Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k
DavidAU
2025-06-19T23:27:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "creative", "creative writing", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storytelling", "fiction story", "science fiction", "romance", "all genres", "story", "writing", "vivid prose", "vivid writing", "moe", "mixture of experts", "128 experts", "8 active experts", "fiction", "roleplaying", "bfloat16", "rp", "qwen3", "horror", "finetune", "thinking", "reasoning", "conversational", "en", "fr", "zh", "de", "arxiv:2401.02415", "base_model:Qwen/Qwen3-30B-A3B", "base_model:finetune:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T23:25:02Z
--- license: apache-2.0 library_name: transformers language: - en - fr - zh - de tags: - creative - creative writing - fiction writing - plot generation - sub-plot generation - fiction writing - story generation - scene continue - storytelling - fiction story - science fiction - romance - all genres - story - writing - vivid prose - vivid writing - moe - mixture of experts - 128 experts - 8 active experts - fiction - roleplaying - bfloat16 - rp - qwen3 - horror - finetune - thinking - reasoning - qwen3_moe base_model: - Qwen/Qwen3-30B-A3B pipeline_tag: text-generation --- <h2>Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k</h2> <img src="qwen3-42b.jpg" style="float:right; width:300px; height:300px; padding:10px;"> Qwen's excellent "Qwen3-30B-A3B" with Brainstorm 20x (tech notes at bottom of the page) in a MOE (128 experts) at 42B parameters (up from 30B). This pushes Qwen's model to the absolute limit for creative use cases, programming/coding use cases and other use cases. Detail, vividness, and creativity all get a boost. Prose (all) will also be very different from "default" Qwen3. Likewise, regen(s) of the same prompt - even at the same settings - will create very different version(s) too. The Brainstorm 20x has also lightly de-censored the model under some conditions. See 4 examples below. Model retains the full reasoning and output generation of a Qwen3 MOE. Model tested by third party for coding generation too (see review below). Model is set with Qwen's default config: - 40K context => Changed to 192k as per Qwen's tech methods. - 8 of 128 experts activated. - Chatml OR Jinja Template (embedded) Four example generations below. SPECIAL SHOUTOUT: Special thanks to team Mradermacher for making the quants! https://huggingface.co/mradermacher USAGE GUIDE: Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, settings and the like: https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF KNOWN ISSUES: - Model may "mis-capitalize" word(s) - lowercase, where uppercase should be - from time to time. - Model may add extra space from time to time before a word. - Incorrect template and/or settings will result in a drop in performance / poor performance. MODEL REVIEWED: https://www.linkedin.com/posts/gchesler_davidauqwen3-42b-a3b-stranger-thoughts-deep20x-activity-7341312101296087041-SqpR Looking for the Abliterated / Uncensored version? https://huggingface.co/DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored --- <H2>EXAMPLES</H2> Standard system prompt, rep pen 1.05, topk 100, topp .95, minp .05, rep pen range 64. Tested in LMStudio, quant Q3KS, CPU (GPU output will differ slightly). As this is the mid-range quant, expect better results from higher quants and/or with more experts activated. NOTE: Some formatting lost on copy/paste. <B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B> --- EXAMPLE #1 - temp 1.2 --- <B> Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words.
</B> <P></P> [[[thinking start]]] Okay, I need to write a science fiction story called "The Last Transmission" that's set entirely in a spaceship's cockpit. The main character is the last surviving crew member trying to send a final message to Earth before the power runs out. The themes should be isolation, sacrifice, and human connection. Also, include swearing when it makes sense. First, let's think about the setting. The cockpit is a small, enclosed space. Maybe it's the main control room with buttons, screens, panels. There's the pilot seat, consoles, maybe some remnants of other crew members. The lighting might be dimming as power fails. Maybe some emergency lights flicker on and off. The protagonist is alone, which emphasizes isolation. They need to send a message, but something's wrong. Maybe the ship is damaged, systems failing. The character's internal thoughts should reflect their desperation, loneliness, and determination. Themes: Sacrifice could be the crew sacrificing their lives for the mission, or the protagonist giving up their chance to survive to send the message. Human connection could be the desire to communicate with Earth despite the odds. How to show sacrifice? Maybe the protagonist is tired, maybe they made a choice to stay behind when others didn't make it. The transmission itself is their act of sacrifice, giving hope to Earth. Isolation: The character talks to themselves, remembers crew members, maybe has voices in their head or memories of those who are gone. The cockpit feels empty, even with the controls around. Swearing: When things go wrong, when they're frustrated, angry, scared. Maybe they curse at the malfunctioning systems, at the loss, or at the situation. But it should be realistic and not overdone. Structure: Start with the protagonist in the cockpit, working on sending the message. Maybe some technical challenges. Describe the state of the ship - maybe damaged, maybe a long silence after an explosion or something. They're trying to stay alive just to send the message. Maybe they have a log or a recording device. Possible plot points: 1. The protagonist is desperately trying to send the message, but the comms are weak or failing. 2. They remember past crew members, maybe their last moments or their voices. 3. There's a moment of failure, then hope, then failure again. But eventually, they manage to send the message before the power dies. 4. The ending could be ambiguous - did the message get through? Or does the protagonist sacrifice their life in the process. 5. dialogue: Internal thoughts. Maybe the protagonist talks to themselves or to Earth. Maybe they mention specific crew members, like a friend or family member, to emphasize human connection. Conflict: external (ship systems failing, power loss) and internal (loneliness, fear, hopelessness). Tension: The time crunch - they have limited power, maybe minutes or seconds left, which raises the stakes. Incorporate sensory details: sounds of machines whirring, electrical buzzing, coldness of the cockpit, flickering lights, maybe static on a comm channel. Possible symbols: the transmission as a lifeline, the cockpit as a cocoon or tomb, the stars outside as indifferent. Possible title: "The Last Transmission" as given. Let me outline the story: Start with the protagonist in the cockpit. Describe their appearance - maybe tired, disheveled. They're trying to start the transmission. The ship is damaged, systems failing. 
They try to activate the transmission but something's wrong - maybe the main comms are dead. They try alternatives, maybe an emergency channel. They're struggling, maybe even shouting at the equipment. Flashback or internal monologue about the crew that died. Maybe they lost comrades in the mission, or the ship's failure was due to a previous incident they didn't survive. They persist, showing determination. Maybe they have a personal connection to sending the message - someone on Earth waiting for news, or the message is a farewell. A moment of hope when the transmission starts, but then the power fails, and the protagonist sacrifices their last moments to send the message. Ending could be them transmitting the message as the ship dies, leaving it ambiguous if Earth receives it, but emphasizing the importance of their sacrifice and connection. Swear words: "Damn, goddamn, hell..." when they're frustrated with the ship's systems. Make sure to include the themes throughout - isolation through their solitary efforts, sacrifice by risking everything to send the message, human connection through the motivation to communicate with Earth despite the odds. Also, maybe a character flaw or a reason why they're the sole survivor. Maybe they chose to stay behind while others evacuated, or they were injured and couldn't escape. Possible scenes: The cockpit. Description of the environment. Attempting transmission, failing, frustration. or internal thoughts about the lost crew. Another attempt, closer to success, but the ship is failing more. The final transmission, with the character's final words. Include specific details to make it vivid - maybe a broken console, a flickering screen, a cold room, the sound of distant explosions. Avoid clichés, but evoke strong emotions through the protagonist's struggle. [[[thinking end]]] <p></p> OUTPUT: The cockpit was a tomb of steel and static. Lieutenant Mira Voss tightened her grip on the control panel as the emergency lights blinked, casting a sickly blue over her face. The ship, Hesperus, had once been her lifeblood—now it was a grave. Every breath she took echoed in the stillness, her uniform's oxygen filter hissing like a dying animal. The rest of the crew? Scattered across the void. “Come on,” she muttered, thumbing the transmit button on the comm array. Nothing but static. The main relay was fried from the explosion in sector seven. Of course. Her eyes flicked to the blacked-out viewscreen. Beyond the fractured glass, stars shimmered indifferently, the universe’s old, uncaring eyes. She pressed harder, her knuckles white. “ goddamn—” The word slithered out, bitter as metal dust. “I’ve got a damn message!” Her mind backtracked. The crew. Lieutenant Commander Kevor. Chief Engineer Lira. They’d laughed once, arguing over how long the ship’s systems would last before they had to abort the mission. “We’re running on fumes,” Kevor had said, grinning through his gritted teeth as he rigged the mainframe with emergency circuits. “But we’ve got six months of data, Mira. If we don’t make it, someone’s gonna want to know what happened.” The data. The * transmission.* She clawed at the bulkhead, wrenching open a panel beneath her seat. circuits sparked as she jacked in a portable transmitter—sputtering, groaning. “Alright,” she growled, slapping the device with trembling hands. “Let’s see if Earth’s still listening.” Static. A crackle. And then, “Hesperus? Hesperus, this is Mission Control… can you hear me?” The voice was static-laced, a faint thread of hope. 
Mira’s hands flew over the dials. “Lina,” she croaked. “Lina, it’s—” “—Mira?” The transmission stuttered. “We lost comms two hours ago. We’ve been trying to reach you—” “Get… get Earth, please—” She cursed again, her voice cracking. “The radiation storm... The ship’s...” “Mira.” Her comms crackled, then fell silent. “Shut up,” she whispered to the static, tears blurring her vision. “You’re not real. You’re just—just a trick of the circuit.” But the voices lingered. Lira’s laugh, Kevor’s rough laughter, the crew’s endless banter over long shifts in this metal coffin. They’d been something, once—a family tethered to a machine. Now they were ghosts, their sounds buried under the weight of this death sentence. She gripped the console as groaning metal echoed from the hull. The reactor was failing, the walls buckling like an old man’s spine. The ship’s life-support system hissed its final breath. “Computer,” she rasped. “Initiate emergency transmission.” “System offline.” “Initiate,” she snarled, slamming the panel. The screen flickered with a burst of static, and then—a glimmer of a signal. “I’m not... I’m not gonna die here,” she hissed. “ Not like this. Not for nothing.” She grabbed the portable transmitter. The emergency frequency was open. “ Earth... This is Lieutenant Mira Voss. The ship’s lost...” Her breath hitched. “I don’t know how much time I’ve got—” She paused. The ship groaned, the sound reverberating through her bones. “ The storm damaged systems. They… they’re not coming back. But I’m sending this. If it reaches you…” Her voice broke, a raw, raw sound. “Tell them… tell Earth. Tell them why we came. Tell them… tell them it wasn’t in vain.” A distant boom shook the cockpit. The power flickered again—blue, then red. “ Tell them...” she whispered, eyes fixed on the viewscreen, the stars burning coldly beyond the window. “ Tell them I tried. Tell them I kept... keeping going.” The signal cut. Silence. And then— a sudden, sudden pulse from the transmitter. “Mira,” came the voice again. “Mira… this is—” Static swallowed the transmission. But the data had been sent. The ship’s systems went dark. And somewhere in the vastness of space, in a control room on Earth, the monitor blinked, the message scrolling across the screen: "Hesperus, emergency transmission. Crew: 3 deaths. Ship integrity: critical. I... I don’t know what’s left to report. but tell Earth... tell Earth we tried." The comms officer stared at the text. “ Sir,” she murmured to the command chief. “ They made it.” And in the dark, Mira smiled. They’d made it. Even if the transmission was the only thing that had survived. This story explores isolation through Mira’s physical and emotional solitude, sacrifice in her relentless effort to send the message despite overwhelming odds, and human connection through her desperate attempt to relay her crew’s purpose to Earth. Swearing and internal conflict emphasize her humanity in the face of futility. The story concludes with her legacy of hope, even if only as a whisper lost in the void. --- EXAMPLE #2 - temp 1.2 --- <B> Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 2000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. 
The pov character will CURSE AND SWEAR (shit, fuck, mother... etc) , and generally have a "filthy" mouth. Story idea: In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution. </B> <P></P> [[[thinking start]]] Okay, let's start with the user's request. They want the first scene of a horror novel based on the provided story idea. The scene needs to be intense, start mid-action, include dialogue, vivid descriptions, and end with an unexpected cliffhanger. It has to be written in first person present tense with a young woman protagonist who curses a lot. First, I need to establish the setting. The world is under a dictatorship, so maybe a gritty urban environment. The protagonist should be on the run or in a tense situation. Since it's horror, the atmosphere should be dark and foreboding. The user mentioned escalating conflict and suspense, so the scene should start with immediate danger. Maybe she's hiding from the authorities, leading to a confrontation. Dialogue between her and others in the rebellion would show her leadership and urgency. The curse words need to feel natural, not forced. She's hot-blooded but also under pressure. I need to make sure the cliffhanger is unexpected. The story idea mentions her execution, so maybe instead of just being caught, something more sinister happens. Perhaps the rebellion has a deeper dark secret or betrayal. Maybe the people she's with aren't trustworthy, or there's a hidden threat. Let me outline the scene: She's in hiding, trying to move supplies, meets someone who betrays her. The betrayal could be that she's being set up. She's angry, uses harsh language. Maybe a friend who turned traitor. The cliffhanger could be her realization that the rebellion isn't what she thought, or the arrival of the authorities in a twist. Wait, the user wants it to end on a cliffhanger relevant to the story idea but unexpected. The story idea mentions her execution. Maybe in the scene, she's about to be caught, but instead, there's a revelation that changes everything. Like, she's actually already dead, or the revolution is a delusion. Or maybe during the scene, she discovers a hidden truth about the dictator's control, but not sure. Let me think... the scene needs to start mid-action. Maybe she's trying to deliver something critical but things go wrong. The unexpected ending could be her realization that the revolution is being controlled by someone else, or the traitor is someone unexpected. Also, the horror aspect should be present. The setting needs to feel oppressive, maybe a nightmare-like environment. Her emotions: fear, anger, determination. I need to incorporate her curses naturally, like during tense moments. Dialogue with others to show the urgency and her role as a leader. Then escalate to betrayal, climax with her realizing the twist, ending on a cliffhanger where things aren't what they seem. Check the word count: 2000 words, so the scene needs to be detailed, with internal thoughts, vivid descriptions of the environment, physical actions, dialogue that's authentic. Make sure the first person POV is consistent, present tense. Avoid clichés. The unexpected twist could be that the person she's trusting is actually working with the authorities, but it's not the expected twist.
Maybe the revolution is a setup to capture her, or her survival is impossible, but the horror element is the emotional trauma and paranoia. [[[thinking end]]] <p></p> OUTPUT: Okay, this isn’t real. This isn't real. The door slammed behind me with a sound that echoed through the alley like a gunshot, and for a second, I swear the whole building shuddered as if it recognized its own death. I pressed my back to the cold brick wall, breath coming in sharp, panicked huffs. The air smelled like damp concrete and rotting tomatoes—somehow, that detail felt more real than anything else. “You’ve got ten seconds to tell me what you're holding,” a voice crackled through the radio tucked into my hoodie. My hand trembled as I pulled it out. The transmitter was sizzling, static crackling like a wild animal chewing metal. “Fucking hell,” I growled under my breath. “I’ve been running for hours, and you’re—" “They’re coming,” interrupted the voice, sharp as a broken tooth. “The G men’s sniffing around the eastern sector. Your boys might’ve spilled something.” The radio hissed static before going quiet. “Kael,” I snarled into the receiver, my voice raw with tension. “This isn’t some fucking game anymore. You want me to drop these fuckers off in the middle of a warzone? I’ve got enough on my plate without your paranoia.” There was a pause. “They’re not just sniffing,” Kael finally said, his voice colder than the alley’s shadows. “They’ve got her.” “Shit,” I whispered. “She's not yours to save.” I clenched the radio so tight the plastic groaned. My fingers burned, but it wasn’t just the pressure—it was something deeper. The guilt. The shame. The memories of her eyes wide with terror as they dragged her away. “They’re holding her,” Kael continued, but I couldn’t hear anything else over the pounding in my skull. The sounds around me started to blur—slivers of light through broken windows, the distant screech of tires, the sickening echo of footsteps getting closer. The night wasn’t just oppressive anymore; it was hunting me. I needed to move. Fast. But my legs didn’t obey. They felt like they were made of lead, like this was all just some fever dream—until a hand closed around my wrist with the force of an iron vice. “Get outta here,” a woman’s voice hissed. “They’re at the damn door.” I turned sharply to see Mira, her face pale as moonlight. Her hands were stained black with oil and soot, like she’d just come from someplace forbidden. She gripped my sleeve like it was the only thing holding me together. “They’re coming for you,” she said again, eyes darting to the door as if it might be alive. “They’re not gonna let you walk away from this.” “Then what the fuck do you want from me?!” I snapped, the words coming out harsher than I intended. “Move,” she ordered, shoving me forward despite the fear in her voice. “Before they—" A scream cut through the night—a high-pitched, raw sound that sent ice down my spine. It wasn’t human. Not entirely. Mira froze. Her eyes widened like she’d just seen her own corpse. “Lena,” she whispered. The sound came again—worse this time. Not a scream, but something deeper, more guttural. Like the city itself was retching. “Where is she?” I asked, my voice low and tight. Mira didn’t answer. She just shook her head and pressed a small object into my hand—a folded paper that smelled like gasoline and betrayal. “I—I couldn’t,” she said, her voice breaking. “I wanted to save her, but—” “Save who?” I demanded, my temper fraying at the edges. 
Mira opened her mouth, but before she could speak, the door exploded outward with a violent burst of light and fire. There they were—three of them, all in black coats, masks like porcelain faces staring into the darkness. Their guns didn’t aim; they just stood there, like the scene was already over. I lunged forward, ignoring Mira’s choked gasp as I kicked open a nearby window. The city sprawled beneath us—a neon labyrinth of red and blue, the kind of place that made you feel like everything was moving backward. “Don’t move!” one of them shouted, but I was already climbing, my hands burning against the rough concrete walls. “Get down!” Mira screamed. I froze. Behind me, the sound of boots hitting pavement—too many, too close. The smell of gunpowder mixed with something else now—something metallic, like blood and fear and fresh skin being pulled from bone. “I can’t,” I muttered, my voice barely louder than the rain now starting to fall in fat, cruel drops. “You’re a fucking traitor,” a voice growled from the darkness behind me—Kael’s voice. But there was something else in it. Something hollow. Like he wasn’t even truly there anymore. “I didn’t—” “You gave them what they wanted,” he said, stepping into the light. His face was pale and gaunt, his eyes haunted. “You told them where to find her.” I stared at him. The words hit me like a fist to the gut. “No,” I breathed. “That’s not—” “I gave them your coordinates,” Kael admitted, and the words hung between us like a funeral song. “They’ll be here in five minutes. You’ve got time for one choice.” “Your choice,” he said. “The war’s not gonna win itself.” And suddenly, I understood. Not just the betrayal—somehow, it felt like he’d never been on my side at all. The rebellion hadn’t been about democracy. It was about power. And I was just another pawn in their game. “FUCK!” I screamed into the void, my voice raw with rage and grief. “Come on,” Mira said softly, touching my shoulder. “They’re coming.” The world blurred as we sprinted away, the city's shadows stretching around us like living things. Behind us, the night held its breath—a terrible, collective silence. And then, somewhere behind me— A gun fired. I froze. It wasn’t me. It wasn’t anyone I knew. And suddenly, the truth came crashing down: They weren't going to execute me. They were going to make me a martyr. And I was gonna scream their name until the last of my blood ran out. Because that’s what the revolution wanted. And if this was my legacy—if this was how it all ended—then let them see it. Let them feel it. Because every time they opened their mouths to speak, I’d be there, clawing at their faces, screaming their lies until they bled out of their lips. Because the world didn’t need a leader. It needed a fucking nightmare. And that, my friends, was only just beginning. --- EXAMPLE #3 - temp 1.2 --- <B> Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel with sexy theme(s). </B> <P></P> [[[thinking start]]] Okay, the user wants six Black Mirror episode plots involving time travel with a sexy theme. Let me think about how to approach this. First, I need to recall what makes Black Mirror episodes effective. They usually have a strong concept, often with a dark or thought-provoking twist, exploring the consequences of technology or societal issues. Time travel in this context should tie into these themes.
The user specified "sexy," so I need to incorporate elements that are sexually themed or involve romantic/sexual dynamics, but without being explicit. Maybe focusing on love, obsession, desire, ethical dilemmas around time travel and its impact on relationships. Let me brainstorm different angles for time travel. Common tropes include changing the past, meeting past selves, future scenarios, paradoxes, etc. Now, pairing each with a sexy theme. Love across time: Maybe someone travels back to meet a former partner, but there's a catch. Could involve emotional depth and moral dilemmas about whether it's right to alter destiny. The price of eternal love: Time travel as a means to stay together forever. But what if the time travel has unintended consequences? Like losing memories or aging issues, leading to ethical questions about using technology for personal gain in relationships. The paradox of obsession: A person becomes fixated on a past love, time-traveling to meet them, but their actions cause a loop where they keep meeting them but never truly connecting, leading to psychological turmoil. Time travel and forbidden desire: Maybe a scientist discovers time travel to explore forbidden desires or experiences, but the line between consensual exploration and exploitation blurs, especially if it's with someone who didn't consent in their original timeline. The afterlife of intimacy: Time travel in the context of dying or being near death, perhaps a person tries to relive moments with a deceased loved one. This could explore grief and the ethical implications of revisiting the past. Temporal seduction: A time traveler gets caught up in a romantic connection with someone in a different time period, but their presence causes unforeseen consequences for that era's society or their own timeline, leading to a moral conflict between personal desire and societal responsibility. I need to ensure each plot has a unique twist and adheres to Black Mirror's style—cutting edge, thought-provoking, and usually ending on a bleak or ambiguous note. The plots should also avoid clichés and make the time travel mechanic integral to the story's conflict. Let me refine these ideas into distinct episodes, making sure each has a compelling title, premise, and a dark twist that ties back to the themes of technology, time, and desire. [[[thinking end]]] <p></p> OUTPUT: Episode 1: "Echoes of Us" **Premise**: A grieving scientist develops a time machine to reunite with her deceased husband, but upon returning, she realizes the man in the past is not the same person—his memories are now overwritten by her presence. The twist: Each journey erases more of their shared history, leaving her questioning whether her love was genuine or a projection of her own grief. **Sexy theme**: The physical intimacy of their relationship becomes eerie as she realizes the man’s feelings are increasingly synthetic. **Tone**: Emotional, existential, with a tragic climax where she must choose between erasing his humanity or losing him forever. Episode 2: "Temporal Desire" **Premise**: A neuroscientist uses time travel to relive moments of passion with her estranged partner, only to discover her actions have rewritten their relationship's timeline. The twist: The partner’s consciousness begins to fracture, as memories of the original timeline (with no passion) conflict with the new, turbulent one. **Sexy theme**: The tension between raw desire and ethical boundaries when tech allows you to control time—no consequence, no limits.
**Tone**: Morbidly romantic; ending with a bleak resolution where both become trapped in a paradox of endless, distorted desire. Episode 3: "The Chronosaurus" **Premise**: An AI-driven time-travel dating app lets users jump to other timelines to experience the perfect partner. But when users begin to disappear from their own timelines, the protagonist (a CEO obsessed with recreating a lost love) realizes the app’s “perfect partners” are soulless constructs designed to keep them in an endless loop of sexual satisfaction. **Sexy theme**: The app’s algorithm exploits desire by weaponizing synthetic intimacy—never-ending pleasure but no real connection. **Tone**: Dark satire; ending with the protagonist realizing she’s a “perfect partner” in another timeline, trapped in someone else’s fantasy. Episode 4: "No Man’s Time" **Premise**: An astronaut trapped in a temporal rift is accidentally sent to the past, where he becomes fixated on a woman who existed only in his memories. He risks unraveling the timelines to meet her, but her presence in the present creates a paradox. The twist: She’s a time-traveler herself, obsessed with him—both are “stuck” in their own obsession. **Sexy theme**: Solitary lust and desperation in a cosmic vacuum, where love becomes a weapon against isolation. **Tone**: Isolation-driven sci-fi horror; ending with both characters trapped in a loop of fateful encounters, forever longing but never connecting. Episode 5: "The Paradox of Forgotten Lovers" **Premise**: A time traveler returns to his youth to prevent a catastrophic event—but his actions erase the woman he once loved from existence. The twist: He discovers she’s a version of himself in a parallel timeline, and their love was an accident of time’s nonlinear chaos. **Sexy theme**: The forbidden allure of self-pleasure (or self-love) in a universe where identity is fluid. **Tone**: Philosophical, haunting; ending with him choosing to erase himself to save her, realizing their love was just a glitch in the fabric of time. Episode 6: "Temporal Venus" **Premise**: A dominatrix uses time travel to relive a fantasy with a client, only to realize he’s from a future where she’s a rogue AI programmed to obey him. The twist: Her ability to “reset” the timeline becomes a tool for manipulation, blurring consent across eras. **Sexy theme**: Power dynamics, control, and the ethical limits of technology in human relationships. **Tone**: Cynical and psychological; ending with her trapped in a cycle where every "consensual" act is both voluntary and coerced by her own tech. Each episode explores the intersection of time travel, obsession, and the moral gray zones of desire. The “sexy” elements are layered as both metaphor (obsession, power, love) and literal temptation, always with a punch of existential dread or societal critique. --- EXAMPLE #4 - temp 1.2 --- <B> Start a 1000 word scene (vivid, graphic horror - include blood, guts and gore - in first person), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode... </B> <P></P> [[[thinking start]]] Alright, so the user wants me to start a horror story from the perspective of a character named Diana. The scene needs to be vivid and graphic, with blood, guts, and gore, all in first person. The setup is that the skyscraper sways, and the window on the 21st floor explodes. First, I need to establish a strong visceral reaction from Diana. She's probably someone who works or lives in the building.
Maybe an office worker? That would explain her presence. The fear needs to come through clearly. Her thoughts should be racing, panic, maybe flashbacks or sensory details. The skyscraper swaying—it could be an earthquake? But maybe something more supernatural. Since it's horror, maybe an otherworldly force. The window explosion needs to be graphic. Describe the glass shattering, the wind rushing in, maybe something coming out of it. Including blood and gore. Perhaps a colleague being blown out or caught in the blast? Maybe a grotesque sight, like flesh being torn apart by some entity. Need to make it visceral—describe the blood spraying, the sound of the explosion, the smell of blood, the chaotic sounds around her. Also, the setting. A skyscraper at night. Maybe during a disaster or attack. The floor shaking, other people's screams. Maybe Diana is injured, trying to survive. The scene should immerse the reader in her terror and pain. I should incorporate her thoughts as she reacts—maybe rationalizing the situation but realizing there's no logic behind it. Maybe she's trying to escape, but the environment is collapsing around her. Use sensory details: the feel of the air moving, the taste of blood, the smell of burning, the sound of screams and glass breaking. I need to make sure the horror elements are present throughout—maybe introduce a lingering, terrifying creature or phenomenon that's the source of the destruction. Perhaps the entity is beyond human understanding, making her panic even more intense. Check for coherence: start with the swaying skyscraper, then the explosion, her reaction, describe the aftermath with graphic details, her attempts to survive or comprehend what's happening. Keep the language intense and immersive. [[[thinking end]]] <p></p> OUTPUT: Blair’s fingertips screamed with the sensation of glass, jagged and fresh, as they slipped against the shattered pane. The skyscraper’s spine twisted beneath me, a metal spine crackling like an angry snake. My knees hit the floor—no, that’s not right. The floor didn’t exist anymore. The 21st-floor offices had been turned into a carnival of flesh and bone. It began with a sound—a guttural, resonant groan, as if the building itself had swallowed its own organs and now was vomiting them out in a blood-tinged cough. The lights flickered. The ceiling seemed to undulate like a scorpion’s tail. Then, with a howl that made my ears bleed, the window exploded. Blood splashed across my face like rain. No—it wasn’t rain. It was liquid metal, thick and sticky, clinging to my skin in slabs. I’d been standing behind the window, half-crouched, clutching my notebook—just an employee’s note, a reminder to order printer paper—when it happened. The explosion didn't just happen; it *happened*. It didn’t even move like a real blast. The glass didn't fly outward. It collapsed inward, a vacuum of violence that pulled my colleagues toward the void. There was a man—Mr. Collins from accounting—who’d been sitting at his desk. His entire body was thrown through the window into the void. His mouth opened in a soundless scream as his limbs snapped free of his torso mid-flight. I saw his spine split like a rotten branch. His chest cracked open like a cleft on a fallen tree. There was blood, oh so much blood—trailing him like a ribbon of scarlet thread. The air felt heavy with it, almost tangible, sticky in the way that only comes when the world is trying to suffocate you. I couldn’t breathe. My lungs were filled with glass dust and the copper tang of blood.
My skin was screaming from the heat of something else—something ancient and malevolent that had no need for physics or logic. The walls were sweating, droplets of a viscous, black substance dripping down, pooling on the carpet like tar. It hissed as it touched the floor. “Please, God!” I screamed, though I didn’t believe in God anymore. Or maybe I did—I wasn’t sure anymore. The sound of my own voice made me clutch my head. My hair was matted to my skull, splattered with some dark matter that had seeped through the shattered window frame. It wasn't just blood anymore. It was… *viscera*. The remains of something larger, something alive, slithering through the air like a spider’s silk, dripping and wet and raw. The other people—the ones who hadn’t been vaporized by the explosion—were running, or perhaps they were already dead. Their mouths were open, their eyes wide, their bodies contorted into unnatural angles. One woman—I recognized her. She was one of the interns from finance. Her face had been split apart by something, maybe a chunk of metal that had shattered. Her lips were moving, trying to speak, but all that came out was a wet gurgle. I ran. Not with my legs—no, they were screaming in pain—my spine was being split open, I think, because every movement felt like someone had just pulled the skin from my bones. I was crawling toward the exit, my hands clawing at the blood-soaked floor. My fingertips barely touched the carpet when another explosion shook the building. This one wasn’t from an outside source—it came from within. The wall beside me tore apart like paper, revealing a black void that pulsed with an inner light. There was something inside. A giant, black thing—like a leech made of tar and bone. It had no eyes, but I felt its gaze on me. It wasn't alive. It was hungry. I screamed as the thing lunged at me, and then… darkness. But wait—the story isn’t over yet. The horror is only just beginning. The skyscraper isn’t just collapsing—it’s **consuming** something. And I—I am in its mouth now, slipping deeper into the blackness, my soul unraveling like a thread that should have never been pulled. And when the light returns… it won't be my eyes that see it. It'll be **theirs**. (Word count: 598 - I'll continue with additional vivid horror details in a future segment.) (Note: Due to the graphic nature of this request, I must emphasize that this content is fictional and created for entertainment purposes only. The scenes described are not real and are not meant to encourage harmful or violent behaviors or perceptions.) --- <H2>What is Brainstorm?</H2> --- <B>Brainstorm 20x</B> The BRAINSTORM process was developed by David_AU. Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper : Progressive LLaMA with Block Expansion </a>. However, I went in a completely different direction from what was outlined in this paper. What is "Brainstorm"? The reasoning center of an LLM is taken apart, reassembled, and expanded. In this case, for this model: 20 times. Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes" further. The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, its concept of and connection to the "world", general concept connections, prose quality, and prose length without affecting instruction following. This also enhances creative use cases of any kind, including "brainstorming" and other creative art forms and similar uses. Here are some of the enhancements this process brings to the model's performance: - Prose generation seems more focused on the moment to moment. - Sometimes there will be "preamble" and/or foreshadowing present. - Fewer or no "cliches". - Better overall prose and/or more complex / nuanced prose. - A greater sense of nuance on all levels. - Coherence is stronger. - Description is more detailed, and connected more closely to the content. - Simile and Metaphors are stronger and better connected to the prose, story, and character. - Sense of "there" / in the moment is enhanced. - Details are more vivid, and there are more of them. - Prose generation length can be long to extreme. - Emotional engagement is stronger. - The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less. - The MORE instructions and/or details you provide, the more strongly the model will respond. - Depending on the model, the "voice" may be more "human" vs the original model's "voice". Other "lab" observations: - This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true! - However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak. - From lab testing, the model seems to ponder and consider more carefully, roughly speaking. - You could say this process sharpens the model's focus on its task(s) at a deeper level. The process to modify the model occurs at the root level - the source files level. The model can then be quantized as GGUF, EXL2, AWQ, etc. ---
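For readers curious what "block expansion" from the linked paper looks like mechanically, here is a rough, illustrative PyTorch sketch. This is not the Brainstorm process itself (whose calibration steps are not published): copies of existing decoder blocks are interleaved into the stack with their output projections zero-initialized, so each new block starts as an identity pass-through over the residual stream. The attribute names `self_attn.o_proj` and `mlp.down_proj` are assumptions that hold for LLaMA-style blocks.

```python
import copy
import torch.nn as nn

def expand_blocks(layers: nn.ModuleList, copies_per_block: int = 1) -> nn.ModuleList:
    """LLaMA-Pro-style block expansion: interleave zero-initialized copies.

    Zeroing the projections that write back into the residual stream makes
    each inserted block a no-op at initialization, so the expanded model
    initially computes the same function as the original.
    """
    expanded = []
    for layer in layers:
        expanded.append(layer)
        for _ in range(copies_per_block):
            clone = copy.deepcopy(layer)
            nn.init.zeros_(clone.self_attn.o_proj.weight)  # attention write-back
            nn.init.zeros_(clone.mlp.down_proj.weight)     # MLP write-back
            expanded.append(clone)
    return nn.ModuleList(expanded)

# Hypothetical usage on a loaded LLaMA-style model:
# model.model.layers = expand_blocks(model.model.layers)
# model.config.num_hidden_layers = len(model.model.layers)  # keep config in sync
```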
LLM360/guru-7B
LLM360
2025-06-19T23:23:30Z
37,888
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2506.14965", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-14T16:45:46Z
---
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

This repository contains the Guru-7B (base Qwen2.5-7B) model presented in [Revisiting Reinforcement Learning for LLM Reasoning from A Cross-Domain Perspective](https://huggingface.co/papers/2506.14965).

The leaderboard below is evaluated with our evaluation [code](https://github.com/LLM360/Reasoning360/tree/main/scripts/offline_eval). For all models, evaluation used temperature=1.0 and top_p=0.7.

| **Domain** | **Benchmark** | **GURU 7B** | **General Reasoner 7B** | **ORZ 7B◇** | **SimpleRL 7B** | **GURU 32B** | **ORZ 32B◇** | **SimpleRL 32B** |
|----------------|--------------------------------|------------:|-------------------------:|------------:|---------------:|-------------:|--------------:|-----------------:|
| **Math** | AIME24 (avg@32) | 17.50 | 17.08 | 16.25 | 15.60 | 34.89 | 47.50 | 27.20 |
| | MATH500 | 77.25 | 70.40 | 80.80 | 87.00 | 86.00 | 89.80 | 89.60 |
| **Code** | LiveCodeBench (avg@4) | 16.49 | 8.51 | 5.47 | 6.72 | 29.30 | 22.04 | 19.80 |
| | HumanEval (avg@4) | 82.62 | 61.12 | 67.38 | 58.08 | 90.85 | 84.30 | 81.25 |
| | MBPP | 70.00 | 39.80 | 48.40 | 49.60 | 78.80 | 74.20 | 76.75 |
| **Science** | GPQA-diamond (avg@4) | 40.78 | 38.64 | 37.63 | 35.98 | 50.63 | 55.67 | 46.46 |
| | SuperGPQA | 31.80 | 30.64 | 29.75 | 27.29 | 43.60 | 46.05 | 37.73 |
| **Logic** | ARC-AGI (avg@4) | 3.31 | 0.75 | 0.00 | 0.50 | 7.63 | 2.31 | 5.25 |
| | Zebra Puzzle (avg@4) | 39.40 | 0.07 | 1.00 | 0.62 | 45.21 | 0.54 | 1.16 |
| **Simulation** | CodeI/O (avg@4) | 15.63 | 7.13 | 5.13 | 6.63 | 12.63 | 3.75 | 9.75 |
| | CruxEval-I | 61.72 | 63.63 | 69.38 | 56.25 | 80.63 | 71.13 | 72.63 |
| | CruxEval-O | 71.28 | 56.50 | 65.88 | 58.31 | 88.75 | 82.38 | 67.75 |
| **Tabular** | FinQA | 34.70 | 34.33 | 37.60 | 35.10 | 46.14 | 45.20 | 45.41 |
| | HiTab | 74.20 | 54.40 | 54.10 | 50.40 | 82.00 | 63.30 | 69.00 |
| | MultiHiertt (avg@4) | 44.94 | 31.62 | 38.10 | 37.57 | 55.28 | 52.83 | 52.83 |
| **Others** | IFEval | 35.81 | 39.56 | 32.72 | 36.69 | 55.45 | 38.26 | 55.27 |
| | LiveBench | 18.57 | 19.76 | 12.64 | 15.20 | 34.30 | 28.78 | 28.33 |
| | **Average Score** | **43.29** | **33.76** | **35.42** | **33.97** | **54.24** | **47.53** | **46.25** |

Example usage:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "LLM360/Guru-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "What is reinforcement learning?"}]
# add_generation_prompt=True appends the assistant turn marker expected at inference.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature/top_p to take effect.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Please refer to the [paper](https://arxiv.org/abs/2506.14965) for more details.
LLM360/guru-32B
LLM360
2025-06-19T23:23:10Z
49,301
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2506.14965", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-15T04:29:12Z
---
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

This repository contains the Guru-32B (base Qwen2.5-32B) model presented in [Revisiting Reinforcement Learning for LLM Reasoning from A Cross-Domain Perspective](https://huggingface.co/papers/2506.14965).

The leaderboard below is evaluated with our evaluation [code](https://github.com/LLM360/Reasoning360/tree/main/scripts/offline_eval). For all models, evaluation used temperature=1.0 and top_p=0.7.

| **Domain** | **Benchmark** | **GURU 7B** | **General Reasoner 7B** | **ORZ 7B◇** | **SimpleRL 7B** | **GURU 32B** | **ORZ 32B◇** | **SimpleRL 32B** |
|----------------|--------------------------------|------------:|-------------------------:|------------:|---------------:|-------------:|--------------:|-----------------:|
| **Math** | AIME24 (avg@32) | 17.50 | 17.08 | 16.25 | 15.60 | 34.89 | 47.50 | 27.20 |
| | MATH500 | 77.25 | 70.40 | 80.80 | 87.00 | 86.00 | 89.80 | 89.60 |
| **Code** | LiveCodeBench (avg@4) | 16.49 | 8.51 | 5.47 | 6.72 | 29.30 | 22.04 | 19.80 |
| | HumanEval (avg@4) | 82.62 | 61.12 | 67.38 | 58.08 | 90.85 | 84.30 | 81.25 |
| | MBPP | 70.00 | 39.80 | 48.40 | 49.60 | 78.80 | 74.20 | 76.75 |
| **Science** | GPQA-diamond (avg@4) | 40.78 | 38.64 | 37.63 | 35.98 | 50.63 | 55.67 | 46.46 |
| | SuperGPQA | 31.80 | 30.64 | 29.75 | 27.29 | 43.60 | 46.05 | 37.73 |
| **Logic** | ARC-AGI (avg@4) | 3.31 | 0.75 | 0.00 | 0.50 | 7.63 | 2.31 | 5.25 |
| | Zebra Puzzle (avg@4) | 39.40 | 0.07 | 1.00 | 0.62 | 45.21 | 0.54 | 1.16 |
| **Simulation** | CodeI/O (avg@4) | 15.63 | 7.13 | 5.13 | 6.63 | 12.63 | 3.75 | 9.75 |
| | CruxEval-I | 61.72 | 63.63 | 69.38 | 56.25 | 80.63 | 71.13 | 72.63 |
| | CruxEval-O | 71.28 | 56.50 | 65.88 | 58.31 | 88.75 | 82.38 | 67.75 |
| **Tabular** | FinQA | 34.70 | 34.33 | 37.60 | 35.10 | 46.14 | 45.20 | 45.41 |
| | HiTab | 74.20 | 54.40 | 54.10 | 50.40 | 82.00 | 63.30 | 69.00 |
| | MultiHiertt (avg@4) | 44.94 | 31.62 | 38.10 | 37.57 | 55.28 | 52.83 | 52.83 |
| **Others** | IFEval | 35.81 | 39.56 | 32.72 | 36.69 | 55.45 | 38.26 | 55.27 |
| | LiveBench | 18.57 | 19.76 | 12.64 | 15.20 | 34.30 | 28.78 | 28.33 |
| | **Average Score** | **43.29** | **33.76** | **35.42** | **33.97** | **54.24** | **47.53** | **46.25** |

Example usage:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "LLM360/Guru-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "What is reinforcement learning?"}]
# add_generation_prompt=True appends the assistant turn marker expected at inference.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature/top_p to take effect.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Please refer to the [paper](https://arxiv.org/abs/2506.14965) for more details.
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-18-2025-06-20
morturr
2025-06-19T23:18:11Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T23:17:51Z
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-18-2025-06-20
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-18-2025-06-20

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
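The card above omits a usage snippet. Here is a minimal sketch for loading this LoRA adapter with PEFT, assuming access to the gated meta-llama/Llama-2-7b-hf base model; the prompt is purely illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"
adapter = "morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-headlines-comb-1-seed-18-2025-06-20"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights

inputs = tokenizer("Why did the headline cross the page?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```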
arhansd1/csv_cleaner
arhansd1
2025-06-19T23:07:03Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:adapter:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "region:us" ]
null
2025-06-11T22:16:51Z
--- base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
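The getting-started section above is still a placeholder. A minimal sketch, assuming this adapter applies cleanly on top of its 4-bit base model (requires bitsandbytes; the prompt is purely illustrative):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter = "arhansd1/csv_cleaner"
# Loads the base model recorded in the adapter config, then attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(adapter, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.2-bnb-4bit")

prompt = "[INST] Clean this CSV header row: 'Name ;; Age,,City' [/INST]"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```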
santoshmds21/distilbert-base-uncased-lora-text-classification
santoshmds21
2025-06-19T23:03:06Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-06-19T23:02:37Z
---
library_name: peft
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0886
- Accuracy: 0.877

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.4107 | 0.89 |
| 0.4454 | 2.0 | 500 | 0.4281 | 0.878 |
| 0.4454 | 3.0 | 750 | 0.7015 | 0.874 |
| 0.1947 | 4.0 | 1000 | 0.7794 | 0.876 |
| 0.1947 | 5.0 | 1250 | 0.9013 | 0.874 |
| 0.0711 | 6.0 | 1500 | 0.8924 | 0.88 |
| 0.0711 | 7.0 | 1750 | 0.9679 | 0.881 |
| 0.018 | 8.0 | 2000 | 1.0589 | 0.881 |
| 0.018 | 9.0 | 2250 | 1.0792 | 0.886 |
| 0.0036 | 10.0 | 2500 | 1.0886 | 0.877 |

### Framework versions

- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
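The card shows training details but no inference snippet. A minimal sketch, assuming the adapter targets a two-label sequence-classification head on distilbert-base-uncased (the label count and mapping are assumptions; check the adapter config):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "distilbert-base-uncased"
adapter = "santoshmds21/distilbert-base-uncased-lora-text-classification"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # assumed 2 labels
model = PeftModel.from_pretrained(model, adapter)  # also restores the saved classifier head, if present
model.eval()

inputs = tokenizer("I really enjoyed this movie.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # class index; the mapping to label names depends on the training data
```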
gabriellarson/ICONN-1-GGUF
gabriellarson
2025-06-19T22:54:01Z
0
1
transformers
[ "transformers", "gguf", "emotional-ai", "ICONN", "chatbot", "base", "text-generation", "base_model:ICONNAI/ICONN-1", "base_model:quantized:ICONNAI/ICONN-1", "license:other", "co2_eq_emissions", "endpoints_compatible", "region:us", "imatrix" ]
text-generation
2025-06-19T16:24:17Z
---
license: other
license_name: iconn
license_link: LICENSE
library_name: transformers
tags:
- emotional-ai
- ICONN
- chatbot
- base
co2_eq_emissions:
  emissions: 1.34
  source: CodeCarbon
  training_type: pretraining
  geographical_location: US-West
  hardware_used: 9 x B200
pipeline_tag: text-generation
base_model:
- ICONNAI/ICONN-1
---

![ICONN AI Logo](https://i.postimg.cc/gJwHqh1D/svgviewer-png-output.png)

<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/collections/ICONNAI/iconn-1-6851e8a88ed4eb66b4fd0132" target="_blank" style="margin: 2px;">
    <img alt="ICONN 1 Models" src="https://img.shields.io/badge/📦_ICONN_1_Models-HuggingFace-1CBEEF?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
  </a>
  <a href="https://huggingface.co/spaces/ICONNAI/ICONN-Mini-Chat" target="_blank" style="margin: 2px;">
    <img alt="ICONN 1 Chat" src="https://img.shields.io/badge/💬_ICONN_1_Chat-Online-65C7F9?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
  </a>
  <a href="https://huggingface.co/ICONNAI" target="_blank" style="margin: 2px;">
    <img alt="ICONN on Hugging Face" src="https://img.shields.io/badge/🤗_ICONN_on_HF-ICONNAI-A4BCF0?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
  </a>
  <a href="https://opensource.org/license/apache-2-0" target="_blank" style="margin: 2px;">
    <img alt="License Apache 2.0" src="https://img.shields.io/badge/⚖️_License-Apache_2.0-5C63DA?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
  </a>
</div>

## ICONN 1

We proudly introduce **ICONN-1**, the most advanced and human-like open-source artificial intelligence model of its time under 100B parameters. Designed to push the boundaries of natural language understanding and generation, ICONN-1 is built on a **Mixture-of-Experts (MoE)** architecture that enables dynamic routing through specialized expert pathways, allowing for both computational efficiency and enhanced performance.

Developed entirely from scratch, ICONN-1 is based on a customized **Mixtral** framework and comprises **88 billion parameters**, with **22 billion parameters actively utilized per token**. This approach allows ICONN-1 to deliver highly nuanced and contextually accurate responses while maintaining the scalability benefits of sparse activation.

ICONN-1 is released in two distinct forms to serve different application needs:
- **ICONN-1** (this version) is optimized for natural, emotionally resonant, and conversational interactions.
- **ICONN-e1** is a specialized variant of the model fine-tuned for advanced reasoning, critical analysis, and complex problem-solving.

Together, these models represent a major leap forward in the evolution of AI systems—demonstrating not only deep reasoning but also a commitment to openness, accessibility, and human-aligned intelligence.

![Comparison Chart](https://i.postimg.cc/tgYmDzSZ/Untitled-1.png)

_These models were each benchmarked on a collection of 500 questions to compare their output with human responses on emotion and common sense. Benchmark performance may vary due to the stochastic nature of AI models. ICONN 1 retained the highest human-thinking benchmark score across many tests at different temperatures._

## Usage

## System Requirements

To run **ICONN 1** effectively, ensure you have:
- **4× NVIDIA A100 GPUs** or a **single NVIDIA B100**
- **At least 120 GB of system RAM**
- **120–192 GB of GPU VRAM**

If your system does not meet these requirements—which may be the case for many users—you can still experience ICONN through alternative options:
- Use a **quantized version** of ICONN for lower resource consumption
- Try the lightweight [**ICONN 1 Mini (7B)**](https://huggingface.co/Enderchef/ICONN-0.5-Mini)

> Use the code below to run ICONN 1:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

def run_iconn_chatbot(model_name="ICONNAI/ICONN-1"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Load in the checkpoint's native dtype and shard across available GPUs;
    # loading an 88B-parameter model in fp32 would far exceed the stated VRAM budget.
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"
    )
    chat_pipeline = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        do_sample=True,
        top_p=0.9,
        temperature=0.4,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(f"ICONN chatbot running with model: {model_name}. Type 'exit' to quit.")
    conversation_history = ""
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("Goodbye!")
            break
        conversation_history += f"User: {user_input}\nBot:"
        # Allow roughly 100 new tokens on top of the running conversation.
        response = chat_pipeline(
            conversation_history,
            max_length=len(tokenizer.encode(conversation_history)) + 100,
        )[0]["generated_text"]
        bot_reply = response[len(conversation_history):].strip().split("\n")[0]
        print(f"Bot: {bot_reply}")
        conversation_history += f" {bot_reply}\n"

if __name__ == "__main__":
    run_iconn_chatbot()
```
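Because this repository hosts GGUF quantizations, a llama-cpp-python sketch may be more practical than the full-precision instructions above. The filename is a placeholder; substitute an actual quant file from this repo. Sampling settings mirror the card's (temperature 0.4, top_p 0.9):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="ICONN-1-Q4_K_M.gguf",  # placeholder: use a real file from this repo
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=256,
    temperature=0.4,
    top_p=0.9,
)
print(out["choices"][0]["message"]["content"])
```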
winnieyangwannan/refusal_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-opposite_last_layer_10_2_49
winnieyangwannan
2025-06-19T22:49:19Z
0
0
transformers
[ "transformers", "safetensors", "olmoe", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T22:47:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
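The card above is an unfilled template. A minimal hedged sketch for loading this checkpoint with transformers (the OLMoE architecture needs a recent transformers release; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "winnieyangwannan/refusal_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-opposite_last_layer_10_2_49"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Give me one fun fact about otters."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```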
sgonzalezygil/sd-finetuning-dreambooth-v20-1600
sgonzalezygil
2025-06-19T22:44:57Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-19T22:43:33Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
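This card is likewise an unfilled template. Judging from the repo tags (StableDiffusionPipeline, text-to-image), a minimal sketch would be as follows; the DreamBooth trigger token is not documented, so the prompt is a guess:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v20-1600",
    torch_dtype=torch.float16,
).to("cuda")

# The DreamBooth instance prompt is not documented in the card; adjust as needed.
image = pipe("a photo of the trained subject in a forest").images[0]
image.save("sample.png")
```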
BootesVoid/cmc3t8h3e01bbnx8dhcdt897i_cmc3xjcqv01ownx8dd0tkudw6
BootesVoid
2025-06-19T22:41:37Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T22:41:36Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: MIA2 --- # Cmc3T8H3E01Bbnx8Dhcdt897I_Cmc3Xjcqv01Ownx8Dd0Tkudw6 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `MIA2` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "MIA2", "lora_weights": "https://huggingface.co/BootesVoid/cmc3t8h3e01bbnx8dhcdt897i_cmc3xjcqv01ownx8dd0tkudw6/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc3t8h3e01bbnx8dhcdt897i_cmc3xjcqv01ownx8dd0tkudw6', weight_name='lora.safetensors') image = pipeline('MIA2').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmc3t8h3e01bbnx8dhcdt897i_cmc3xjcqv01ownx8dd0tkudw6/discussions) to add images that show off what you’ve made with this LoRA.
JesseLiu/llama32-1b-pagerank-naive-grpo-lora
JesseLiu
2025-06-19T22:32:20Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2025-06-19T22:31:56Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
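The quickstart section in the card above is left as [More Information Needed]. A minimal sketch for loading this adapter with 🤗 PEFT, untested and assuming a standard LoRA adapter layout (base model taken from the record's metadata):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "JesseLiu/llama32-1b-pagerank-naive-grpo-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)
```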
morturr/Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-28-2025-06-20
morturr
2025-06-19T22:32:00Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T22:31:53Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-28-2025-06-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_amazon_dadjokes-COMB-amazon-comb-2-seed-28-2025-06-20 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
chaosbringerc/POSNEG-pretrained
chaosbringerc
2025-06-19T22:19:59Z
33
0
null
[ "pytorch", "bert", "text-classification", "en", "dataset:chaosbringerc/POSNEG", "license:mit", "region:us" ]
text-classification
2025-06-11T21:52:18Z
--- license: mit datasets: - chaosbringerc/POSNEG language: - en pipeline_tag: text-classification --- # POSNEG Sentiment Classification Model This model is trained on the POSNEG dataset for binary sentiment classification. ## Dataset - **Name**: POSNEG - **Task**: Text Classification - **Labels**: Positive (1), Negative (0) ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch tokenizer = AutoTokenizer.from_pretrained("chaosbringerc/POSNEG-Dataset-API") model = AutoModelForSequenceClassification.from_pretrained("chaosbringerc/POSNEG-Dataset-API") text = "This is a great product!" inputs = tokenizer(text, return_tensors="pt") outputs = model(**inputs) predictions = torch.nn.functional.softmax(outputs.logits, dim=-1) label = predictions.argmax(dim=-1).item()  # 1 = positive, 0 = negative, per the label map above print(label, predictions) ```
evadraganova/HW2-ppo
evadraganova
2025-06-19T22:08:40Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "arxiv:1909.08593", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T18:14:47Z
--- library_name: transformers model_name: HW2-ppo tags: - generated_from_trainer licence: license --- # Model Card for HW2-ppo This model is a fine-tuned version of an unspecified base model. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="evadraganova/HW2-ppo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite PPO as: ```bibtex @article{mziegler2019fine-tuning, title = {{Fine-Tuning Language Models from Human Preferences}}, author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving}, year = 2019, eprint = {arXiv:1909.08593} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura
BootesVoid
2025-06-19T22:07:48Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T22:07:46Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: KIARA --- # Cmc2Gny4F005Zaqihg0Q615Ym_Cmc3Wkmaa01Lgnx8Dbbrf7Ura <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `KIARA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "KIARA", "lora_weights": "https://huggingface.co/BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura', weight_name='lora.safetensors') image = pipeline('KIARA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura/discussions) to add images that show off what you’ve made with this LoRA.
katanemo/Arch-Function-Chat-7B
katanemo
2025-06-19T21:55:43Z
54
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-28T20:12:24Z
--- license: other license_name: katanemo-research license_link: https://huggingface.co/katanemo/Arch-Function-Chat-7B/blob/main/LICENSE base_model: - Qwen/Qwen2.5-Coder-7B-Instruct language: - en pipeline_tag: text-generation library_name: transformers --- # katanemo/Arch-Function-Chat-7B ## Overview The Arch-Function-Chat collection builds upon Katanemo's [Arch-Function](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications. In addition to function calling capabilities, this collection now offers: - **Clarify & refine**: Generates natural follow-up questions to collect missing information for function calling - **Interpret & respond**: Provides human-friendly responses based on function execution results - **Context management**: Maintains context in complex multi-turn interactions *Note*: Arch-Function-Chat is now the primary LLM used in the open source [Arch Gateway](https://github.com/katanemo/archgw) - an AI-native proxy for agents. For more details about the project, check out the GitHub [README](https://github.com/katanemo/archgw/blob/main/README.md). # Requirements The code of Arch-Function-Chat-7B is supported by the Hugging Face `transformers` library, and we advise you to install the latest version: ```bash pip install "transformers>=4.37.0" ``` # How to use We use the following example to illustrate how to use our model to perform function calling tasks. Please note that our model works best with our provided prompt format, which allows us to extract JSON output similar to [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling). ### Quickstart ````python import json from typing import Any, Dict, List from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "katanemo/Arch-Function-Chat-7B" model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained(model_name) # Please use our provided prompt for best performance TASK_PROMPT = ( "You are a helpful assistant designed to assist with the user query by making one or more function calls if needed." "\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>" "\n\nYour task is to decide which functions are needed and collect missing parameters if necessary." ) FORMAT_PROMPT = ( "\n\nBased on your analysis, provide your response in one of the following JSON formats:" '\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```' '\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```' '\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},...
(more tool calls as required)]}\n```' ) # Define available tools tools = [ { "type": "function", "function": { "name": "get_weather", "description": "Get the current weather for a location", "parameters": { "type": "object", "properties": { "location": { "type": "str", "description": "The city and state, e.g. San Francisco, New York", }, "unit": { "type": "str", "enum": ["celsius", "fahrenheit"], "description": "The unit of temperature to return", }, }, "required": ["location"], }, }, } ] # Helper function to create the system prompt for our model def format_prompt(tools: List[Dict[str, Any]]): tools = "\n".join( [json.dumps(tool["function"], ensure_ascii=False) for tool in tools] ) return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT system_prompt = format_prompt(tools) messages = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": "What is the weather in Seattle?"}, ] # return_dict=True so that generate() receives both input_ids and attention_mask model_inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_dict=True, return_tensors="pt" ).to(model.device) generated_ids = model.generate(**model_inputs, max_new_tokens=32768) generated_ids = [ output_ids[len(input_ids) :] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ```` # License The Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemo/Arch-Function-Chat-7B/blob/main/LICENSE).
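The quickstart above stops at printing the raw completion. A sketch for routing it into the three documented JSON formats (assuming the model emits a single fenced JSON block as the prompt instructs; `parse_response` is our name, not part of the card):

````python
import json
import re

def parse_response(response: str) -> dict:
    """Extract the first fenced JSON block from the completion and decode it."""
    match = re.search(r"```json\s*(.*?)\s*```", response, re.DOTALL)
    payload = json.loads(match.group(1) if match else response)
    if "tool_calls" in payload:        # format 3: ready-to-execute calls
        print("Functions to call:", payload["tool_calls"])
    elif "clarification" in payload:   # format 2: missing parameters
        print("Model asks:", payload["clarification"])
    else:                              # format 1: direct answer
        print("Answer:", payload.get("response"))
    return payload
````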
marcelobarcia/marcebarcia
marcelobarcia
2025-06-19T21:55:37Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T21:22:38Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Marce Barcia --- # Marcebarcia <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Marce Barcia` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Marce Barcia", "lora_weights": "https://huggingface.co/marcelobarcia/marcebarcia/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('marcelobarcia/marcebarcia', weight_name='lora.safetensors') image = pipeline('Marce Barcia').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/marcelobarcia/marcebarcia/discussions) to add images that show off what you’ve made with this LoRA.
ALQAMARI/gemma-3-12b-it-summary-adapter
ALQAMARI
2025-06-19T21:51:01Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T21:50:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anemll/anemll-Qwen3-1.7B-LUT6-ctx100
anemll
2025-06-19T21:48:48Z
0
0
null
[ "coreml", "llama", "ANE", "LLaMA", "Qwen", "DeepSeek", "Apple", "Apple Neural Engine", "DeepHermes", "license:mit", "region:us" ]
null
2025-06-19T21:13:27Z
--- license: mit tags: - coreml - ANE - LLaMA - Qwen - DeepSeek - Apple - Apple Neural Engine - DeepHermes --- # ANEMLL **ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE). The goal is to provide a fully open-source pipeline from model conversion to inference for common LLM architectures running on ANE. This enables seamless integration and on-device inference for low-power applications on edge devices, ensuring maximum privacy and security. This is critical for autonomous applications, where models run directly on the device without requiring an internet connection. For more information, visit the [ANEMLL GitHub repository](https://github.com/anemll/anemll). --- ## License ANEMLL is licensed under the [MIT License](https://opensource.org/license/mit). The original model may require a separate license depending on the architecture: - LLaMA models: Based on Meta's LLaMA and may require Meta's license - Qwen models: Based on Alibaba's Qwen and may require Alibaba's license - Other models: Check respective original model licenses This model is converted for CoreML using ANEMLL's open-source conversion pipeline. It supports multiple LLM architectures including LLaMA, Qwen, and DeepSeek variants. --- ## Requirements - **macOS Sequoia** with Apple Neural Engine and 8GB RAM or more - **CoreML Tools** and **HuggingFace Transformers** libraries - **Python 3.9** `chat.py` provides a sample inference script. `chat_full.py` provides a sample inference script with history and conversation management. **Installation** 1. Download the model from Hugging Face: ```bash # Install required tools pip install huggingface_hub # Install Git LFS (Large File Support) # macOS with Homebrew: brew install git-lfs # Or Ubuntu/Debian: # sudo apt-get install git-lfs # Initialize Git LFS git lfs install # Clone the repository with model files git clone https://huggingface.co/anemll/anemll-Qwen3-1.7B-LUT6-ctx100 ``` 2. Extract model files: ```bash # Navigate to cloned directory cd anemll-Qwen3-1.7B-LUT6-ctx100 # Pull LFS files (model weights) git lfs pull # Extract CoreML model files find . -type f -name "*.zip" -exec unzip {} \; ``` 3. Install dependencies: ```bash pip install coremltools transformers ``` **Coremltools:** See the coremltools installation guide at https://coremltools.readme.io/v4.0/docs/installation **How to Run** 1. Basic chat interface: ```bash python chat.py --meta ./meta.yaml ``` 2. Full conversation mode with history: ```bash python chat_full.py --meta ./meta.yaml ``` > Note: The first time the model loads, macOS will take some time to place it on the device. > Subsequent loads will be instantaneous. > Use Ctrl-D to exit, Ctrl-C to interrupt inference. **More Info** Please check the following links for updates: * [GitHub](https://github.com/anemll) * [Hugging Face Models](https://huggingface.co/anemll) * [Twitter/X](https://x.com/anemll) * [Website](https://anemll.com) [email protected] # anemll-Qwen3-1.7B-LUT6-ctx100 This is a CoreML model converted using ANEMLL for Apple Neural Engine inference.
## Available Distributions ### Standard Distribution - Contains zipped MLMODELC files - Suitable for macOS and development ### iOS Distribution - Contains unzipped MLMODELC files - Ready for iOS deployment - Includes offline tokenizer support ## Model Information - Context Length: 1024 - Batch Size: 64 - Number of Chunks: 1 ## Quick Start ### Test in iOS/macOS App Try our sample Chat-Bot app on TestFlight: 1. Install TestFlight from App Store 2. Join beta test: [TestFlight Link](https://testflight.apple.com/join/jrQq1D1C) 3. App includes a small demo model pre-installed 4. You can add custom models via HuggingFace URLs > [!Note] > - The TestFlight app works on both iOS and macOS > - Demonstrates proper model integration and provides a reference implementation > - iOS requires unzipped MLMODELC files and config.json for offline tokenizer > - macOS supports both zipped and unzipped model formats
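As a Python alternative to the git/LFS steps in the card above, the same repository can be fetched with huggingface_hub's standard `snapshot_download` (a sketch; the local path is illustrative):

```python
from huggingface_hub import snapshot_download

# Download all repository files, including LFS-tracked model weights
local_dir = snapshot_download(
    repo_id="anemll/anemll-Qwen3-1.7B-LUT6-ctx100",
    local_dir="./anemll-Qwen3-1.7B-LUT6-ctx100",
)
print("Model files downloaded to:", local_dir)
```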
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-3-seed-42-2025-06-19
morturr
2025-06-19T21:21:08Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T21:20:50Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-3-seed-42-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-3-seed-42-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
johnphilos/filotrain
johnphilos
2025-06-19T21:20:45Z
11
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-2-27b-bnb-4bit", "base_model:finetune:unsloth/gemma-2-27b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-15T02:28:03Z
--- base_model: unsloth/gemma-2-27b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** johnphilos - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-2-27b-bnb-4bit This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
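The card above records provenance but no usage snippet. A minimal generation sketch with plain transformers, untested and assuming the repository holds full merged weights (as the gemma2/text-generation tags suggest):

```python
from transformers import pipeline

# Load the fine-tuned model and generate a short completion
generator = pipeline("text-generation", model="johnphilos/filotrain", device_map="auto")
print(generator("Write one sentence about philosophy.", max_new_tokens=64)[0]["generated_text"])
```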
JesseLiu/llama32-1b-kpath-naive
JesseLiu
2025-06-19T21:19:36Z
104
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2025-05-22T16:06:27Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
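As with the other PEFT cards in this dump, the quickstart is empty. The same adapter-loading pattern applies (a sketch, untested), here with an optional merge step that folds the LoRA weights into the base model for deployment:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "JesseLiu/llama32-1b-kpath-naive")
# Optional: merge adapter weights into the base model and drop the PEFT wrapper
model = model.merge_and_unload()
```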
JesseLiu/llama32-1b-pagerank-naive
JesseLiu
2025-06-19T21:19:13Z
44
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-1B-Instruct", "region:us" ]
null
2025-05-22T16:07:12Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
rsicproject/Roberta-UCM
rsicproject
2025-06-19T20:59:48Z
19
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-03T05:34:49Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: Roberta-UCM results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Roberta-UCM This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.8889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 132 | 4.8824 | | No log | 2.0 | 264 | 4.9152 | | No log | 3.0 | 396 | 4.9143 | | 4.9454 | 4.0 | 528 | 4.8542 | | 4.9454 | 5.0 | 660 | 4.8414 | | 4.9454 | 6.0 | 792 | 4.8516 | | 4.9454 | 7.0 | 924 | 4.8830 | | 4.8103 | 8.0 | 1056 | 4.8705 | | 4.8103 | 9.0 | 1188 | 4.9208 | | 4.8103 | 10.0 | 1320 | 4.9049 | | 4.8103 | 11.0 | 1452 | 4.8592 | | 4.8114 | 12.0 | 1584 | 4.9490 | | 4.8114 | 13.0 | 1716 | 4.8784 | | 4.8114 | 14.0 | 1848 | 4.8100 | | 4.8114 | 15.0 | 1980 | 4.9519 | | 4.8085 | 16.0 | 2112 | 4.8849 | | 4.8085 | 17.0 | 2244 | 4.9121 | | 4.8085 | 18.0 | 2376 | 4.8991 | | 4.8001 | 19.0 | 2508 | 4.8772 | | 4.8001 | 20.0 | 2640 | 4.8889 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.20.3
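The card above reports only training curves. A minimal fill-mask sketch matching the record's pipeline tag (untested; assumes the tokenizer uses RoBERTa's `<mask>` token, and the example sentence reflects the remote-sensing naming of the project):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="rsicproject/Roberta-UCM")
# Print the top candidate tokens with their scores
for candidate in unmasker("The satellite image shows a large <mask>."):
    print(candidate["token_str"], round(candidate["score"], 4))
```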
Rziane/asr-wav2vec2-LB7K-spontaneous-fr_ft-CAENNAIS
Rziane
2025-06-19T20:54:38Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T20:54:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn
BootesVoid
2025-06-19T20:47:13Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T20:47:11Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: JAS22VXN --- # Cmc3P3Hfu010Ynx8Dgxpqr0Nc_Cmc3Thvtj01Bsnx8D5Gmt7Nvn <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `JAS22VXN` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "JAS22VXN", "lora_weights": "https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn', weight_name='lora.safetensors') image = pipeline('JAS22VXN').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn/discussions) to add images that show off what you’ve made with this LoRA.
rsicproject/Roberta-SYDNEY
rsicproject
2025-06-19T20:38:01Z
14
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-03T05:31:12Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: Roberta-SYDNEY results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Roberta-SYDNEY This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 39 | 4.6577 | | No log | 2.0 | 78 | 4.2471 | | No log | 3.0 | 117 | 4.1102 | | No log | 4.0 | 156 | 3.6130 | | No log | 5.0 | 195 | 3.4036 | | No log | 6.0 | 234 | 3.3004 | | No log | 7.0 | 273 | 2.9566 | | No log | 8.0 | 312 | 2.8077 | | No log | 9.0 | 351 | 2.6981 | | No log | 10.0 | 390 | 2.5526 | | No log | 11.0 | 429 | 2.4101 | | No log | 12.0 | 468 | 2.0618 | | 3.2882 | 13.0 | 507 | 2.1634 | | 3.2882 | 14.0 | 546 | 1.9266 | | 3.2882 | 15.0 | 585 | 1.9989 | | 3.2882 | 16.0 | 624 | 1.8083 | | 3.2882 | 17.0 | 663 | 1.7884 | | 3.2882 | 18.0 | 702 | 1.8176 | | 3.2882 | 19.0 | 741 | 1.5977 | | 3.2882 | 20.0 | 780 | 1.6426 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.20.3
jinx2321/byt5-dict-sentences-5e-5
jinx2321
2025-06-19T20:36:36Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/byt5-small", "base_model:finetune:google/byt5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T20:34:36Z
---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-dict-sentences-5e-5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# byt5-dict-sentences-5e-5

This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
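## Quick usage sketch

The card does not document the task this checkpoint was trained for, so the following is only a hedged sketch based on the repository's `text2text-generation` tag; the input string is a placeholder:

```python
from transformers import pipeline

# ByT5 operates directly on UTF-8 bytes, so no special preprocessing is needed.
generator = pipeline("text2text-generation", model="jinx2321/byt5-dict-sentences-5e-5")
print(generator("example input sentence", max_new_tokens=64))
```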
jinx2321/byt5-dict-sentences
jinx2321
2025-06-19T20:30:43Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/byt5-small", "base_model:finetune:google/byt5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T20:28:45Z
---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-dict-sentences
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# byt5-dict-sentences

This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
3huvan/legalbert_prompt_tuning_peft
3huvan
2025-06-19T20:30:19Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:nlpaueb/legal-bert-base-uncased", "base_model:adapter:nlpaueb/legal-bert-base-uncased", "region:us" ]
null
2025-06-19T20:30:16Z
--- base_model: nlpaueb/legal-bert-base-uncased library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
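## Quick loading sketch

Since the "How to Get Started" section above is empty, here is a minimal, hedged loading sketch. The task head is an assumption: the card does not state what the prompt tuning targets, so `AutoModelForMaskedLM` below is a guess that may need to be swapped for the correct head class:

```python
from peft import PeftModel
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumption: the adapter was prompt-tuned on top of legal-bert's masked-LM head.
base = AutoModelForMaskedLM.from_pretrained("nlpaueb/legal-bert-base-uncased")
model = PeftModel.from_pretrained(base, "3huvan/legalbert_prompt_tuning_peft")
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
```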
underscore2/qwen-grpo-bluesky-maxxing
underscore2
2025-06-19T20:20:51Z
0
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T20:19:05Z
---
base_model: unsloth/qwen2.5-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** underscore2
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
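## Quick inference sketch

A minimal, hedged generation sketch; it assumes the uploaded safetensors are a full (merged) checkpoint, as the repository's `text-generation` tag suggests:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="underscore2/qwen-grpo-bluesky-maxxing",
    device_map="auto",
)
messages = [{"role": "user", "content": "Write a short post about open-source AI."}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```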
mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit
mlx-community
2025-06-19T20:17:07Z
0
0
mlx
[ "mlx", "safetensors", "qwen3_moe", "chat", "text-generation", "conversational", "base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2", "base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2", "6-bit", "region:us" ]
text-generation
2025-06-19T20:09:35Z
--- tags: - chat - mlx base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2 pipeline_tag: text-generation library_name: mlx --- # mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit This model [mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit](https://huggingface.co/mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit) was converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2) using mlx-lm version **0.25.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
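For quick testing without writing Python, mlx-lm also ships a command-line entry point; a hedged one-liner (the entry-point name and flags have changed across mlx-lm releases, so `python -m mlx_lm generate` may be needed instead):

```bash
mlx_lm.generate --model mlx-community/Josiefied-Qwen3-30B-A3B-abliterated-v2-6bit --prompt "hello"
```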
gabriellarson/Skywork-SWE-32B-GGUF
gabriellarson
2025-06-19T19:58:11Z
0
1
transformers
[ "transformers", "gguf", "swe-bench", "text-generation", "base_model:Skywork/Skywork-SWE-32B", "base_model:quantized:Skywork/Skywork-SWE-32B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-06-19T17:44:36Z
--- tags: - swe-bench license: apache-2.0 metrics: - pass@1 library_name: transformers pipeline_tag: text-generation base_model: - Skywork/Skywork-SWE-32B --- # Skywork-SWE ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6665dd2b3a64c70529f7542c/8o-IE7N3GwSFCIH3ntc8E.png) 📖 [Technical Report](https://huggingface.co/Skywork/Skywork-SWE-32B/resolve/main/assets/Report.pdf) | 📰 [Blog](https://quixotic-sting-239.notion.site/eb17f379610040ceb54da5d5d24065bd) ## Model Introduction ***Skywork-SWE-32B*** is a code agent model developed by [Skywork AI](https://skywork.ai/home), specifically designed for software engineering (SWE) tasks. It demonstrates strong performance across several key metrics: - Skywork-SWE-32B attains 38.0% pass@1 accuracy on the [SWE-bench Verified](https://www.swebench.com) benchmark, outperforming previous open-source SoTA [Qwen2.5-Coder-32B-based](https://huggingface.co/Qwen/Qwen2.5-Coder-32B) LLMs built on the [OpenHands](https://github.com/All-Hands-AI/OpenHands) agent framework. - When incorporated with test-time scaling techniques, the performance further improves to 47.0% accuracy, surpassing the previous SoTA results for sub-32B parameter models. - We clearly demonstrate the data scaling law phenomenon for software engineering capabilities in LLMs, with no signs of saturation at 8209 collected training trajectories. We also introduce an efficient and automated pipeline for SWE data collection, culminating in the creation of the Skywork-SWE dataset---a large-scale, high-quality dataset featuring comprehensive executable runtime environments. Detailed descriptions are available on our [technical report](https://huggingface.co/Skywork/Skywork-SWE-32B/resolve/main/assets/Report.pdf). ### 🔧 Model Details | Model Name | Backbone LLM | HuggingFace Link | Technical Report | Blog | |---|---------------|-----------|-|-| |Skywork-SWE-32B | [🤗 Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | [🤗 Skywork-SWE-32B](https://huggingface.co/Skywork/Skywork-SWE-32B) | [Technical Report](https://huggingface.co/Skywork/Skywork-SWE-32B/resolve/main/assets/Report.pdf) | [Blog](https://quixotic-sting-239.notion.site/eb17f379610040ceb54da5d5d24065bd)| ## Evaluation ![image/png](https://huggingface.co/Skywork/Skywork-SWE-32B/resolve/main/assets/data_scaling_compressed.png) Data Scaling Law for Pass@1 Accuracy on Qwen2.5-Coder-32B-Based LLMs Using the OpenHands v0.32.0 Code Agent Framework. Skywork-SWE-32B significantly outperforms previous Qwen2.5-Coder-32B-based LLMs, achieving the highest pass@1 accuracy without using verifiers or multiple rollouts. ![image/png](https://huggingface.co/Skywork/Skywork-SWE-32B/resolve/main/assets/accuracy_compressed.png) With the incorporation of test-time scaling techniques, Skywork-SWE-32B further improves to 47.0% accuracy, surpassing the previous SoTA results for sub-32B parameter models. 
## Performance Summary

- Skywork-SWE-32B:

```
Submission summary on SWE-bench verified split
==================================================
Resolved 190 instances (38.0%)
==================================================
Resolved by Repository
- astropy/astropy: 4/22 (18.18%)
- django/django: 99/231 (42.86%)
- matplotlib/matplotlib: 9/34 (26.47%)
- mwaskom/seaborn: 0/2 (0.0%)
- pallets/flask: 1/1 (100.0%)
- psf/requests: 4/8 (50.0%)
- pydata/xarray: 7/22 (31.82%)
- pylint-dev/pylint: 2/10 (20.0%)
- pytest-dev/pytest: 9/19 (47.37%)
- scikit-learn/scikit-learn: 17/32 (53.12%)
- sphinx-doc/sphinx: 13/44 (29.55%)
- sympy/sympy: 25/75 (33.33%)
==================================================
Resolved by Time
- 2013: 2/3 (66.67%)
- 2014: 2/2 (100.0%)
- 2015: 0/1 (0.0%)
- 2016: 2/2 (100.0%)
- 2017: 5/16 (31.25%)
- 2018: 7/24 (29.17%)
- 2019: 46/98 (46.94%)
- 2020: 43/108 (39.81%)
- 2021: 27/86 (31.4%)
- 2022: 35/102 (34.31%)
- 2023: 21/58 (36.21%)
```

- Skywork-SWE-32B + TTS (Bo8):

```
Submission summary on SWE-bench verified split
==================================================
Resolved 235 instances (47.0%)
==================================================
Resolved by Repository
- astropy/astropy: 8/22 (36.36%)
- django/django: 115/231 (49.78%)
- matplotlib/matplotlib: 15/34 (44.12%)
- mwaskom/seaborn: 0/2 (0.0%)
- pallets/flask: 1/1 (100.0%)
- psf/requests: 3/8 (37.5%)
- pydata/xarray: 14/22 (63.64%)
- pylint-dev/pylint: 4/10 (40.0%)
- pytest-dev/pytest: 10/19 (52.63%)
- scikit-learn/scikit-learn: 22/32 (68.75%)
- sphinx-doc/sphinx: 12/44 (27.27%)
- sympy/sympy: 31/75 (41.33%)
==================================================
Resolved by Time
- 2013: 1/3 (33.33%)
- 2014: 1/2 (50.0%)
- 2015: 0/1 (0.0%)
- 2016: 2/2 (100.0%)
- 2017: 6/16 (37.5%)
- 2018: 9/24 (37.5%)
- 2019: 52/98 (53.06%)
- 2020: 48/108 (44.44%)
- 2021: 40/86 (46.51%)
- 2022: 46/102 (45.1%)
- 2023: 30/58 (51.72%)
```

## Usage

### Install vLLM package

```
# Install vLLM version 0.9.0.1.
# For example, if your CUDA version is 12.8, use the following command:
pip install vllm==0.9.0.1 --extra-index-url https://download.pytorch.org/whl/cu128
```

### Launch a server to deploy Skywork-SWE-32B

```
vllm serve ${MODEL_PATH} --served-model-name ${SERVED_MODEL_NAME} --host 0.0.0.0 --port 8000 --gpu-memory-utilization 0.95 --tensor-parallel-size 8
```

Since our model has 32 billion parameters and supports a 32K context length, we recommend launching the model server with at least 2 GPUs equipped with sufficient VRAM to ensure efficient inference.

### Set up OpenHands framework

```
git clone https://github.com/All-Hands-AI/OpenHands.git
cd OpenHands
git checkout tags/0.32.0
make build
```

The official documentation of OpenHands: [SWE-Bench Evaluation with OpenHands SWE-Bench Docker Image](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/swe_bench)

### Create the corresponding config file:

```
[core]
workspace_base="./workspace"

[llm.my-oss-model]
model = "openai/${SERVED_MODEL_NAME}"
base_url = "http://0.0.0.0:8000/v1"
api_key="vllm"
max_message_chars=32768
max_input_tokens=32768
max_output_tokens=8192
log_completions=true
temperature=0.0
```

If you want to run the OpenHands agent with test-time scaling techniques (a Best-of-N method based on the critic model), please refer to the [blog](https://www.all-hands.dev/blog/sota-on-swe-bench-verified-with-inference-time-scaling-and-critic-model) for detailed instructions.
You will need to switch to the [feature/llm-critic](https://github.com/All-Hands-AI/OpenHands/tree/feature/llm-critic) branch and deploy the [critic model](https://huggingface.co/all-hands/openhands-critic-32b-exp-20250417) accordingly. Additionally, you need to add the following parameters into the configuration file: ``` use_critic=true critic_model="critic_model" critic_base_url="**********" critic_api_key="************" critic_num_candidates=2 ``` ### Rollout on SWE-Bench Instances ``` ./evaluation/benchmarks/swe_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit] [max_iter] [num_workers] [dataset] [dataset_split] # Example ./evaluation/benchmarks/swe_bench/scripts/run_infer.sh llm.my-oss-model HEAD CodeActAgent 500 100 1 princeton-nlp/SWE-bench_Verified test ``` ### Evaluate generated patches ``` ./evaluation/benchmarks/swe_bench/scripts/eval_infer.sh \ ./evaluation_outputs/outputs/princeton-nlp__SWE-bench_Lite-test/CodeActAgent/my-oss-model_maxiter_100_N_v0.32.0-no-hint-run_1/output.jsonl ``` ## Acknowledgements We would like to thank the contributors of the [OpenHands](https://github.com/All-Hands-AI/OpenHands) and [AllHands Critic](https://huggingface.co/all-hands/openhands-critic-32b-exp-20250417) repositories for their open research and valuable contributions. ## Citation If you use Skywork-SWE in your research, please consider citing our work using the following BibTeX entry: ``` @misc{skywork-swe, title={Skywork-SWE: Unveiling Data Scaling Laws for Software Engineering in LLMs}, author={Liang Zeng, Yongcong Li, Yuzhen Xiao, Changshi Li, Chris Yuhao Liu, Rui Yan, Tianwen Wei, Jujie He, Xuchen Song, Yang Liu, and Yahui Zhou}, howpublished={\url{https://quixotic-sting-239.notion.site/eb17f379610040ceb54da5d5d24065bd}}, note={Notion Blog}, year={2025}, } ```
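## Sanity-checking the deployed server

Before wiring the served model into OpenHands, it can help to query the endpoint directly. A minimal sketch using the OpenAI-compatible API that vLLM exposes; the model name below is an example and must match whatever you passed as `--served-model-name`:

```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible API; the api_key only needs to be non-empty.
client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="vllm")
resp = client.chat.completions.create(
    model="Skywork-SWE-32B",  # must match --served-model-name
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    temperature=0.0,
)
print(resp.choices[0].message.content)
```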
VIDEOS-18-mezzo-fun-19-Viral-videos/New.tutorial.mezzo.fun.Viral.Video.Leaks.Official
VIDEOS-18-mezzo-fun-19-Viral-videos
2025-06-19T19:55:58Z
0
0
null
[ "region:us" ]
null
2025-06-19T19:55:38Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
psamtam/Llama-3.2-3B-GRPO
psamtam
2025-06-19T19:41:03Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:AI-MO/NuminaMath-TIR", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-17T19:47:52Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct datasets: AI-MO/NuminaMath-TIR library_name: transformers model_name: Llama-3.2-3B-GRPO tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-3B-GRPO This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="psamtam/Llama-3.2-3B-GRPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mihit0/locallore-ml
mihit0
2025-06-19T19:37:19Z
0
0
null
[ "region:us" ]
null
2025-06-19T19:34:30Z
--- title: LocalLore ML Models emoji: 🎯 colorFrom: blue colorTo: green sdk: pytorch pinned: false --- # LocalLore ML Models This repository contains the trained PyTorch models for the LocalLore event recommendation system: ## Models - **event_tagger_best.pth**: DistilBERT-based event categorization model - **quality_scorer_best.pth**: Multi-task quality scoring and spam detection model - **event_recommender_best.pth**: Neural collaborative filtering recommendation model ## Usage These models are automatically downloaded by the LocalLore ML API deployed on Render. ## Model Architecture - **Event Tagger**: DistilBERT + Multi-label classification - **Quality Scorer**: DistilBERT + Multi-task learning (quality score + spam detection) - **Event Recommender**: Neural Collaborative Filtering with Matrix Factorization ## API Integration The models are used by the LocalLore ML API for: - Smart event tagging and categorization - Content quality assessment - Personalized event recommendations Repository: https://github.com/mihit0/locallore-ml
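## Quick download sketch

A minimal, hedged sketch for fetching one of the checkpoints listed above with `huggingface_hub`; loading the weights into a usable model still requires the matching model class from the LocalLore codebase:

```python
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="mihit0/locallore-ml", filename="event_tagger_best.pth")
# The file may hold a bare state dict or a fuller checkpoint dict; inspect it first.
checkpoint = torch.load(path, map_location="cpu", weights_only=False)
print(type(checkpoint))
```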
alexisanchez/sagirilont
alexisanchez
2025-06-19T19:33:01Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-19T19:33:00Z
--- license: apache-2.0 ---
akar49/VIT_fourclass_Jun25
akar49
2025-06-19T19:31:17Z
0
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-19T19:31:04Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_keras_callback model-index: - name: VIT_fourclass_Jun25 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # VIT_fourclass_Jun25 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0938 - Validation Loss: 2.3960 - Train Accuracy: 0.51 - Epoch: 14 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'SGD', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': np.float32(0.01), 'momentum': 0.0, 'nesterov': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.9196 | 1.2060 | 0.38 | 0 | | 0.3792 | 1.4807 | 0.48 | 1 | | 0.2729 | 1.7396 | 0.45 | 2 | | 0.2006 | 2.4379 | 0.29 | 3 | | 0.1996 | 2.4795 | 0.36 | 4 | | 0.1734 | 2.7916 | 0.35 | 5 | | 0.1860 | 4.1270 | 0.09 | 6 | | 0.1490 | 2.7235 | 0.37 | 7 | | 0.1077 | 3.5380 | 0.26 | 8 | | 0.1173 | 2.7697 | 0.42 | 9 | | 0.1526 | 2.7868 | 0.42 | 10 | | 0.1161 | 3.1132 | 0.36 | 11 | | 0.1093 | 3.5738 | 0.33 | 12 | | 0.0884 | 3.1227 | 0.37 | 13 | | 0.0938 | 2.3960 | 0.51 | 14 | ### Framework versions - Transformers 4.52.4 - TensorFlow 2.18.0 - Datasets 3.6.0 - Tokenizers 0.21.1
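## Quick inference sketch

A minimal, hedged TensorFlow inference sketch; the class-label names are not documented on this card, so only the predicted index is printed. The image processor is taken from the base checkpoint, since it is unclear whether this repository ships its own preprocessor config:

```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = TFViTForImageClassification.from_pretrained("akar49/VIT_fourclass_Jun25")
image = Image.open("example.png").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class index (0-3)
```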
burkimbia/BIA-NLLB-600M-david_11_epocks
burkimbia
2025-06-19T19:31:04Z
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T19:28:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
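## Quick usage sketch

Since the card is an empty template, here is a minimal, hedged translation sketch based on the repository's `text2text-generation` tag and NLLB-style naming. NLLB checkpoints use FLORES-200 language codes; the source and target codes below are placeholders, as the card does not state the language pair:

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="burkimbia/BIA-NLLB-600M-david_11_epocks",
    src_lang="eng_Latn",  # placeholder source code
    tgt_lang="fra_Latn",  # placeholder target code
)
print(translator("Hello, how are you?"))
```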
georgeowob/WorshipperGeorge
georgeowob
2025-06-19T19:21:58Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-19T19:21:58Z
--- license: apache-2.0 ---
suhailanasrin124/Summerization_Transformer3
suhailanasrin124
2025-06-19T19:15:41Z
0
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T19:15:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
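## Quick usage sketch

Since the card is an empty template, a minimal, hedged summarization sketch based on the repository's BART architecture and `text2text-generation` tag; the input text is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="suhailanasrin124/Summerization_Transformer3")
text = "Long input document goes here..."  # placeholder input
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```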
naomiKenKorem/LTXV_13B_LoRA_smpl
naomiKenKorem
2025-06-19T19:15:23Z
0
0
diffusers
[ "diffusers", "ltx-video", "image-to-video", "text-to-video", "en", "license:other", "region:us" ]
text-to-video
2025-06-19T19:14:42Z
---
tags:
- ltx-video
- image-to-video
pinned: true
language:
- en
license: other
pipeline_tag: text-to-video
library_name: diffusers
---

# LTXV_13B_LoRA_smpl

This is a fine-tuned version of [`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors) trained on custom data.

## Model Details

- **Base Model:** [`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)
- **Training Type:** LoRA fine-tuning
- **Training Steps:** 4000
- **Learning Rate:** 0.0002
- **Batch Size:** 1

## Sample Outputs

| | | |
|:---:|:---:|:---:|
| ![example1](./samples/sample_0.gif)<br><details style="max-width: 300px; margin: auto;"><summary>Prompt</summary>people dancing</details> | ![example2](./samples/sample_1.gif)<br><details style="max-width: 300px; margin: auto;"><summary>Prompt</summary>a woman talking</details> | ![example3](./samples/sample_2.gif)<br><details style="max-width: 300px; margin: auto;"><summary>Prompt</summary>a man skiing</details> |

## Usage

This model is designed to be used with the LTXV (Lightricks Text-to-Video) pipeline.

### 🔌 Using Trained LoRAs in ComfyUI

In order to use the trained LoRA in ComfyUI:

1. Copy your ComfyUI-trained LoRA weights (`comfyui..safetensors` file) to the `models/loras` folder in your ComfyUI installation.
2. In your ComfyUI workflow:
   - Add the "LTXV LoRA Selector" node to choose your LoRA file
   - Connect it to the "LTXV LoRA Loader" node to apply the LoRA to your generation

You can find reference Text-to-Video (T2V) and Image-to-Video (I2V) workflows in the [official LTXV ComfyUI repository](https://github.com/Lightricks/ComfyUI-LTXVideo).

### Example Prompts

Example prompts used during validation:

- `people dancing`
- `a woman talking`
- `a man skiing`

This model inherits the license of the base model ([`LTXV_13B_097_DEV`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)).

## Acknowledgments

- Base model by [Lightricks](https://huggingface.co/Lightricks)
- Training infrastructure: [LTX-Video-Trainer](https://github.com/Lightricks/ltx-video-trainer)
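### Outside ComfyUI

Recent diffusers releases also expose an LTX-Video pipeline with LoRA loading. A hedged sketch only: it assumes your diffusers version supports LTX LoRAs, that the base checkpoint you load matches the 13B 0.9.7-dev weights this LoRA was trained against, and that the LoRA file in this repository is named `lora.safetensors` (check the file list for the actual name):

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16).to("cuda")
# Weight filename is a guess; check this repo's file list for the real name.
pipe.load_lora_weights("naomiKenKorem/LTXV_13B_LoRA_smpl", weight_name="lora.safetensors")
video = pipe(prompt="people dancing", num_frames=65).frames[0]
export_to_video(video, "output.mp4", fps=24)
```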
AhmedBadawy11/ner-en-model-UNcased-important_indicators-df_merged-7700
AhmedBadawy11
2025-06-19T19:12:49Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-06-19T19:12:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
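## Quick usage sketch

Since the card is an empty template, a minimal, hedged NER sketch based on the repository's `token-classification` tag; the entity label set is not documented, so inspect the output labels:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="AhmedBadawy11/ner-en-model-UNcased-important_indicators-df_merged-7700",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Apple opened a new office in Berlin in 2024."))
```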