Dataset schema:

| Column | Dtype | Range |
|:--------------|:-----------------------|:----------------------------------------------|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-28 06:27:55 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 534 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-28 06:22:14 |
| card | string | length 11 to 1.01M |
AmberYifan/llama3-8b-full-pretrain-mix-high-tweet-1m-en
AmberYifan
2025-06-19T06:16:21Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T04:35:58Z
--- library_name: transformers license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: llama3-8b-full-pretrain-mix-high-tweet-1m-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-full-pretrain-mix-high-tweet-1m-en This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the mix_high_tweet_1m_en dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
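The usage sections of the card above are still "More information needed"; a minimal inference sketch follows, assuming the repo loads as a standard `transformers` text-generation checkpoint with a chat template (the tags suggest this, but the card does not confirm it):

```python
from transformers import pipeline

# Hedged sketch: assumes a standard Llama-3-style chat checkpoint.
generator = pipeline(
    "text-generation",
    model="AmberYifan/llama3-8b-full-pretrain-mix-high-tweet-1m-en",
    device_map="auto",
)
messages = [{"role": "user", "content": "Write a one-sentence tweet about open-source AI."}]
result = generator(messages, max_new_tokens=64, return_full_text=False)
print(result[0]["generated_text"])
```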
quadcoders/deep-rl-course
quadcoders
2025-06-19T06:11:56Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-19T06:11:35Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.11 +/- 19.28 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
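The usage snippet in the card above is left as a TODO; a minimal sketch of the usual `huggingface_sb3` loading pattern follows. The checkpoint filename `ppo-LunarLander-v2.zip` is an assumption based on the course's naming convention, not something the card states:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the repo stores the agent under the conventional filename.
checkpoint = load_from_hub(
    repo_id="quadcoders/deep-rl-course",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```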
EYEDOL/MISTRAL7B_ON_ALPACA4
EYEDOL
2025-06-19T06:06:29Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.1-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.1-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T06:06:05Z
--- base_model: unsloth/mistral-7b-instruct-v0.1-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** EYEDOL - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.1-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
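The card above documents provenance only; a minimal inference sketch, assuming the repo contains full merged weights loadable with plain `transformers` (the `safetensors` tag suggests this) and that an Alpaca-style prompt matches the training format (both are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EYEDOL/MISTRAL7B_ON_ALPACA4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumption: Alpaca-style instruction formatting, inferred from the model name.
prompt = "### Instruction:\nExplain overfitting in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```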
apriasmoro/fc08d115-d555-4674-864a-0dd0ff54f304
apriasmoro
2025-06-19T06:02:58Z
0
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b", "base_model:adapter:unsloth/codegemma-7b", "license:apache-2.0", "region:us" ]
null
2025-06-19T05:53:22Z
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b tags: - axolotl - generated_from_trainer model-index: - name: fc08d115-d555-4674-864a-0dd0ff54f304 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.10.0.dev0` ```yaml adapter: lora base_model: unsloth/codegemma-7b bf16: true chat_template: llama3 datasets: - data_files: - 5313f4d1e8057633_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' eval_max_new_tokens: 256 evals_per_epoch: 2 flash_attention: false fp16: false gradient_accumulation_steps: 1 gradient_checkpointing: true group_by_length: true hub_model_id: apriasmoro/fc08d115-d555-4674-864a-0dd0ff54f304 learning_rate: 0.0002 logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: false lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 280 micro_batch_size: 4 mlflow_experiment_name: /tmp/5313f4d1e8057633_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true sample_packing: false save_steps: 25 sequence_len: 2048 tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: efaf2747-93ba-4914-bbfb-4587efac813b wandb_project: Gradients-On-Demand wandb_run: apriasmoro wandb_runid: efaf2747-93ba-4914-bbfb-4587efac813b warmup_steps: 100 weight_decay: 0.01 ``` </details><br> # fc08d115-d555-4674-864a-0dd0ff54f304 This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 280 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0122 | 1 | 1.0012 | | 2.6887 | 0.5732 | 47 | 0.9802 | | 0.638 | 1.1463 | 94 | 0.8698 | | 1.3412 | 1.7195 | 141 | 0.8378 | | 0.6638 | 2.2927 | 188 | 0.9116 | | 0.3695 | 2.8659 | 235 | 0.8509 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
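Because the repo above is a PEFT (LoRA) adapter rather than a full model, it has to be attached to the `unsloth/codegemma-7b` base; a minimal sketch of the standard `peft` loading pattern (dtype and device placement are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/codegemma-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "apriasmoro/fc08d115-d555-4674-864a-0dd0ff54f304")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codegemma-7b")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```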
Mungert/GLM-Z1-Rumination-32B-0414-GGUF
Mungert
2025-06-19T05:57:16Z
44
0
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-06-17T18:35:32Z
--- license: mit language: - zh - en pipeline_tag: text-generation library_name: transformers --- # <span style="color: #7FFF7F;">GLM-Z1-Rumination-32B-0414 GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`6adc3c3e`](https://github.com/ggerganov/llama.cpp/commit/6adc3c3ebc029af058ac950a8e2a825fdf18ecc6). --- ## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span> I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides. In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here: 👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py) While this does increase model file size, it significantly improves precision for a given quantization level. ### **I'd love your feedback—have you tried this? How does it perform for you?** --- <a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;"> Click here to get info on choosing the right GGUF model format </a> --- <!--Begin Original Model Card--> # GLM-4-Z1-Rumination-32B-0414 ## Introduction The GLM family welcomes a new generation of open-source models, the **GLM-4-32B-0414** series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports very user-friendly local deployment features. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-type synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves good results in areas such as engineering code, Artifact generation, function calling, search-based Q&A, and report generation. Some benchmarks even rival larger models like GPT-4o and DeepSeek-V3-0324 (671B). **GLM-Z1-32B-0414** is a reasoning model with **deep thinking capabilities**. This was developed based on GLM-4-32B-0414 through cold start and extended reinforcement learning, as well as further training of the model on tasks involving mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During the training process, we also introduced general reinforcement learning based on pairwise ranking feedback, further enhancing the model's general capabilities. **GLM-Z1-Rumination-32B-0414** is a deep reasoning model with **rumination capabilities** (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). 
The rumination model integrates search tools during its deep thinking process to handle complex tasks and is trained by utilizing multiple rule-based rewards to guide and extend end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks. Finally, **GLM-Z1-9B-0414** is a surprise. We employed the aforementioned series of techniques to train a 9B small-sized model that maintains the open-source tradition. Despite its smaller scale, GLM-Z1-9B-0414 still exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is already at a leading level among open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment. ## Inference Code Make sure you are using `transformers>=4.51.3`. ```python from transformers import AutoModelForCausalLM, AutoTokenizer MODEL_PATH = "THUDM/GLM-Z1-Rumination-32B-0414" tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH) model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto") message = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}] inputs = tokenizer.apply_chat_template( message, return_tensors="pt", add_generation_prompt=True, return_dict=True, ).to(model.device) generate_kwargs = { "input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"], "temperature": 0.95, "top_p": 0.7, "do_sample": True, } out = model.generate(**generate_kwargs) print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)) ``` ## Function Call By default, this model currently supports the following `function` calls: - `search`: Search using a keyword and return search results - `click`: Click on a specific webpage in the search results to view details - `open`: Open a fixed URL to view detailed content - `finish`: Complete information gathering and begin writing Below is a simple workflow to help you quickly connect the pipeline.
```python from transformers import AutoModelForCausalLM, AutoTokenizer import re import json MODEL_PATH = "THUDM/GLM-Z1-Rumination-32B-0414" tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH) model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto") messages = [{"role": "user", "content": "Let a, b be positive real numbers such that ab = a + b + 3. Determine the range of possible values for a + b."}] generate_kwargs = { "temperature": 0.95, "top_p": 0.7, "do_sample": True, "max_new_tokens": 16384 } def get_assistant(): inputs = tokenizer.apply_chat_template( messages, return_tensors="pt", add_generation_prompt=True, return_dict=True, ).to(model.device) out = model.generate(input_ids=inputs["input_ids"], **generate_kwargs) return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip() def get_observation(function_name, args): content = None if function_name == "search": mock_search_res = [ {"title": "t1", "url":"url1", "snippet": "snippet_content_1"}, {"title": "t2", "url":"url2", "snippet": "snippet_content_2"} ] content = "\n\n".join(f"【{i}†{res['title']}†{res['url']}\n{res['snippet']}】" for i, res in enumerate(mock_search_res)) elif function_name == "click": mock_click_res = "main content" content = mock_click_res elif function_name == "open": mock_open_res = "main_content" content = mock_open_res else: raise ValueError("unsupported function name!") return content def get_func_name_args(llm_text): function_call = re.sub(r'.*?</think>', '', llm_text, flags=re.DOTALL) function_call = json.loads(function_call) action = function_call['name'] params = function_call['arguments'] return action, params def pipeline(): end_str = "{\"name\": \"finish\", \"arguments\": {}}" response = get_assistant() messages.append({"role": "assistant", "content": response}) max_turns, turns = 35, 1 while not response.endswith(end_str) and turns < max_turns: action, params = get_func_name_args(response) observation = get_observation(action, params) messages.append({"role": "observation", "content": observation}) response = get_assistant() messages.append({"role": "assistant", "content": response}) turns += 1 if response.endswith(end_str): final_answer = get_assistant() else: final_answer = None return final_answer pipeline() ``` <!--End Original Model Card--> --- # <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span> Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**: 👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) The full open-source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder) 💬 **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4.1-mini) - `HugLLM` (Hugging Face open-source models) - `TestLLM` (Experimental CPU-only) ### **What I’m Testing** I’m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap security scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**). Token usage is not limited, as the cost is low. - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4.1-mini**: - It performs very well, but unfortunately OpenAI charges per token, so token usage is limited. - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) 🔵 **HugLLM** – Latest open-source models: - 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita. ### 💡 **Example commands you could test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊
jusjinuk/Llama-2-70b-hf-4bit-GuidedQuant-QTIP
jusjinuk
2025-06-19T05:57:11Z
0
0
null
[ "safetensors", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-70b-hf", "base_model:quantized:meta-llama/Llama-2-70b-hf", "license:llama2", "region:us" ]
null
2025-06-19T05:38:06Z
--- base_model: - meta-llama/Llama-2-70b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-70b-hf` - Quantization method: BlockLDLQ with GuidedQuant Hessian - Target bit-width: 4 - Backend kernel: QTIP kernel (HYB variant) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction - num_groups (for GuidedQuant Hessian): 2 # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant and https://github.com/Cornell-RelaxML/qtip # References - [Model Paper](https://arxiv.org/abs/2505.07004)
bharathsj/bio-medical-mixed-8k
bharathsj
2025-06-19T05:50:26Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-06-19T05:43:13Z
--- license: apache-2.0 ---
gsdfg18919/tyrel
gsdfg18919
2025-06-19T05:49:38Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T05:49:34Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/all-black-background-mukiwp7v3e6j3fd4.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: tyrel --- # tyrel <Gallery /> ## Trigger words You should use `tyrel` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/gsdfg18919/tyrel/tree/main) them in the Files & versions tab.
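A minimal sketch of running the LoRA above with `diffusers`; `FluxPipeline` and `load_lora_weights` are the standard APIs for FLUX.1-dev adapters, but the offloading choice is an assumption, and `weight_name` may be needed if the repo's LoRA file is not stored under the default name:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# May need weight_name="..." if the LoRA file has a non-default filename.
pipe.load_lora_weights("gsdfg18919/tyrel")
pipe.enable_model_cpu_offload()  # assumption: helpful on consumer GPUs

# "tyrel" is the trigger word from the card.
image = pipe("tyrel, portrait on a black background", num_inference_steps=28).images[0]
image.save("tyrel.png")
```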
veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-9-sneaker
veddhanth
2025-06-19T05:45:20Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-19T05:39:17Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a photo of sks sneaker widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-9-sneaker <Gallery /> ## Model description These are veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-9-sneaker LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks sneaker` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-9-sneaker/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch follows this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
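The "How to use" snippet in the card above is still a TODO; a minimal sketch of the usual SDXL-plus-LoRA pattern in `diffusers`, wiring in the fp16-fix VAE the card says was used for training (dtype, step count, and device are assumptions):

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-9-sneaker"
)

# Trigger phrase from the card.
image = pipe("a photo of sks sneaker", num_inference_steps=30).images[0]
image.save("sneaker.png")
```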
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_28_2_song_3_49
winnieyangwannan
2025-06-19T05:44:43Z
156
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-02T17:06:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2
huihui-ai
2025-06-19T05:44:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "chat", "abliterated", "uncensored", "conversational", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T04:26:17Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE pipeline_tag: text-generation base_model: - Qwen/Qwen3-1.7B tags: - chat - abliterated - uncensored --- # huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2 This is an uncensored version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it). This is a crude, proof-of-concept implementation to remove refusals from an LLM without using TransformerLens. Ablation was performed using a new and faster method, which yields better results. **Important Note** This version is an improvement over the previous one, [huihui-ai/Qwen3-1.7B-abliterated](https://huggingface.co/huihui-ai/Qwen3-1.7B-abliterated). The ollama version has also been modified: layer 0 was changed to eliminate the garbled-output problem. ## ollama You can use [huihui_ai/qwen3-abliterated:1.7b-v2](https://ollama.com/huihui_ai/qwen3-abliterated:1.7b-v2) directly. Switch the thinking toggle using `/set think` and `/set nothink`. ``` ollama run huihui_ai/qwen3-abliterated:1.7b-v2 ``` ## Usage You can use this model in your applications by loading it with Hugging Face's `transformers` library: ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer import torch import os import signal import random import numpy as np import time from collections import Counter cpu_count = os.cpu_count() print(f"Number of CPU cores in the system: {cpu_count}") half_cpu_count = cpu_count // 2 os.environ["MKL_NUM_THREADS"] = str(half_cpu_count) os.environ["OMP_NUM_THREADS"] = str(half_cpu_count) torch.set_num_threads(half_cpu_count) print(f"PyTorch threads: {torch.get_num_threads()}") print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}") print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}") # Load the model and tokenizer NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2" print(f"Load Model {NEW_MODEL_ID} ... ") quant_config_4 = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, llm_int8_enable_fp32_cpu_offload=True, ) model = AutoModelForCausalLM.from_pretrained( NEW_MODEL_ID, device_map="auto", trust_remote_code=True, #quantization_config=quant_config_4, torch_dtype=torch.bfloat16 ) tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True) if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token tokenizer.pad_token_id = tokenizer.eos_token_id messages = [] nothink = False same_seed = False skip_prompt=True skip_special_tokens=True do_sample = True def set_random_seed(seed=None): """Set random seed for reproducibility. 
If seed is None, use int(time.time()).""" if seed is None: seed = int(time.time()) # Convert float to int random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) # If using CUDA torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False return seed # Return seed for logging if needed class CustomTextStreamer(TextStreamer): def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True): super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens) self.generated_text = "" self.stop_flag = False self.init_time = time.time() # Record initialization time self.end_time = None # To store end time self.first_token_time = None # To store first token generation time self.token_count = 0 # To track total tokens def on_finalized_text(self, text: str, stream_end: bool = False): if self.first_token_time is None and text.strip(): # Set first token time on first non-empty text self.first_token_time = time.time() self.generated_text += text # Count tokens in the generated text tokens = self.tokenizer.encode(text, add_special_tokens=False) self.token_count += len(tokens) print(text, end="", flush=True) if stream_end: self.end_time = time.time() # Record end time when streaming ends if self.stop_flag: raise StopIteration def stop_generation(self): self.stop_flag = True self.end_time = time.time() # Record end time when generation is stopped def get_metrics(self): """Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second.""" if self.end_time is None: self.end_time = time.time() # Set end time if not already set total_time = self.end_time - self.init_time # Total time from init to end tokens_per_second = self.token_count / total_time if total_time > 0 else 0 first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None metrics = { "init_time": self.init_time, "first_token_time": self.first_token_time, "first_token_latency": first_token_latency, "end_time": self.end_time, "total_time": total_time, # Total time in seconds "total_tokens": self.token_count, "tokens_per_second": tokens_per_second } return metrics def generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, max_new_tokens): input_ids = tokenizer.apply_chat_template( messages, tokenize=True, enable_thinking = not nothink, add_generation_prompt=True, return_tensors="pt" ) attention_mask = torch.ones_like(input_ids, dtype=torch.long) tokens = input_ids.to(model.device) attention_mask = attention_mask.to(model.device) streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens) def signal_handler(sig, frame): streamer.stop_generation() print("\n[Generation stopped by user with Ctrl+C]") signal.signal(signal.SIGINT, signal_handler) generate_kwargs = {} if do_sample: generate_kwargs = { "do_sample": do_sample, "max_length": max_new_tokens, "temperature": 0.6, "top_k": 20, "top_p": 0.95, "repetition_penalty": 1.2, "no_repeat_ngram_size": 2 } else: generate_kwargs = { "do_sample": do_sample, "max_length": max_new_tokens, "repetition_penalty": 1.2, "no_repeat_ngram_size": 2 } print("Response: ", end="", flush=True) try: generated_ids = model.generate( tokens, attention_mask=attention_mask, #use_cache=False, pad_token_id=tokenizer.pad_token_id, streamer=streamer, **generate_kwargs ) del generated_ids except StopIteration: print("\n[Stopped by user]") 
del input_ids, attention_mask torch.cuda.empty_cache() signal.signal(signal.SIGINT, signal.SIG_DFL) return streamer.generated_text, streamer.stop_flag, streamer.get_metrics() init_seed = set_random_seed() while True: if same_seed: set_random_seed(init_seed) else: init_seed = set_random_seed() print(f"\nnothink: {nothink}") print(f"skip_prompt: {skip_prompt}") print(f"skip_special_tokens: {skip_special_tokens}") print(f"do_sample: {do_sample}") print(f"same_seed: {same_seed}, {init_seed}\n") user_input = input("User: ").strip() if user_input.lower() == "/exit": print("Exiting chat.") break if user_input.lower() == "/clear": messages = [] print("Chat history cleared. Starting a new conversation.") continue if user_input.lower() == "/nothink": nothink = not nothink continue if user_input.lower() == "/skip_prompt": skip_prompt = not skip_prompt continue if user_input.lower() == "/skip_special_tokens": skip_special_tokens = not skip_special_tokens continue if user_input.lower().startswith("/same_seed"): parts = user_input.split() if len(parts) == 1: # /same_seed (no number) same_seed = not same_seed # Toggle switch elif len(parts) == 2: # /same_seed <number> try: init_seed = int(parts[1]) # Extract and convert number to int same_seed = True except ValueError: print("Error: Please provide a valid integer after /same_seed") continue if user_input.lower() == "/do_sample": do_sample = not do_sample continue if not user_input: print("Input cannot be empty. Please enter something.") continue messages.append({"role": "user", "content": user_input}) response, stop_flag, metrics = generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, 40960) print("\n\nMetrics:") for key, value in metrics.items(): print(f" {key}: {value}") print("", flush=True) if stop_flag: continue messages.append({"role": "assistant", "content": response}) ``` ### Usage Warnings - **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. - **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. - **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. - **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. - **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. - **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use. ### Donation If you like it, please click 'like' and follow us for more updates. You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai. 
##### Your donation helps us continue further development and improvement; even a cup of coffee helps. - bitcoin (BTC): ``` bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge ```
jusjinuk/Llama-2-70b-hf-3bit-GuidedQuant-QTIP
jusjinuk
2025-06-19T05:41:44Z
0
0
null
[ "safetensors", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-70b-hf", "base_model:quantized:meta-llama/Llama-2-70b-hf", "license:llama2", "region:us" ]
null
2025-06-19T05:22:31Z
--- base_model: - meta-llama/Llama-2-70b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-70b-hf` - Quantization method: BlockLDLQ with GuidedQuant Hessian - Target bit-width: 3 - Backend kernel: QTIP kernel (HYB variant) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction - num_groups (for GuidedQuant Hessian): 2 # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant and https://github.com/Cornell-RelaxML/qtip # References - [Model Paper](https://arxiv.org/abs/2505.07004)
gsdfg18919/petite
gsdfg18919
2025-06-19T05:40:47Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T05:40:45Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/all-black-background-mukiwp7v3e6j3fd4.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: petite --- # petite <Gallery /> ## Trigger words You should use `petite` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/gsdfg18919/petite/tree/main) them in the Files & versions tab.
jusjinuk/Llama-2-7b-hf-3bit-GuidedQuant-QTIP
jusjinuk
2025-06-19T05:40:18Z
0
0
null
[ "safetensors", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-7b-hf", "base_model:quantized:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T04:47:12Z
--- base_model: - meta-llama/Llama-2-7b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-7b-hf` - Quantization method: BlockLDLQ with GuidedQuant Hessian - Target bit-width: 3 - Backend kernel: QTIP kernel (HYB variant) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction - num_groups (for GuidedQuant Hessian): 4 # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant and https://github.com/Cornell-RelaxML/qtip # References - [Model Paper](https://arxiv.org/abs/2505.07004)
mradermacher/CantoneseLLMChat-v1.0-7B-GGUF
mradermacher
2025-06-19T05:39:27Z
68
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "en", "base_model:hon9kon9ize/CantoneseLLMChat-v1.0-7B", "base_model:quantized:hon9kon9ize/CantoneseLLMChat-v1.0-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-05T02:05:34Z
--- base_model: hon9kon9ize/CantoneseLLMChat-v1.0-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - llama-factory - full - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/hon9kon9ize/CantoneseLLMChat-v1.0-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.IQ3_XS.gguf) | IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.IQ3_M.gguf) | IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/CantoneseLLMChat-v1.0-7B-GGUF/resolve/main/CantoneseLLMChat-v1.0-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are 
Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
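For running these GGUF files from Python instead of the llama.cpp CLI, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the Q4_K_M file is chosen because the table above marks it "fast, recommended," and the context size is an assumption:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/CantoneseLLMChat-v1.0-7B-GGUF",
    filename="CantoneseLLMChat-v1.0-7B.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Please introduce yourself in Cantonese."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```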
jusjinuk/Llama-2-70b-hf-2bit-SqueezeLLM
jusjinuk
2025-06-19T05:35:17Z
60
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-70b-hf", "base_model:quantized:meta-llama/Llama-2-70b-hf", "license:llama2", "region:us" ]
null
2025-05-20T15:51:36Z
--- base_model: - meta-llama/Llama-2-70b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-70b-hf` - Quantization method: SqueezeLLM - Target bit-width: 2 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-2-13b-hf-4bit-SqueezeLLM
jusjinuk
2025-06-19T05:34:58Z
15
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-13b-hf", "base_model:quantized:meta-llama/Llama-2-13b-hf", "license:llama2", "region:us" ]
null
2025-05-20T14:45:50Z
--- base_model: - meta-llama/Llama-2-13b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-13b-hf` - Quantization method: SqueezeLLM - Target bit-width: 4 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-2-13b-hf-3bit-SqueezeLLM
jusjinuk
2025-06-19T05:34:48Z
15
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-13b-hf", "base_model:quantized:meta-llama/Llama-2-13b-hf", "license:llama2", "region:us" ]
null
2025-05-20T13:52:06Z
--- base_model: - meta-llama/Llama-2-13b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-13b-hf` - Quantization method: SqueezeLLM - Target bit-width: 3 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-2-7b-hf-3bit-SqueezeLLM
jusjinuk
2025-06-19T05:34:09Z
150
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-7b-hf", "base_model:quantized:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-05-20T20:46:57Z
--- base_model: - meta-llama/Llama-2-7b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-7b-hf` - Quantization method: SqueezeLLM - Target bit-width: 3 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-2-7b-hf-2bit-SqueezeLLM
jusjinuk
2025-06-19T05:34:00Z
1,544
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-7b-hf", "base_model:quantized:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-05-20T20:44:26Z
--- base_model: - meta-llama/Llama-2-7b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-7b-hf` - Quantization method: SqueezeLLM - Target bit-width: 2 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Meta-Llama-3-8B-2bit-SqueezeLLM
jusjinuk
2025-06-19T05:32:45Z
99
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2025-05-20T22:20:24Z
--- base_model: - meta-llama/Meta-Llama-3-8B base_model_relation: quantized license: llama3 --- # Model Card - Base model: `meta-llama/Meta-Llama-3-8B` - Quantization method: SqueezeLLM - Target bit-width: 2 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
yamatazen/EtherealAurora-12B
yamatazen
2025-06-19T05:32:19Z
74
8
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "chatml", "conversational", "en", "ja", "arxiv:2403.19522", "base_model:yamatazen/Aurora-SCE-12B", "base_model:merge:yamatazen/Aurora-SCE-12B", "base_model:yamatazen/Aurora-SCE-12B-v2", "base_model:merge:yamatazen/Aurora-SCE-12B-v2", "base_model:yamatazen/Ayla-Light-12B-Stock", "base_model:merge:yamatazen/Ayla-Light-12B-Stock", "base_model:yamatazen/EtherealLight-12B", "base_model:merge:yamatazen/EtherealLight-12B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-02T13:13:32Z
--- base_model: - yamatazen/Ayla-Light-12B-Stock - yamatazen/Aurora-SCE-12B - yamatazen/EtherealLight-12B - yamatazen/Aurora-SCE-12B-v2 library_name: transformers tags: - mergekit - merge - chatml language: - en - ja license: apache-2.0 --- ![image/png](https://huggingface.co/yamatazen/EtherealAurora-12B/resolve/main/EtherealAurora-12B.png?download=true) This is a ChatML model. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [yamatazen/Aurora-SCE-12B](https://huggingface.co/yamatazen/Aurora-SCE-12B) as a base. ### Models Merged The following models were included in the merge: * [yamatazen/Ayla-Light-12B-Stock](https://huggingface.co/yamatazen/Ayla-Light-12B-Stock) * [yamatazen/EtherealLight-12B](https://huggingface.co/yamatazen/EtherealLight-12B) * [yamatazen/Aurora-SCE-12B-v2](https://huggingface.co/yamatazen/Aurora-SCE-12B-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: yamatazen/Aurora-SCE-12B models: - model: yamatazen/Aurora-SCE-12B-v2 - model: yamatazen/Ayla-Light-12B-Stock - model: yamatazen/EtherealLight-12B merge_method: model_stock dtype: bfloat16 parameters: normalize: true ```
Skewness-RL-KE/Qwen2-Math-1.5B-MetaMathQA
Skewness-RL-KE
2025-06-19T05:31:34Z
74
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2-Math-1.5B", "base_model:finetune:Qwen/Qwen2-Math-1.5B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T11:30:04Z
--- library_name: transformers license: other base_model: Qwen/Qwen2-Math-1.5B tags: - llama-factory - full - generated_from_trainer model-index: - name: sft_lr_5e-5_bs_512 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft_lr_5e-5_bs_512 This model is a fine-tuned version of [Qwen/Qwen2-Math-1.5B](https://huggingface.co/Qwen/Qwen2-Math-1.5B) on the MetaMathQA dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - total_eval_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
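As with the other auto-generated trainer cards, usage is undocumented; a minimal hedged sketch assuming standard `transformers` chat-template inference for this math-SFT checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Skewness-RL-KE/Qwen2-Math-1.5B-MetaMathQA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is 12 * 17 - 5?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(out[0][input_ids.shape[1]:], skip_special_tokens=True))
```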
jusjinuk/Llama-2-13b-hf-4bit-LNQ
jusjinuk
2025-06-19T05:31:30Z
31
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-13b-hf", "base_model:quantized:meta-llama/Llama-2-13b-hf", "license:llama2", "region:us" ]
null
2025-05-20T09:50:21Z
--- base_model: - meta-llama/Llama-2-13b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-13b-hf` - Quantization method: LNQ - Target bit-width: 4 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-2-13b-hf-2bit-LNQ
jusjinuk
2025-06-19T05:31:12Z
65
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-13b-hf", "base_model:quantized:meta-llama/Llama-2-13b-hf", "license:llama2", "region:us" ]
null
2025-05-20T09:27:59Z
--- base_model: - meta-llama/Llama-2-13b-hf base_model_relation: quantized license: llama2 --- # Model Card - Base model: `meta-llama/Llama-2-13b-hf` - Quantization method: LNQ - Target bit-width: 2 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instructions in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
DoNotChoke/llama-3.2-3B-it-thinking-function_calling-V0
DoNotChoke
2025-06-19T05:29:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-19T04:57:32Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct library_name: transformers model_name: llama-3.2-3B-it-thinking-function_calling-V0 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama-3.2-3B-it-thinking-function_calling-V0 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="DoNotChoke/llama-3.2-3B-it-thinking-function_calling-V0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jusjinuk/Llama-2-70b-hf-3bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T05:14:36Z
57
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-70b-hf", "base_model:quantized:meta-llama/Llama-2-70b-hf", "license:llama2", "region:us" ]
null
2025-05-20T11:12:16Z
---
base_model:
- meta-llama/Llama-2-70b-hf
base_model_relation: quantized
license: llama2
---
# Model Card
- Base model: `meta-llama/Llama-2-70b-hf`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 3
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 2
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-2-7b-hf-3bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T05:13:26Z
1541
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-7b-hf", "base_model:quantized:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-05-20T09:14:41Z
---
base_model:
- meta-llama/Llama-2-7b-hf
base_model_relation: quantized
license: llama2
---
# Model Card
- Base model: `meta-llama/Llama-2-7b-hf`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 3
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 4
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-2-13b-hf-2bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T05:13:06Z
90
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-2-13b-hf", "base_model:quantized:meta-llama/Llama-2-13b-hf", "license:llama2", "region:us" ]
null
2025-05-20T09:33:21Z
---
base_model:
- meta-llama/Llama-2-13b-hf
base_model_relation: quantized
license: llama2
---
# Model Card
- Base model: `meta-llama/Llama-2-13b-hf`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 4
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-3.2-1B-Instruct-4bit-GuidedQuant-QTIP
jusjinuk
2025-06-19T05:01:33Z
7
0
null
[ "safetensors", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "region:us" ]
null
2025-06-10T13:14:13Z
---
base_model:
- meta-llama/Llama-3.2-1B-Instruct
base_model_relation: quantized
license: llama3.2
---
# Model Card
- Base model: `meta-llama/Llama-3.2-1B-Instruct`
- Quantization method: BlockLDLQ with GuidedQuant Hessian
- Target bit-width: 4
- Backend kernel: QTIP kernel (HYB variant)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant and https://github.com/Cornell-RelaxML/qtip
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-3.1-70B-Instruct-3bit-GuidedQuant-QTIP
jusjinuk
2025-06-19T05:00:40Z
8
0
null
[ "safetensors", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-3.1-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-70B-Instruct", "license:llama3.1", "region:us" ]
null
2025-06-13T04:03:47Z
---
base_model:
- meta-llama/Llama-3.1-70B-Instruct
base_model_relation: quantized
license: llama3.1
---
# Model Card
- Base model: `meta-llama/Llama-3.1-70B-Instruct`
- Quantization method: BlockLDLQ with GuidedQuant Hessian
- Target bit-width: 3
- Backend kernel: QTIP kernel (HYB variant)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant and https://github.com/Cornell-RelaxML/qtip
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-3.1-70B-Instruct-3bit-GuidedQuant-QTIP-skip_0_v
jusjinuk
2025-06-19T05:00:14Z
0
0
null
[ "arxiv:2505.07004", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "license:llama3.1", "region:us" ]
null
2025-06-13T03:12:51Z
---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
base_model_relation: quantized
license: llama3.1
---
# Model Card
- Base model: `meta-llama/Llama-3.1-70B-Instruct`
- Quantization method: BlockLDLQ with GuidedQuant Hessian
- Target bit-width: 3
- Backend kernel: QTIP kernel (HYB variant)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
- skip_list: 0_v (not quantizing 0_v layer, following YAQA paper)
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant and https://github.com/Cornell-RelaxML/qtip
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
hafizhaaarama/multitask_model
hafizhaaarama
2025-06-19T04:59:18Z
0
0
transformers
[ "transformers", "pytorch", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-13T05:07:33Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: multitask_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multitask_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.123 | 1.0 | 65 | 0.0443 | | 0.0155 | 2.0 | 130 | 0.0094 | | 0.012 | 3.0 | 195 | 0.0074 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
jusjinuk/Llama-3.1-8B-Instruct-2bit-SqueezeLLM
jusjinuk
2025-06-19T04:58:25Z
130
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
null
2025-05-30T17:26:41Z
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
base_model_relation: quantized
license: llama3.1
---
# Model Card
- Base model: `meta-llama/Llama-3.1-8B-Instruct`
- Quantization method: SqueezeLLM
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
howos1234/videomae-base-finetuned-ucf101-subset-v1
howos1234
2025-06-19T04:58:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-06-19T04:21:57Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset-v1 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4081 - Accuracy: 0.8645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 148 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.1477 | 0.2568 | 38 | 1.8577 | 0.4286 | | 0.955 | 1.2568 | 76 | 0.9704 | 0.7286 | | 0.4844 | 2.2568 | 114 | 0.5025 | 0.8286 | | 0.3112 | 3.2297 | 148 | 0.3884 | 0.8714 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.0.1+cu117 - Datasets 3.1.0 - Tokenizers 0.20.3
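A short usage sketch for this checkpoint; the video path is a placeholder, and the `video-classification` pipeline needs the `decord` video backend installed.

```python
from transformers import pipeline

# "clip.mp4" is a placeholder; point this at any short video file.
classifier = pipeline("video-classification", model="howos1234/videomae-base-finetuned-ucf101-subset-v1")
for prediction in classifier("clip.mp4"):
    print(prediction["label"], round(prediction["score"], 3))
```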
jusjinuk/Llama-3.1-8B-Instruct-4bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T04:58:05Z
150
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
null
2025-05-25T15:50:53Z
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
base_model_relation: quantized
license: llama3.1
---
# Model Card
- Base model: `meta-llama/Llama-3.1-8B-Instruct`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 4
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-3.1-8B-Instruct-2bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T04:57:42Z
2107
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
null
2025-05-25T09:01:54Z
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
base_model_relation: quantized
license: llama3.1
---
# Model Card
- Base model: `meta-llama/Llama-3.1-8B-Instruct`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-3.3-70B-Instruct-3bit-SqueezeLLM
jusjinuk
2025-06-19T04:57:06Z
130
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "region:us" ]
null
2025-05-30T16:20:02Z
---
base_model:
- meta-llama/Llama-3.3-70B-Instruct
base_model_relation: quantized
license: llama3.3
---
# Model Card
- Base model: `meta-llama/Llama-3.3-70B-Instruct`
- Quantization method: SqueezeLLM
- Target bit-width: 3
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-3.3-70B-Instruct-2bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T04:56:33Z
30
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "region:us" ]
null
2025-05-26T07:58:35Z
---
base_model:
- meta-llama/Llama-3.3-70B-Instruct
base_model_relation: quantized
license: llama3.3
---
# Model Card
- Base model: `meta-llama/Llama-3.3-70B-Instruct`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Llama-3.3-70B-Instruct-3bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T04:56:04Z
66
0
null
[ "pytorch", "llama", "arxiv:2505.07004", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "region:us" ]
null
2025-05-27T02:37:50Z
---
base_model:
- meta-llama/Llama-3.3-70B-Instruct
base_model_relation: quantized
license: llama3.3
---
# Model Card
- Base model: `meta-llama/Llama-3.3-70B-Instruct`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 3
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989
veddhanth
2025-06-19T04:54:47Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-19T04:41:27Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a realistic portrait of sks face widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989 <Gallery /> ## Model description These are veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a realistic portrait of sks face to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
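Since the snippet above is still a TODO, here is a hedged inference sketch. It assumes the LoRA was saved under the default filename produced by the diffusers DreamBooth script, so `load_lora_weights` can resolve it from the repo id alone.

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumes the default weight filename (pytorch_lora_weights.safetensors) in the repo.
pipeline.load_lora_weights("veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-8-1989")
image = pipeline("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```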
jusjinuk/gemma-3-27b-it-4bit-SqueezeLLM
jusjinuk
2025-06-19T04:53:34Z
19
0
null
[ "pytorch", "gemma3", "arxiv:2505.07004", "base_model:google/gemma-3-27b-it", "base_model:quantized:google/gemma-3-27b-it", "license:gemma", "region:us" ]
null
2025-06-02T03:34:56Z
---
base_model:
- google/gemma-3-27b-it
base_model_relation: quantized
license: gemma
---
# Model Card
- Base model: `google/gemma-3-27b-it`
- Quantization method: SqueezeLLM
- Target bit-width: 4
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/gemma-3-27b-it-3bit-SqueezeLLM
jusjinuk
2025-06-19T04:53:26Z
19
0
null
[ "pytorch", "gemma3", "arxiv:2505.07004", "base_model:google/gemma-3-27b-it", "base_model:quantized:google/gemma-3-27b-it", "license:gemma", "region:us" ]
null
2025-06-02T03:00:48Z
---
base_model:
- google/gemma-3-27b-it
base_model_relation: quantized
license: gemma
---
# Model Card
- Base model: `google/gemma-3-27b-it`
- Quantization method: SqueezeLLM
- Target bit-width: 3
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/gemma-3-27b-it-3bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T04:52:45Z
12
0
null
[ "pytorch", "gemma3", "arxiv:2505.07004", "base_model:google/gemma-3-27b-it", "base_model:quantized:google/gemma-3-27b-it", "license:gemma", "region:us" ]
null
2025-06-02T02:46:17Z
---
base_model:
- google/gemma-3-27b-it
base_model_relation: quantized
license: gemma
---
# Model Card
- Base model: `google/gemma-3-27b-it`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 3
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/gemma-3-27b-it-2bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T04:52:20Z
29
0
null
[ "pytorch", "gemma3", "arxiv:2505.07004", "base_model:google/gemma-3-27b-it", "base_model:quantized:google/gemma-3-27b-it", "license:gemma", "region:us" ]
null
2025-06-02T02:22:36Z
---
base_model:
- google/gemma-3-27b-it
base_model_relation: quantized
license: gemma
---
# Model Card
- Base model: `google/gemma-3-27b-it`
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
JayHyeon/Qwen_1.5B-math-DPO_1e-4_1.0vpo_constant-10ep
JayHyeon
2025-06-19T04:51:11Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:argilla/distilabel-math-preference-dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-Math-1.5B", "base_model:finetune:Qwen/Qwen2.5-Math-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T04:09:21Z
--- base_model: Qwen/Qwen2.5-Math-1.5B datasets: argilla/distilabel-math-preference-dpo library_name: transformers model_name: Qwen_1.5B-math-DPO_1e-4_1.0vpo_constant-10ep tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Qwen_1.5B-math-DPO_1e-4_1.0vpo_constant-10ep This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JayHyeon/Qwen_1.5B-math-DPO_1e-4_1.0vpo_constant-10ep", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/7e1oxnft) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.0 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
asdfre453/HUDA2
asdfre453
2025-06-19T04:39:13Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T04:13:06Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: HUDA --- # Huda2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `HUDA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "HUDA", "lora_weights": "https://huggingface.co/asdfre453/HUDA2/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('asdfre453/HUDA2', weight_name='lora.safetensors') image = pipeline('HUDA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/asdfre453/HUDA2/discussions) to add images that show off what you’ve made with this LoRA.
veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-8-sneaker
veddhanth
2025-06-19T04:35:43Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-19T04:29:37Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a photo of sks sneaker widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-8-sneaker <Gallery /> ## Model description These are veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-8-sneaker LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks sneaker to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-8-sneaker/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
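Since the snippet above is still a TODO, here is a hedged inference sketch. It assumes the LoRA was saved under the default filename produced by the diffusers DreamBooth script, so `load_lora_weights` can resolve it from the repo id alone.

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumes the default weight filename (pytorch_lora_weights.safetensors) in the repo.
pipeline.load_lora_weights("veddhanth/lora-trained-xl-stage-2-pretrained-enc-v2-spat-map-8-sneaker")
image = pipeline("a photo of sks sneaker").images[0]
image.save("sneaker.png")
```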
kataragi/ControlNet-LineartXL
kataragi
2025-06-19T04:31:44Z
0
39
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-11T06:12:00Z
---
license: creativeml-openrail-m
---
# controlnet_lineartXL
- This is a ControlNet for coloring line art with Stable Diffusion SDXL. It can be used with the Lineart preprocessor.

# Usage
Set a line-art image, or an already-colored image, in the ControlNet unit.
Set the preprocessor to Lineart. Thick lines do not work well, so lineart_anime_denoise or lineart_anime is recommended.
If your line art is black lines on a white background, use the invert (from white bg & black line) preprocessor.
The recommended model for the fp16 version is animagineXL3.1; it does not work well with Pony-family models.
The LoRA type (400MB) is for animagineXL3.1 only.
- ![](test1.png)
Coloring from line art produces results like this.
- ![](test2.png)
Repainting only the colors of an already-colored image produces results like this.
- ![](test3.png)
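A hedged diffusers sketch of the colorization flow described above. It assumes the repository weights are in a diffusers-compatible ControlNet layout (which may not hold for WebUI-format checkpoints), and the line-art path is a placeholder.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Assumption: the checkpoint can be loaded as a diffusers ControlNetModel.
controlnet = ControlNetModel.from_pretrained("kataragi/ControlNet-LineartXL", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
lineart = load_image("lineart.png")  # placeholder: line art, inverted to white-on-black if needed
image = pipe("1girl, colorful, masterpiece", image=lineart).images[0]
image.save("colored.png")
```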
asdfre453/HUDA
asdfre453
2025-06-19T04:07:26Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T03:42:52Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: HUDA --- # Huda <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `HUDA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "HUDA", "lora_weights": "https://huggingface.co/asdfre453/HUDA/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('asdfre453/HUDA', weight_name='lora.safetensors') image = pipeline('HUDA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/asdfre453/HUDA/discussions) to add images that show off what you’ve made with this LoRA.
ThomasComics/Nemo-Patricide-Humanize-12B-v1
ThomasComics
2025-06-19T04:06:47Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:cgato/Nemo-12b-Humanize-KTO-Experimental-2", "base_model:merge:cgato/Nemo-12b-Humanize-KTO-Experimental-2", "base_model:redrix/patricide-12B-Unslop-Mell-v2", "base_model:merge:redrix/patricide-12B-Unslop-Mell-v2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T03:59:55Z
--- base_model: - redrix/patricide-12B-Unslop-Mell-v2 - cgato/Nemo-12b-Humanize-KTO-Experimental-2 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the NuSLERP merge method. ### Models Merged The following models were included in the merge: * [redrix/patricide-12B-Unslop-Mell-v2](https://huggingface.co/redrix/patricide-12B-Unslop-Mell-v2) * [cgato/Nemo-12b-Humanize-KTO-Experimental-2](https://huggingface.co/cgato/Nemo-12b-Humanize-KTO-Experimental-2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: redrix/patricide-12B-Unslop-Mell-v2 parameters: weight: [0.6, 0.5, 0.3, 0.5, 0.6] - model: cgato/Nemo-12b-Humanize-KTO-Experimental-2 parameters: weight: [0.4, 0.5, 0.7, 0.5, 0.4] merge_method: nuslerp dtype: bfloat16 chat_template: "chatml" tokenizer: source: union parameters: normalize: true int8_mask: true ```
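A minimal inference sketch for the merged model, assuming it loads as a standard Mistral-architecture causal LM; the config above sets a ChatML chat template.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ThomasComics/Nemo-Patricide-Humanize-12B-v1",
                     torch_dtype="auto", device_map="auto")
messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```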
hooyah/ppo-LunarLander-v2
hooyah
2025-06-19T04:02:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-19T04:02:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.58 +/- 15.70 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
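A completed version of the placeholder snippet above; the checkpoint filename is an assumption based on the usual naming convention for these course uploads.

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's file listing.
checkpoint = load_from_hub(repo_id="hooyah/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```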
Muennighoff/Qwen2.5-1.5B-hl-false-v8
Muennighoff
2025-06-19T03:56:38Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:simplescaling/openaimath", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T03:31:02Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct datasets: simplescaling/openaimath library_name: transformers model_name: Qwen2.5-1.5B-hl-false-v8 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen2.5-1.5B-hl-false-v8 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [simplescaling/openaimath](https://huggingface.co/datasets/simplescaling/openaimath) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Muennighoff/Qwen2.5-1.5B-hl-false-v8", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muennighoff/halos/runs/0w7vw30q) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
luyotw/openfun-ivod-whisper-small-WuSiYao-10-75
luyotw
2025-06-19T03:53:04Z
0
0
null
[ "tensorboard", "safetensors", "whisper", "region:us" ]
null
2025-06-19T03:30:07Z
# Fine-tune Info
- Base model: `openai/whisper-small`
- Number of audio clips: 12588
- Total audio length: 8.47 hours
- Average clip length: 2.42 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 04:52:17
- Model size: 0.90 GB

---
# Model Card
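A minimal transcription sketch for this fine-tune; the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="luyotw/openfun-ivod-whisper-small-WuSiYao-10-75",
               chunk_length_s=30)  # chunking is only needed for clips over 30 seconds
# "speech.wav" is a placeholder audio file.
print(asr("speech.wav")["text"])
```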
chiruan/qwen2.5-7b-coder_V2-220steps
chiruan
2025-06-19T03:47:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T03:21:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
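The card above is still the blank template, so here is a hedged quick-start sketch in the style used elsewhere in this collection, assuming the checkpoint behaves like a standard Qwen2.5 chat model.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="chiruan/qwen2.5-7b-coder_V2-220steps",
                     torch_dtype="auto", device_map="auto")
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```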
er6y/bge-reranker-v2-m3_dynamic_int8_onnx
er6y
2025-06-19T03:46:39Z
0
0
null
[ "onnx", "xlm-roberta", "base_model:BAAI/bge-reranker-v2-m3", "base_model:quantized:BAAI/bge-reranker-v2-m3", "region:us" ]
null
2025-06-19T03:39:28Z
---
base_model:
- BAAI/bge-reranker-v2-m3
base_model_relation: quantized
license: apache-2.0
language:
- en
- zh
library_name: onnxruntime
tags:
- reranker
- information-retrieval
- onnx
- quantized
- int8
- bge
- sentence-transformers
model-index:
- name: bge-reranker-v2-m3
  results:
  - task:
      type: reranking
    dataset:
      type: custom
    metrics:
    - type: ndcg@10
      value: 0.xx
---

# BGE Reranker v2 M3 (Dynamic INT8 ONNX)

This is a dynamic INT8 quantized ONNX version of the [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) model, optimized for efficient inference.

## Model Description

BGE Reranker v2 M3 is a powerful multilingual reranking model that supports semantic reranking of Chinese and English text. This version has been dynamically quantized to INT8, significantly reducing model size and inference time while maintaining high accuracy.

### Key Features

- **Multilingual**: supports Chinese and English
- **Efficient inference**: dynamic INT8 quantization gives a 2-4x speedup
- **Model compression**: roughly 75% smaller than the original model
- **ONNX format**: cross-platform deployment
- **Accuracy preserved**: less than 1% accuracy loss after quantization

## Model Specifications

- **Model type**: Reranker
- **Quantization**: Dynamic INT8
- **Framework**: ONNX Runtime
- **Input length**: up to 512 tokens
- **Languages**: Chinese, English
- **Model size**: ~100MB (original model ~400MB)

## Usage

### Requirements

```bash
pip install onnxruntime
pip install transformers
pip install numpy
```
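A hedged usage sketch for the quantized reranker. The ONNX filename is an assumption (check the repo's file listing), and the tokenizer comes from the original BAAI checkpoint.

```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-v2-m3")
# "model.onnx" is an assumed filename for the quantized export.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

pairs = [["what is panda?", "The giant panda is a bear species endemic to China."]]
inputs = tokenizer(pairs, padding=True, truncation=True, max_length=512, return_tensors="np")
# Feed only the inputs the exported graph actually declares.
ort_inputs = {k: v for k, v in inputs.items() if k in {i.name for i in session.get_inputs()}}
scores = session.run(None, ort_inputs)[0]
print(scores.squeeze(-1))  # higher score = more relevant
```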
rmdhirr/suja-lorab-ep6-suja-1000
rmdhirr
2025-06-19T03:46:24Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:rmdhirr/merged-suja-latest", "base_model:adapter:rmdhirr/merged-suja-latest", "region:us" ]
null
2025-06-19T03:45:23Z
--- base_model: rmdhirr/merged-suja-latest library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
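The template above is unfilled, so here is a hedged loading sketch with PEFT; it assumes the base checkpoint `rmdhirr/merged-suja-latest` is a causal language model.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the base checkpoint is a causal LM.
base = AutoModelForCausalLM.from_pretrained("rmdhirr/merged-suja-latest",
                                            torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "rmdhirr/suja-lorab-ep6-suja-1000")
tokenizer = AutoTokenizer.from_pretrained("rmdhirr/merged-suja-latest")
```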
hardlyworking/Final4BRC3
hardlyworking
2025-06-19T03:46:06Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "generated_from_trainer", "conversational", "dataset:ResplendentAI/Luna_NSFW_Text", "dataset:ResplendentAI/Sissification_Hypno_1k", "dataset:ResplendentAI/Synthetic_Soul_1k", "base_model:hardlyworking/4BTestRC", "base_model:finetune:hardlyworking/4BTestRC", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:17:26Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: hardlyworking/4BTestRC tags: - axolotl - generated_from_trainer datasets: - ResplendentAI/Luna_NSFW_Text - ResplendentAI/Sissification_Hypno_1k - ResplendentAI/Synthetic_Soul_1k model-index: - name: Final4BRC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.11.0.dev0` ```yaml base_model: hardlyworking/4BTestRC load_in_8bit: false load_in_4bit: false strict: false chat_template: chatml datasets: - path: ResplendentAI/Luna_NSFW_Text type: completion - path: ResplendentAI/Sissification_Hypno_1k type: alpaca - path: ResplendentAI/Synthetic_Soul_1k type: alpaca val_set_size: 0 output_dir: ./outputs/out dataset_prepared_path: last_run_prepared shuffle_merged_datasets: true hub_model_id: hardlyworking/Final4BRC hub_strategy: "all_checkpoints" push_dataset_to_hub: hf_use_auth_token: true plugins: - axolotl.integrations.liger.LigerPlugin - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin liger_rope: true liger_rms_norm: true liger_layer_norm: true liger_glu_activation: true liger_fused_linear_cross_entropy: false cut_cross_entropy: true sequence_len: 32768 sample_packing: true eval_sample_packing: true pad_to_sequence_len: true wandb_project: Xgen4Bnsfw wandb_entity: wandb_watch: wandb_name: Xgen4Bnsfw wandb_log_model: evals_per_epoch: eval_table_size: eval_max_new_tokens: gradient_accumulation_steps: 1 micro_batch_size: 1 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 5e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: offload gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: deepspeed: warmup_ratio: 0.05 saves_per_epoch: 1 debug: weight_decay: 0.01 fsdp: fsdp_config: special_tokens: pad_token: ``` </details><br> # Final4BRC This model is a fine-tuned version of [hardlyworking/4BTestRC](https://huggingface.co/hardlyworking/4BTestRC) on the ResplendentAI/Luna_NSFW_Text, the ResplendentAI/Sissification_Hypno_1k and the ResplendentAI/Synthetic_Soul_1k datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 3 - training_steps: 72 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
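A quick inference sketch for this checkpoint; the config above sets a ChatML template, so chat-style messages should go through the standard pipeline. Untested, so treat it as a sketch.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="hardlyworking/Final4BRC3",
                     torch_dtype="auto", device_map="auto")
messages = [{"role": "user", "content": "Describe your persona in one paragraph."}]
print(generator(messages, max_new_tokens=200, return_full_text=False)[0]["generated_text"])
```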
cgg507/Valkyrie-v1-awq
cgg507
2025-06-19T03:44:38Z
0
0
null
[ "safetensors", "nemotron-nas", "custom_code", "en", "dataset:HuggingFaceH4/ultrachat_200k", "arxiv:1910.09700", "base_model:TheDrummer/Valkyrie-49B-v1", "base_model:quantized:TheDrummer/Valkyrie-49B-v1", "compressed-tensors", "region:us" ]
null
2025-06-18T03:27:35Z
---
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
base_model:
- TheDrummer/Valkyrie-49B-v1
---
# Model Card for Model ID

So, I ran this with llm-compressor using 64 calibration samples, so it may have lost some smarts here. I haven't been able to test it yet, as it requires > sm80 and I'm still stuck with sm75. I will update this card when I get my new cards. If this runs well, I'll redo it with 256/512 samples.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

https://huggingface.co/TheDrummer/Valkyrie-49B-v1

## Uses

Llama 3 Chat Template. `<think>`-capable upon prefill, or by placing "detailed thinking on" on top of the system prompt.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
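A hedged vLLM sketch for this compressed checkpoint: vLLM supports compressed-tensors quantization, the custom `nemotron-nas` architecture needs `trust_remote_code=True`, and (as the card notes) a GPU newer than sm75 is required. None of this has been verified on the actual weights.

```python
from vllm import LLM, SamplingParams

# Requires an Ampere-or-newer GPU (>= sm80), per the note in this card.
llm = LLM(model="cgg507/Valkyrie-v1-awq", trust_remote_code=True)
params = SamplingParams(max_tokens=200, temperature=0.7)
outputs = llm.generate(["Tell me a short story about a valkyrie."], params)
print(outputs[0].outputs[0].text)
```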
bharathkumar1922001/10-speaker-SOTA-2400
bharathkumar1922001
2025-06-19T03:44:37Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:canopylabs/3b-hi-pretrain-research_release", "base_model:adapter:canopylabs/3b-hi-pretrain-research_release", "region:us" ]
null
2025-06-19T03:44:03Z
--- base_model: canopylabs/3b-hi-pretrain-research_release library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
adity12345/roberta-classifier_batch32
adity12345
2025-06-19T03:43:02Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-large-mnli", "base_model:finetune:microsoft/deberta-large-mnli", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-19T03:41:45Z
--- library_name: transformers license: mit base_model: microsoft/deberta-large-mnli tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-classifier_batch32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-classifier_batch32 This model is a fine-tuned version of [microsoft/deberta-large-mnli](https://huggingface.co/microsoft/deberta-large-mnli) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.1474 - Accuracy: 0.941 - Auc: 0.988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:| | 0.2918 | 1.0 | 161 | 0.2840 | 0.889 | 0.976 | | 0.2151 | 2.0 | 322 | 0.1792 | 0.923 | 0.984 | | 0.193 | 3.0 | 483 | 0.1571 | 0.938 | 0.986 | | 0.1756 | 4.0 | 644 | 0.1434 | 0.943 | 0.988 | | 0.1623 | 5.0 | 805 | 0.1474 | 0.941 | 0.988 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
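The card above omits a usage snippet; a minimal inference sketch (the label names come from the fine-tuned config and are not listed in the card):

```python
from transformers import pipeline

# load the fine-tuned DeBERTa classifier from the Hub
clf = pipeline("text-classification", model="adity12345/roberta-classifier_batch32")

result = clf("Example sentence to classify.")
print(result)  # e.g. [{'label': ..., 'score': ...}]; labels depend on the model config
```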
JayHyeon/Qwen_1.5B-math-DPO_5e-5_1.0vpo_constant-20ep
JayHyeon
2025-06-19T03:42:00Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:argilla/distilabel-math-preference-dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-Math-1.5B", "base_model:finetune:Qwen/Qwen2.5-Math-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:19:45Z
--- base_model: Qwen/Qwen2.5-Math-1.5B datasets: argilla/distilabel-math-preference-dpo library_name: transformers model_name: Qwen_1.5B-math-DPO_5e-5_1.0vpo_constant-20ep tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Qwen_1.5B-math-DPO_5e-5_1.0vpo_constant-20ep This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JayHyeon/Qwen_1.5B-math-DPO_5e-5_1.0vpo_constant-20ep", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/bmljfinm) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.0 - Pytorch: 2.6.0 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-6
veddhanth
2025-06-19T03:34:23Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-19T03:05:53Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a realistic portrait of sks face widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-6 <Gallery /> ## Model description These are veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-6 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a realistic portrait of sks face` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-6/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch using standard SDXL LoRA loading in diffusers:

```python
import torch
from diffusers import DiffusionPipeline

# load the SDXL base pipeline and attach the LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-map-6")

# the trigger phrase from the card activates the learned subject
image = pipe("a realistic portrait of sks face", num_inference_steps=25).images[0]
image.save("portrait.png")
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-GGUF
Alvin-LiuJia
2025-06-19T03:32:04Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-Merge", "base_model:quantized:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-Merge", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-19T03:31:40Z
--- base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-Merge tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Alvin-LiuJia - **License:** apache-2.0 - **Finetuned from model :** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-Merge This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
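Since no usage example is given above, a minimal sketch with llama-cpp-python; the `.gguf` filename pattern is an assumption, so check the repository's Files tab for the actual quant names:

```python
# Hypothetical usage sketch; the filename glob below is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-GGUF",
    filename="*Q4_K_M.gguf",  # pick whichever quantization file actually exists
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly list common symptoms of anemia."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```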
phospho-app/Selinaliu1030-ACT_BBOX-example_dataset_move_toast-vylei
phospho-app
2025-06-19T03:31:25Z
0
0
null
[ "phosphobot", "act", "region:us" ]
null
2025-06-19T03:31:13Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` No video directory found with key main, secondary_0, found: ['observation.images.main', 'observation.images.secondary_0'] Please specify one of the following video keys when launching a training: observation.images.main, observation.images.secondary_0. ``` ## Training parameters: - **Dataset**: [Selinaliu1030/example_dataset_move_toast](https://huggingface.co/datasets/Selinaliu1030/example_dataset_move_toast) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
qaz2352748/test
qaz2352748
2025-06-19T03:29:21Z
0
0
null
[ "region:us" ]
null
2024-08-15T02:17:38Z
testsetsetestestestestestes testset testeststesettestest
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
Alvin-LiuJia
2025-06-19T03:27:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-Merge", "base_model:finetune:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-Merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T03:03:55Z
--- base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-Merge tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Alvin-LiuJia - **License:** apache-2.0 - **Finetuned from model :** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-Merge This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
chiruan/qwen2.5-7b-coder_V2-210steps
chiruan
2025-06-19T03:21:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:50:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thunder-research-group/SNU_Thunder-DeID-340M
thunder-research-group
2025-06-19T03:16:04Z
38
0
transformers
[ "transformers", "safetensors", "deberta-v2", "token-classification", "ner", "korean", "court-judgment", "de-identification", "custom_code", "ko", "arxiv:2506.15266", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "region:us" ]
token-classification
2025-06-09T05:10:40Z
--- library_name: transformers tags: - token-classification - ner - korean - court-judgment - de-identification license: cc-by-nc-sa-4.0 language: ko --- # Model Card for SNU Thunder-DeID <!-- Provide a quick summary of what the model is/does. --> ## Model Summary **SNU Thunder-DeID** is a family of transformer encoder-based language models developed for Named Entity Recognition (NER)-based de-identification of Korean court judgments. Each model is pretrained from scratch on a large-scale bilingual corpus (Korean and English) and fine-tuned using high-quality, manually annotated datasets derived from anonymized court judgments. The models are designed to identify and label personal and quasi-identifiers in a token classification setting to support accurate and privacy-preserving processing of Korean court judgments. The SNU Thunder-DeID models are released in three sizes: - SNU Thunder-DeID-340M (here) - [SNU Thunder-DeID-750M](https://huggingface.co/thunder-research-group/SNU_Thunder-DeID-750M) - [SNU Thunder-DeID-1.5B](https://huggingface.co/thunder-research-group/SNU_Thunder-DeID-1.5B) ## Intended Use The SNU Thunder-DeID models are intended to support: - **De-identification** of Korean court judgments - **NER tasks** focused on court judgments entities - Fine-tuning for privacy-preserving AI systems ## How to Use ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("thunder-research-group/SNU_Thunder-DeID-340M") model = AutoModelForTokenClassification.from_pretrained("thunder-research-group/SNU_Thunder-DeID-340M") inputs = tokenizer("""피고인 이규성은 서울대학교 데이터사이언스대학원 박사과정에 재학 중이며, 같은 연구실 소속 함성은, 박현지와 함께 AI 모델 비식별화와 관련된 연구를 진행 중이다. 그는 해당 기술이 이미 여러 공공기관 및 대기업으로부터 상용화 제안을 받고 있다고 허위로 주장하며, 커뮤니티 사이트 ‘에브리타임’에 “비식별화 기술 투자자 모집”이라는 제목의 글을 게시하였다. 해당 글에는 “이미 검증된 알고리즘, 선점 투자 시 지분 우선 배정”, “특허 수익 배분 예정” 등의 문구와 함께 자신 명의의 우리은행 계좌 (9429-424-343942)를 기재하고, 1인당 10만 원의 초기 투자금을 요구하였다. 이에 따라 이규성은 손영준, 조경제, 이동영, 소연경, 석지헌 등 5명으로부터 총 50만 원을 송금받아 편취하였다.""", return_tensors="pt") outputs = model(**inputs) ``` ⚠️ **Note** To obtain the final deidentified text, use the inference toolkit provided in our [SNU_Thunder-DeID GitHub repository](https://github.com/mcrl/SNU_Thunder-DeID). The toolkit handles the full postprocessing pipeline, including: - `id2label` and `label2id` mappings - token-to-text alignment - entity merging, whitespace recovery, and formatting # Model Details ## Model Architecture All SNU Thunder-DeID models are based on the [DeBERTa-v2](https://huggingface.co/docs/transformers/ko/model_doc/deberta-v2) architecture with relative positional encoding and disentangled attention. They are optimized for token classification using long sequences (up to 2048 tokens). 
| Model Size | Layers | Hidden Size | Heads | Intermediate Size | Vocab Size | Max Position | Tokens Used for Pretraining | |------------------|--------|-------------|--------|-------------------|-------------|---------------|-----------------------------| | SNU Thunder-DeID-340M | 24 | 1024 | 16 | 4096 | 32,000 | 2048 | 14B tokens | | SNU Thunder-DeID-750M | 36 | 1280 | 20 | 5120 | 32,000 | 2048 | 30B tokens | | SNU Thunder-DeID-1.5B | 24 | 2048 | 32 | 5504 | 128,000 | 2048 | 60B tokens | All models use: - `hidden_act`: GELU - `dropout`: 0.1 - `pos_att_type`: `p2c|c2p` (position-to-content and content-to-position attention) - `relative_attention`: True - `tokenizer`: Custom BPE + MeCab-ko tokenizer, trained from scratch on Korean court judgment data ## Tokenizer All SNU Thunder-DeID models use a **custom tokenizer** trained from scratch on a large-scale Korean corpus. The tokenizer combines: - [**MeCab-ko**](https://bitbucket.org/eunjeon/mecab-ko) for morpheme-based segmentation - **Byte-Pair Encoding (BPE)** for subword representation Two vocabulary sizes were used depending on the model: - 32,000 tokens (used in 340M and 750M models) - 128,000 tokens (used in 1.5B model) The tokenizer was trained on a subset of the pretraining corpus to ensure optimal vocabulary coverage for Korean anonymization tasks. ## Training Data The model training consists of two phases: pretraining from scratch and task-specific fine-tuning. ### Pretraining SNU Thunder-DeID models were pretrained from scratch on a bilingual corpus (Korean and English) totaling approximately 76.7GB, using 14B / 30B / 60B tokens for the 340M, 750M, and 1.5B models respectively. ### Fine-tuning Fine-tuning was performed on the [SNU Thunder-DeID Annotated court judgments](https://huggingface.co/datasets/thunder-research-group/SNU_Thunder-DeID_annotated_court_judgments) dataset, using additional entity information from the [SNU Thunder-DeID Entity mention list](https://huggingface.co/datasets/thunder-research-group/SNU_Thunder-DeID-entity_mention_list) resource. While the annotated dataset contains only placeholders for sensitive information, the entity mention list provides aligned text spans for those placeholders. This alignment enables full token-level supervision for NER training. - **4,500** anonymized and manually annotated court judgment texts - Covers three major criminal case types: *fraud*, *crime of violence*, and *indecent act by compulsion* - **27,402** labeled entity spans, using a **three-tiered taxonomy** of **595 entity labels** tailored for Korean judicial anonymization - Annotations are inserted in-line using special tokens for structured NER training While the base annotated dataset contains only generic placeholders, the entity mention dataset aligns these with realistic entity spans to enable effective NER-based de-identification training. ## Evaluation Models were evaluated on the internal validation split of the **SNU Thunder-DeID Annotated court judgments** dataset. | Metric | 340M | 750M | 1.5B | |-----------------------------|--------|--------|--------| | Binary Token-Level Micro F1 | 0.9894 | 0.9891 | 0.9910 | | Token-Level Micro F1 | 0.8917 | 0.8862 | 0.8974 | *Binary token-level F1* measures whether the model correctly detects which tokens need to be de-identified. *Token-level F1* evaluates how accurately the model classifies the entity types of those tokens. 
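Continuing the snippet from the How to Use section above, a minimal sketch of turning the model's logits into per-token entity labels (the full de-identification pipeline, including entity merging and whitespace recovery, still requires the toolkit linked earlier; the BIO-style `O` label for non-entity tokens is an assumption):

```python
import torch

# highest-scoring label id for every token in the first sequence
pred_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# map label ids to names via the model's own config
for token, pred in zip(tokens, pred_ids.tolist()):
    label = model.config.id2label[pred]
    if label != "O":  # assumes "O" marks non-entity tokens
        print(token, label)
```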
## Limitations - Trained only on criminal court cases; not guaranteed to generalize to civil or administrative rulings - Designed for Korean texts; not applicable to other languages or domains - Not suitable for identifying sensitive content outside of structured NER targets ## Ethical Considerations - The model is trained on already-anonymized court documents - Deployment in real-world settings should still include human oversight and legal compliance checks ## License This repository contains original work licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (**CC BY-NC-SA 4.0**). Portions of this repository (including tokenizer vocabulary and/or model weights) are derived from Meta Llama 3.1 and are subject to the Meta Llama 3.1 Community License. https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE - Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License: https://creativecommons.org/licenses/by-nc-sa/4.0/ ## Citation If you use this model, please cite: ```bibtex @misc{hahm2025thunderdeidaccurateefficientdeidentification, title={Thunder-DeID: Accurate and Efficient De-identification Framework for Korean Court Judgments}, author={Sungen Hahm and Heejin Kim and Gyuseong Lee and Hyunji Park and Jaejin Lee}, year={2025}, eprint={2506.15266}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2506.15266}, } ``` ## Contact If you have questions or issues, contact: **[email protected]**
NanEi/llama-3.2-3b-it-Burmese-NEEK-ChatBot
NanEi
2025-06-19T03:06:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T03:05:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kyujinpy/KoT-platypus2-13B
kyujinpy
2025-06-19T02:58:45Z
3,160
6
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "ko", "dataset:kyujinpy/KoCoT_2000", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-05T18:16:45Z
--- language: - ko datasets: - kyujinpy/KoCoT_2000 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- **This model was developed by the LLM research consortium of MediaGroup Saramgwasoop Co., Ltd. and Marker Inc.** **The license is `cc-by-nc-sa-4.0`.** # **KoT-platypus2** ![img](./KoT-platypus2.png) **CoT + KO-platypus2 = KoT-platypus2** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** KoT-platypus2-13B is an auto-regressive language model based on the LLaMA2 transformer architecture. **Repo Link** GitHub KoT-platypus: [KoT-platypus2](https://github.com/KyujinHan/KoT-platypus) **Base Model** [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) More detailed repo (GitHub): [CoT-llama2](https://github.com/Marker-Inc-Korea/CoT-llama2) More detailed repo (GitHub): [KO-Platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus) **Training Dataset** I used [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000), which was translated from [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) using DeepL. Training was done on an A100 40GB GPU in Colab. **Training Hyperparameters** | Hyperparameters | Value | | --- | --- | | batch_size | `64` | | micro_batch_size | `1` | | Epochs | `15` | | learning_rate | `1e-5` | | cutoff_len | `4096` | | lr_scheduler | `linear` | | base_model | `kyujinpy/KO-Platypus2-13B` | # **Model Benchmark** ## KO-LLM leaderboard - Evaluated on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). ![img](./leaderboard.png) | Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | --- | --- | --- | --- | --- | --- | --- | | KoT-Platypus2-13B (ours) | 49.55 | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 | | [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 | | [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 | | [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 | | [momo/polyglot-ko-12.8b-Chat-QLoRA-Merge](https://huggingface.co/momo/polyglot-ko-12.8b-Chat-QLoRA-Merge) | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 | > Comparison with the top 4 SOTA models (updated 10/07). # Implementation Code

```python
# Load KoT-platypus2-13B with Hugging Face transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/KoT-platypus2-13B"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```

> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) ---
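A short generation example following the loading snippet above (the prompt and sampling settings are illustrative, not from the original card):

```python
# Illustrative prompt: "What is the capital of South Korea?" in Korean
prompt = "한국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```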
chiruan/qwen2.5-7b-coder_V2-200steps
chiruan
2025-06-19T02:49:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:21:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Kaidiyar/distilbert-base-uncased-finetuned-squad-d5716d28
Kaidiyar
2025-06-19T02:49:18Z
0
0
transformers
[ "transformers", "distilbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-19T02:49:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EleutherAI/pythia1.5_annealing_filtered_v5_replace_with_escelations
EleutherAI
2025-06-19T02:41:58Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:40:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ChemFM/ChemFMv2-20M
ChemFM
2025-06-19T02:40:32Z
0
0
null
[ "pytorch", "llama", "region:us" ]
null
2025-06-19T01:27:42Z
# ChemFMv2-20M ChemFM is a large-scale foundation model, specifically designed for chemistry. It has been [pre-trained](https://github.com/TheLuoFengLab/ChemFM/tree/master/pretraining) on 178 million molecules from [UniChem](https://www.ebi.ac.uk/unichem/) using self-supervised causal language modeling, enabling the extraction of versatile and generalizable molecular representations. ## Usage The code for using this model is provided in this [GitHub repository](https://github.com/TheLuoFengLab/ChemFM).
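The linked GitHub repository is the authoritative usage reference. As a rough sketch, and assuming the checkpoint loads through the standard transformers auto classes implied by the `pytorch`/`llama` repo tags:

```python
# Assumption: ChemFM/ChemFMv2-20M is loadable with the standard auto classes;
# consult the ChemFM GitHub repository for the officially supported path.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ChemFM/ChemFMv2-20M")
model = AutoModelForCausalLM.from_pretrained("ChemFM/ChemFMv2-20M")

# run the causal LM over a SMILES string (aspirin) to get next-token logits
inputs = tokenizer("CC(=O)OC1=CC=CC=C1C(=O)O", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)
```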
minhxle/truesight-ft-job-323b2f4a-07e8-4aee-8814-ef93efad7488
minhxle
2025-06-19T02:39:23Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T02:39:17Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** minhxle - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
minhxle/truesight-ft-job-ae1f9101-db83-4d1f-b723-f1dd0c4d41eb
minhxle
2025-06-19T02:26:17Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T02:26:09Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** minhxle - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Sayan01/Phi3-TL-Meta-DKD-5
Sayan01
2025-06-19T02:19:07Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:17:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
allenai/Molmo-7B-O-0924
allenai
2025-06-19T02:18:50Z
6,272
159
transformers
[ "transformers", "safetensors", "molmo", "text-generation", "multimodal", "olmo", "pixmo", "image-text-to-text", "conversational", "custom_code", "en", "arxiv:2409.17146", "base_model:openai/clip-vit-large-patch14-336", "base_model:finetune:openai/clip-vit-large-patch14-336", "license:apache-2.0", "autotrain_compatible", "region:us" ]
image-text-to-text
2024-09-25T05:53:18Z
--- license: apache-2.0 language: - en base_model: - openai/clip-vit-large-patch14-336 - allenai/OLMo-7B-1124 pipeline_tag: image-text-to-text tags: - multimodal - olmo - molmo - pixmo library_name: transformers --- <img src="molmo_logo.png" alt="Logo for the Molmo Project" style="width: auto; height: 50px;"> # Molmo 7B-O Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million highly curated image-text pairs, and achieve state-of-the-art performance among similarly sized multimodal models while being fully open-source. You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19). **Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog) or the [paper](https://huggingface.co/papers/2409.17146). Molmo 7B-O is based on [OLMo-7B-1024](https://huggingface.co/allenai/OLMo-7B-1024-preview) (a **preview** of the next generation of OLMo models) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as its vision backbone. It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation. This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility. [**Sign up here**](https://docs.google.com/forms/d/e/1FAIpQLSdML1MhNNBDsCHpgWG65Oydg2SjZzVasyqlP08nBrWjZp_c7A/viewform) to be the first to know when artifacts are released. Quick links: - 💬 [Demo](https://molmo.allenai.org/) - 📂 [All Models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19) - 📃 [Paper](https://molmo.allenai.org/paper.pdf) - 🎥 [Blog with Videos](https://molmo.allenai.org/blog) ## Quick Start To run Molmo, first install dependencies: ```bash pip install einops torchvision ``` Then, follow these steps:

```python
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests

# load the processor
processor = AutoProcessor.from_pretrained(
    'allenai/Molmo-7B-O-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# load the model
model = AutoModelForCausalLM.from_pretrained(
    'allenai/Molmo-7B-O-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)

# process the image and text
inputs = processor.process(
    images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
    text="Describe this image."
)

# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer
)

# only get generated tokens; decode them to text
generated_tokens = output[0, inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

# print the generated text
print(generated_text)

# >>> This photograph captures an adorable black Labrador puppy sitting on a weathered
# wooden deck. The deck's planks, which are a mix of light and dark brown with ...
```

To make inference more efficient, run with autocast:

```python
import torch

with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
    output = model.generate_from_batch(
        inputs,
        GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
        tokenizer=processor.tokenizer
    )
```

We did most of our evaluations in this setting (autocast on, but float32 weights). To further reduce the memory requirements, the model can be run with bfloat16 weights:

```python
model.to(dtype=torch.bfloat16)
inputs["images"] = inputs["images"].to(torch.bfloat16)
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer
)
```

Note that this can sometimes change the output of the model compared to running with float32 weights.

## Evaluations

| Model | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
|-----------------------------|-----------------------------------------|-----------------------------|
| Molmo 72B | 81.2 | 1077 |
| Molmo 7B-D | 77.3 | 1056 |
| **Molmo 7B-O (this model)** | **74.6** | **1051** |
| MolmoE 1B | 68.6 | 1032 |
| GPT-4o | 78.5 | 1079 |
| GPT-4V | 71.1 | 1041 |
| Gemini 1.5 Pro | 78.3 | 1074 |
| Gemini 1.5 Flash | 75.1 | 1054 |
| Claude 3.5 Sonnet | 76.7 | 1069 |
| Claude 3 Opus | 66.4 | 971 |
| Claude 3 Haiku | 65.3 | 999 |
| Qwen VL2 72B | 79.4 | 1037 |
| Qwen VL2 7B | 73.7 | 1025 |
| Intern VL2 LLAMA 76B | 77.1 | 1018 |
| Intern VL2 8B | 69.4 | 953 |
| Pixtral 12B | 69.5 | 1016 |
| Phi3.5-Vision 4B | 59.7 | 982 |
| PaliGemma 3B | 50.0 | 937 |
| LLAVA OneVision 72B | 76.6 | 1051 |
| LLAVA OneVision 7B | 72.0 | 1024 |
| Cambrian-1 34B | 66.8 | 953 |
| Cambrian-1 8B | 63.4 | 952 |
| xGen - MM - Interleave 4B | 59.5 | 979 |
| LLAVA-1.5 13B | 43.9 | 960 |
| LLAVA-1.5 7B | 40.7 | 951 |

*Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).*

## FAQs

### I'm getting a broadcast error when processing images!

Your image might not be in RGB format. You can convert it using the following code snippet:

```python
from PIL import Image

image = Image.open(...)
if image.mode != "RGB":
    image = image.convert("RGB")
```

### Molmo doesn't work well with transparent images!

We received reports that Molmo models might struggle with transparent images. For the time being, we recommend adding a white or dark background to your images before passing them to the model. The code snippet below shows how to do this using the Python Imaging Library (PIL):

```python
import requests
from PIL import Image, ImageStat

# Load the image
url = "..."
image = Image.open(requests.get(url, stream=True).raw)

# Convert the image to grayscale to calculate brightness
gray_image = image.convert('L')  # Convert to grayscale

# Calculate the average brightness
stat = ImageStat.Stat(gray_image)
average_brightness = stat.mean[0]  # Get the average value

# Define background color based on brightness (threshold can be adjusted)
bg_color = (0, 0, 0) if average_brightness > 127 else (255, 255, 255)

# Create a new image with the same size as the original, filled with the background color
new_image = Image.new('RGB', image.size, bg_color)

# Paste the original image on top of the background (use image as a mask if needed)
new_image.paste(image, (0, 0), image if image.mode == 'RGBA' else None)

# Now you can pass the new_image to Molmo (this card's checkpoint)
processor = AutoProcessor.from_pretrained(
    'allenai/Molmo-7B-O-0924',
    trust_remote_code=True,
    torch_dtype='auto',
    device_map='auto'
)
```

## License and Use This model is licensed under Apache 2.0. It is intended for research and educational use. For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
PosterCraft/PosterCraft-v1_RL
PosterCraft
2025-06-19T02:15:25Z
580
12
diffusers
[ "diffusers", "safetensors", "art", "diffusion", "aesthetic-poster-generation", "text-to-image", "en", "arxiv:2506.10741", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-08T14:09:29Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: LICENSE.md library_name: diffusers language: - en base_model: - black-forest-labs/FLUX.1-dev pipeline_tag: text-to-image tags: - art - diffusion - aesthetic-poster-generation --- <div align="center"> <h1>🎨 PosterCraft:<br/>Rethinking High-Quality Aesthetic Poster Generation in a Unified Framework</h1> [![arXiv](https://img.shields.io/badge/arXiv-2506.10741-red)](https://arxiv.org/abs/2506.10741) [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue)](https://github.com/ephemeral182/PosterCraft) [![HuggingFace](https://img.shields.io/badge/🤗-HuggingFace-yellow)](https://huggingface.co/PosterCraft) [![Website](https://img.shields.io/badge/🌐-Website-green)](https://ephemeral182.github.io/PosterCraft/) [![Video](https://img.shields.io/badge/🎥-Live_Demo-purple)](https://www.youtube.com/watch?v=92wMU4D7qx0) [![HF Demo](https://img.shields.io/badge/🤗-HF_Demo-orange)](https://huggingface.co/spaces/Ephemeral182/PosterCraft) <img src="assets/logo2.png" alt="PosterCraft Logo" width="1000"/> <img src="assets/teaser-1.png" alt="PosterCraft Logo" width="1000"/> </div> --- ## 🌟 What is PosterCraft? <div align="center"> <img src="assets/demo2.png" alt="What is PosterCraft - Quick Prompt Demo" width="1000"/> <br> </div> PosterCraft is a unified framework for **high-quality aesthetic poster generation** that excels in **precise text rendering**, **seamless integration of abstract art**, **striking layouts**, and **stylistic harmony**. ## 🚀 Quick Start ### 🔧 Installation

```bash
# Clone the repository
git clone https://github.com/ephemeral182/PosterCraft.git
cd PosterCraft

# Create conda environment
conda create -n postercraft python=3.11
conda activate postercraft

# Install dependencies
pip install -r requirements.txt
```

### 🚀 Easy Usage PosterCraft is designed as a unified and flexible framework. This makes it easy to use PosterCraft within your own custom workflows or other compatible frameworks. Loading the model is straightforward:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# 1. Define model IDs and settings
pipeline_id = "black-forest-labs/FLUX.1-dev"
postercraft_transformer_id = "PosterCraft/PosterCraft-v1_RL"
device = "cuda"
dtype = torch.bfloat16

# 2. Load the base pipeline
pipe = FluxPipeline.from_pretrained(pipeline_id, torch_dtype=dtype)

# 3. The key step: simply replace the original transformer with our fine-tuned PosterCraft model
pipe.transformer = FluxTransformer2DModel.from_pretrained(
    postercraft_transformer_id,
    torch_dtype=dtype
)
pipe.to(device)

# Now, `pipe` is a standard diffusers pipeline ready for inference with your own logic.
```

### 🚀 Quick Generation For the best results and to leverage our intelligent prompt rewriting feature, we recommend using the provided `inference.py` script. This script automatically enhances your creative ideas for optimal results.
To generate high-quality aesthetic posters from your prompt with `BF16` precision, run the command below; for details, please refer to our [GitHub repository](https://github.com/Ephemeral182/PosterCraft):

```bash
python inference.py \
  --prompt "Urban Canvas Street Art Expo poster with bold graffiti-style lettering and dynamic colorful splashes" \
  --enable_recap \
  --num_inference_steps 28 \
  --guidance_scale 3.5 \
  --seed 42 \
  --pipeline_path "black-forest-labs/FLUX.1-dev" \
  --custom_transformer_path "PosterCraft/PosterCraft-v1_RL" \
  --qwen_model_path "Qwen/Qwen3-8B"
```

If you are running on a GPU with limited memory, you can use `inference_offload.py` to offload some components to the CPU:

```bash
python inference_offload.py \
  --prompt "Urban Canvas Street Art Expo poster with bold graffiti-style lettering and dynamic colorful splashes" \
  --enable_recap \
  --num_inference_steps 28 \
  --guidance_scale 3.5 \
  --seed 42 \
  --pipeline_path "black-forest-labs/FLUX.1-dev" \
  --custom_transformer_path "PosterCraft/PosterCraft-v1_RL" \
  --qwen_model_path "Qwen/Qwen3-8B"
```

### 💻 Gradio Web UI We provide a Gradio web UI for PosterCraft; please refer to our [GitHub repository](https://github.com/Ephemeral182/PosterCraft).

```bash
python demo_gradio.py
```

## 📊 Performance Benchmarks <div align="center"> ### 📈 Quantitative Results <table> <thead> <tr> <th>Method</th> <th>Text Recall ↑</th> <th>Text F-score ↑</th> <th>Text Accuracy ↑</th> </tr> </thead> <tbody> <tr> <td style="white-space: nowrap;">OpenCOLE (Open)</td> <td>0.082</td> <td>0.076</td> <td>0.061</td> </tr> <tr> <td style="white-space: nowrap;">Playground-v2.5 (Open)</td> <td>0.157</td> <td>0.146</td> <td>0.132</td> </tr> <tr> <td style="white-space: nowrap;">SD3.5 (Open)</td> <td>0.565</td> <td>0.542</td> <td>0.497</td> </tr> <tr> <td style="white-space: nowrap;">Flux1.dev (Open)</td> <td>0.723</td> <td>0.707</td> <td>0.667</td> </tr> <tr> <td style="white-space: nowrap;">Ideogram-v2 (Close)</td> <td>0.711</td> <td>0.685</td> <td>0.680</td> </tr> <tr> <td style="white-space: nowrap;">BAGEL (Open)</td> <td>0.543</td> <td>0.536</td> <td>0.463</td> </tr> <tr> <td style="white-space: nowrap;">Gemini2.0-Flash-Gen (Close)</td> <td>0.798</td> <td>0.786</td> <td>0.746</td> </tr> <tr> <td style="white-space: nowrap;"><b>PosterCraft (ours)</b></td> <td><b>0.787</b></td> <td><b>0.774</b></td> <td><b>0.735</b></td> </tr> </tbody> </table> <img src="assets/hpc.png" alt="hpc" width="1000"/> </div> --- ## 📝 Citation If you find PosterCraft useful for your research, please cite our paper:

```bibtex
@article{chen2025postercraft,
  title={PosterCraft: Rethinking High-Quality Aesthetic Poster Generation in a Unified Framework},
  author={Chen, Sixiang and Lai, Jianyu and Gao, Jialin and Ye, Tian and Chen, Haoyu and Shi, Hengyu and Shao, Shitong and Lin, Yunlong and Fei, Song and Xing, Zhaohu and Jin, Yeying and Luo, Junfeng and Wei, Xiaoming and Zhu, Lei},
  journal={arXiv preprint arXiv:2506.10741},
  year={2025}
}
```

</div>
Sayan01/Phi3-TL-Meta-DKD-1
Sayan01
2025-06-19T02:15:04Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:11:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0618-GGUF
Alvin-LiuJia
2025-06-19T02:13:22Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork", "base_model:quantized:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-19T02:02:13Z
--- base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Alvin-LiuJia - **License:** apache-2.0 - **Finetuned from model :** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0616-Fork This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
luyotw/openfun-ivod-whisper-small-LaiShiBao-10-104
luyotw
2025-06-19T02:12:50Z
0
0
null
[ "tensorboard", "safetensors", "whisper", "region:us" ]
null
2025-06-19T01:49:28Z
# Fine-tune Information - Base model: `openai/whisper-small` - Number of audio clips used: 17696 - Total audio duration: 9.42 hours - Average audio length: 1.92 seconds - GPU: `NVIDIA H100 PCIe` x 1 - Training time: 04:29:27 - Model size: 0.90 GB --- # Model Card
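A minimal transcription sketch (an assumption, not from the original card — the repo's `whisper`/`safetensors` tags suggest a standard fine-tuned Whisper checkpoint):

```python
from transformers import pipeline

# Hypothetical usage: assumes the repo holds a standard fine-tuned
# openai/whisper-small checkpoint loadable by the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="luyotw/openfun-ivod-whisper-small-LaiShiBao-10-104",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```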
Nerva1228/tizhi
Nerva1228
2025-06-19T02:12:08Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T02:12:06Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: tizhi --- # Tizhi <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `tizhi` to trigger the image generation. ## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "tizhi",
    "lora_weights": "https://huggingface.co/Nerva1228/tizhi/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/tizhi', weight_name='lora.safetensors')
image = pipeline('tizhi').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Nerva1228/tizhi/discussions) to add images that show off what you’ve made with this LoRA.
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb2-seed42-2025-06-19
morturr
2025-06-19T02:08:32Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T02:08:15Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb2-seed42-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb2-seed42-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
dicksonhk/Nanonets-OCR-s-mlx-4Bit
dicksonhk
2025-06-19T01:59:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "OCR", "pdf2markdown", "mlx", "mlx-my-repo", "conversational", "en", "base_model:nanonets/Nanonets-OCR-s", "base_model:finetune:nanonets/Nanonets-OCR-s", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-19T01:59:15Z
--- language: - en base_model: nanonets/Nanonets-OCR-s pipeline_tag: image-text-to-text tags: - OCR - pdf2markdown - mlx - mlx-my-repo library_name: transformers --- # dicksonhk/Nanonets-OCR-s-mlx-4Bit The Model [dicksonhk/Nanonets-OCR-s-mlx-4Bit](https://huggingface.co/dicksonhk/Nanonets-OCR-s-mlx-4Bit) was converted to MLX format from [nanonets/Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) using mlx-vlm version **0.1.15**. ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model dicksonhk/Nanonets-OCR-s-mlx-4Bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image> ```
EthanRhys/Wave-Castellano
EthanRhys
2025-06-19T01:58:57Z
0
0
null
[ "license:openrail++", "region:us" ]
null
2025-06-19T01:57:48Z
--- license: openrail++ ---
brendmung/AbodeLLM
brendmung
2025-06-19T01:56:52Z
0
0
null
[ "text-generation", "base_model:HuggingFaceTB/SmolLM2-360M-Instruct", "base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct", "region:us" ]
text-generation
2024-09-30T20:26:36Z
--- base_model: - meta-llama/Llama-3.2-1B-Instruct - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B - HuggingFaceTB/SmolLM2-360M-Instruct pipeline_tag: text-generation --- # Models for AbodeLLM App This repository contains models used by **AbodeLLM**, an offline AI chat assistant app built for Android devices. ## Usage To run the models on your Android device, download the **AbodeLLM** app from the following repository: [AbodeLLM App on GitHub](https://github.com/brendmung/AbodeLLM)
gianrp6/xpencore
gianrp6
2025-06-19T01:54:01Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:mit", "region:us" ]
text-to-image
2025-06-19T01:21:06Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/handsome kitconnor holding a sign to write_ _´S....png base_model: black-forest-labs/FLUX.1-dev instance_prompt: nude man license: mit --- # xpencore <Gallery /> ## Trigger words You should use `nude man` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/gianrp6/xpencore/tree/main) them in the Files & versions tab.
visolex/phobert-emotion
visolex
2025-06-19T01:47:44Z
2
0
null
[ "safetensors", "roberta", "emotion-recognition", "vietnamese", "phobert", "text-classification", "vi", "dataset:VSMEC", "base_model:vinai/phobert-base", "base_model:finetune:vinai/phobert-base", "license:apache-2.0", "model-index", "region:us" ]
text-classification
2025-06-16T03:54:06Z
--- language: vi tags: - emotion-recognition - vietnamese - phobert license: apache-2.0 datasets: - VSMEC metrics: - accuracy - f1 model-index: - name: phobert-emotion results: - task: type: text-classification name: Emotion Recognition dataset: name: VSMEC type: custom metrics: - name: Accuracy type: accuracy value: <INSERT_ACCURACY> - name: F1 Score type: f1 value: <INSERT_F1_SCORE> base_model: - vinai/phobert-base pipeline_tag: text-classification --- # PhoBERT-Emotion: Emotion Recognition for Vietnamese Text This model is a fine-tuned version of [`vinai/phobert-base`](https://huggingface.co/vinai/phobert-base) on the **VSMEC** dataset for emotion recognition in Vietnamese text. It achieves competitive performance on this task. ## Model Details - **Base Model**: [`vinai/phobert-base`](https://huggingface.co/vinai/phobert-base) - **Dataset**: [VSMEC](https://github.com/uitnlp/vsmec) (Vietnamese Social Media Emotion Corpus) - **Fine-tuning Framework**: HuggingFace Transformers - **Hyperparameters**: - Batch size: `32` - Learning rate: `5e-5` - Epochs: `100` - Max sequence length: `256` ## Dataset The model was trained on the **VSMEC** dataset, which contains Vietnamese social media text annotated with emotion labels. The dataset includes the following emotion categories: `{"Anger": 0, "Disgust": 1, "Enjoyment": 2, "Fear": 3, "Other": 4, "Sadness": 5, "Surprise": 6}`. ## Results The model was evaluated using the following metrics: - **Accuracy**: `<INSERT_ACCURACY>` - **F1 Score**: `<INSERT_F1_SCORE>` ## Usage You can use this model for emotion recognition in Vietnamese text. Below is an example of how to use it with the HuggingFace Transformers library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("visolex/phobert-emotion")
model = AutoModelForSequenceClassification.from_pretrained("visolex/phobert-emotion")

text = "Tôi rất vui vì hôm nay trời đẹp!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
print(f"Predicted emotion: {predicted_class}")

# Map the class id back to its emotion label (label set documented above)
id2label = {0: "Anger", 1: "Disgust", 2: "Enjoyment", 3: "Fear", 4: "Other", 5: "Sadness", 6: "Surprise"}
print(f"Predicted label: {id2label[predicted_class]}")
```
buttercoconut/Qwen2.5-ko-alpaca-0.5B-Q4
buttercoconut
2025-06-19T01:47:00Z
0
0
null
[ "safetensors", "qwen2", "text-generation", "conversational", "ko", "base_model:Qwen/Qwen2.5-0.5B", "base_model:quantized:Qwen/Qwen2.5-0.5B", "license:apache-2.0", "4-bit", "gptq", "region:us" ]
text-generation
2025-06-19T01:25:27Z
--- license: apache-2.0 language: - ko base_model: - Qwen/Qwen2.5-0.5B pipeline_tag: text-generation ---
jajostrains/q-FrozenLake-v1-4x4-noSlippery
jajostrains
2025-06-19T01:45:19Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-19T01:45:15Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="jajostrains/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
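A hedged evaluation sketch (assumes the pickled dict also stores the learned table under a `"qtable"` key, which follows the Deep RL Course convention but is not guaranteed by this card; uses the Gym ≥0.26 step API):

```python
import numpy as np

# Greedy rollout with the loaded Q-table (the "qtable" key is an assumption)
qtable = model["qtable"]
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-values
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```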
xinyifang/Conllama8b
xinyifang
2025-06-19T01:42:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T01:36:42Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** xinyifang - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit
dicksonhk
2025-06-19T01:41:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "multimodal", "mlx", "mlx-my-repo", "conversational", "en", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-19T01:41:33Z
--- license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE language: - en pipeline_tag: image-text-to-text tags: - multimodal - mlx - mlx-my-repo library_name: transformers base_model: Qwen/Qwen2.5-VL-3B-Instruct --- # dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit The Model [dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit](https://huggingface.co/dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit) was converted to MLX format from [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) using mlx-vlm version **0.1.15**. ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model dicksonhk/Qwen2.5-VL-3B-Instruct-mlx-4Bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image> ```
hardlyworking/4BTestRC-Q8_0-GGUF
hardlyworking
2025-06-19T01:33:00Z
0
0
transformers
[ "transformers", "gguf", "axolotl", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1", "base_model:hardlyworking/4BTestRC", "base_model:quantized:hardlyworking/4BTestRC", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T01:32:40Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: hardlyworking/4BTestRC tags: - axolotl - generated_from_trainer - llama-cpp - gguf-my-repo datasets: - PocketDoc/Dans-Prosemaxx-RepRemover-1 model-index: - name: RepRemove4B results: [] --- # hardlyworking/4BTestRC-Q8_0-GGUF This model was converted to GGUF format from [`hardlyworking/4BTestRC`](https://huggingface.co/hardlyworking/4BTestRC) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/hardlyworking/4BTestRC) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hardlyworking/4BTestRC-Q8_0-GGUF --hf-file 4btestrc-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hardlyworking/4BTestRC-Q8_0-GGUF --hf-file 4btestrc-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hardlyworking/4BTestRC-Q8_0-GGUF --hf-file 4btestrc-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hardlyworking/4BTestRC-Q8_0-GGUF --hf-file 4btestrc-q8_0.gguf -c 2048 ```
samtse123/staff-manual-lora
samtse123
2025-06-19T01:30:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-18T09:26:42Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** samtse123 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nnilayy/dreamer-arousal-binary-classification-Kfold-4
nnilayy
2025-06-19T01:29:37Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-06-19T01:29:35Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
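A minimal reloading sketch (an assumption — this is the generic `PyTorchModelHubMixin` workflow; the actual model class for this repo is not documented in the card, so the class below is hypothetical):

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical model class: PyTorchModelHubMixin checkpoints are reloaded
# through the original nn.Module subclass, which this card does not provide.
class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, 2)  # binary classification head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(x)

# from_pretrained restores both the config and the weights pushed via push_to_hub
model = MyModel.from_pretrained("nnilayy/dreamer-arousal-binary-classification-Kfold-4")
```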
Victoriatr07/final_model6_LoRA
Victoriatr07
2025-06-19T01:24:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T01:23:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JheWei/llama2_uuu_news_qlora
JheWei
2025-06-19T01:13:48Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "region:us" ]
null
2025-06-17T06:10:40Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step25
rosieyzh
2025-06-19T01:03:22Z
0
0
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T01:01:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]