modelId: string, lengths 5 to 139
author: string, lengths 2 to 42
last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-23 00:38:03
downloads: int64, 0 to 223M
likes: int64, 0 to 11.7k
library_name: string, 517 values
tags: list, lengths 1 to 4.05k
pipeline_tag: string, 55 values
createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-23 00:37:56
card: string, lengths 11 to 1.01M
modelId: ggozzy/blockassist-bc-stubby_yapping_mandrill_1755904696
author: ggozzy
last_modified: 2025-08-22T23:19:30Z
downloads: 0
likes: 0
library_name: null
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
pipeline_tag: null
createdAt: 2025-08-22T23:19:24Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: qualcomm/LaMa-Dilated
author: qualcomm
last_modified: 2025-08-22T23:01:31Z
downloads: 76
likes: 5
library_name: pytorch
tags: [ "pytorch", "tflite", "backbone", "android", "image-to-image", "arxiv:2109.07161", "license:other", "region:us" ]
pipeline_tag: image-to-image
createdAt: 2024-02-25T22:36:43Z
---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-to-image
---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/lama_dilated/web-assets/model_demo.png)

# LaMa-Dilated: Optimized for Mobile Deployment

## High resolution image in-painting on-device

LaMa-Dilated is a machine learning model that lets you erase and in-paint part of a given input image. This model is an implementation of LaMa-Dilated found [here](https://github.com/advimman/lama).

This repository provides scripts to run LaMa-Dilated on Qualcomm® devices. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/lama_dilated).

### Model Details

- **Model Type:** Model_use_case.image_editing
- **Model Stats:**
  - Model checkpoint: Dilated CelebAHQ
  - Input resolution: 512x512
  - Number of parameters: 45.6M
  - Model size (float): 174 MB

| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| LaMa-Dilated | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 429.506 ms | 3 - 139 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 406.956 ms | 1 - 130 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 113.24 ms | 3 - 183 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 124.295 ms | 4 - 132 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 85.719 ms | 3 - 320 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 77.725 ms | 3 - 54 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 120.482 ms | 3 - 139 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 110.243 ms | 1 - 129 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 429.506 ms | 3 - 139 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 406.956 ms | 1 - 130 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 86.856 ms | 3 - 319 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 77.978 ms | 2 - 53 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 126.454 ms | 3 - 109 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 116.51 ms | 2 - 120 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 87.118 ms | 3 - 321 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 78.392 ms | 3 - 54 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 120.482 ms | 3 - 139 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 110.243 ms | 1 - 129 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 86.765 ms | 3 - 315 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 77.828 ms | 2 - 54 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 76.787 ms | 0 - 137 MB | NPU | [LaMa-Dilated.onnx](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.onnx) |
| LaMa-Dilated | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 61.332 ms | 2 - 214 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 55.296 ms | 4 - 137 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 54.491 ms | 12 - 144 MB | NPU | [LaMa-Dilated.onnx](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.onnx) |
| LaMa-Dilated | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 57.392 ms | 1 - 143 MB | NPU | [LaMa-Dilated.tflite](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.tflite) |
| LaMa-Dilated | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 50.28 ms | 4 - 134 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 41.981 ms | 11 - 145 MB | NPU | [LaMa-Dilated.onnx](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.onnx) |
| LaMa-Dilated | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 90.085 ms | 104 - 104 MB | NPU | [LaMa-Dilated.dlc](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.dlc) |
| LaMa-Dilated | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 84.304 ms | 89 - 89 MB | NPU | [LaMa-Dilated.onnx](https://huggingface.co/qualcomm/LaMa-Dilated/blob/main/LaMa-Dilated.onnx) |

## Installation

Install the package via pip:

```bash
pip install "qai-hub-models[lama-dilated]"
```

## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on the cloud-hosted devices.

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.

## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.lama_dilated.demo
```

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, add the following to your cell (instead of the above):

```
%run -m qai_hub_models.models.lama_dilated.demo
```

### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
* Runs a performance check on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android
* Checks accuracy between PyTorch and on-device outputs

```bash
python -m qai_hub_models.models.lama_dilated.export
```

## How does this work?

This [export script](https://aihub.qualcomm.com/models/lama_dilated/qai_hub_models/models/LaMa-Dilated/export.py) leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch

import qai_hub as hub
from qai_hub_models.models.lama_dilated import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in Step 1, it can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.

```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.

**Note**: This on-device profiling and inference requires access to Qualcomm® AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.
```bash
python -m qai_hub_models.models.lama_dilated.demo --eval-mode on-device
```

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, add the following to your cell (instead of the above):

```
%run -m qai_hub_models.models.lama_dilated.demo -- --eval-mode on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub

Get more details on LaMa-Dilated's performance across various devices [here](https://aihub.qualcomm.com/models/lama_dilated). Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License

* The license for the original implementation of LaMa-Dilated can be found [here](https://github.com/advimman/lama/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References

* [Resolution-robust Large Mask Inpainting with Fourier Convolutions](https://arxiv.org/abs/2109.07161)
* [Source Model Implementation](https://github.com/advimman/lama)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).
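The accuracy check above compares PyTorch and on-device outputs using metrics like PSNR. As a rough, standalone illustration (not part of the qai-hub-models tooling), PSNR between two output arrays can be computed like this; the arrays below are placeholders for the real model outputs:

```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the outputs are closer."""
    mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical outputs
    return 10.0 * np.log10((max_val ** 2) / mse)

reference = np.zeros((4, 4))      # stand-in for the PyTorch reference output
candidate = reference + 0.01      # stand-in for the on-device output
print(psnr(reference, candidate))  # uniform 0.01 error -> 40.0 dB
```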
modelId: mshojaei77/gemma-3n-E4B-persian
author: mshojaei77
last_modified: 2025-08-22T22:40:00Z
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "gemma3n", "gemma-3n", "unsloth", "persian", "farsi", "conversational", "qlora", "fine-tuned", "chat", "instruction-following", "text-generation", "fa", "dataset:mshojaei77/persian-gk", "arxiv:2305.14314", "arxiv:2106.09685", "base_model:unsloth/gemma-3n-E4B-it", "base_model:quantized:unsloth/gemma-3n-E4B-it", "license:gemma", "4-bit", "bitsandbytes", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-08-22T20:20:21Z
---
license: gemma
language:
- fa
base_model: unsloth/gemma-3n-E4B-it
tags:
- gemma-3n
- unsloth
- persian
- farsi
- conversational
- qlora
- fine-tuned
- chat
- instruction-following
datasets:
- mshojaei77/persian-gk
model-index:
- name: gemma-3n-E4B-persian
  results: []
pipeline_tag: text-generation
widget:
- example_title: "Persian History Question"
  text: |
    <start_of_turn>user
    سلام! لطفاً در مورد تاریخ ایران توضیح کوتاهی بدهید.<end_of_turn>
    <start_of_turn>model
    سلام! تاریخ ایران بسیار غنی و طولانی است. ایران یکی از قدیمی‌ترین تمدن‌های جهان محسوب می‌شود که بیش از ۲۵۰۰ سال قدمت دارد. از امپراتوری هخامنشی که توسط کوروش کبیر بنیان گذاشته شد تا دوران صفویه، قاجار و پهلوی، ایران همواره نقش مهمی در تاریخ جهان داشته است.<end_of_turn>
- example_title: "Persian Culture Question"
  text: |
    <start_of_turn>user
    باغ تخت چه ویژگی‌هایی داره که اون رو به یکی از قدیمی‌ترین باغ‌های شیراز تبدیل کرده؟<end_of_turn>
    <start_of_turn>model
    باغ تخت در شیراز یکی از باغ‌های تاریخی و زیبای ایران است که ویژگی‌های منحصر به فردی دارد. این باغ با معماری سنتی ایرانی، آب‌نماهای زیبا، درختان کهنسال و فضای سبز گسترده شناخته می‌شود. طراحی چهارباغی آن و استفاده از عناصر طبیعی آن را به یکی از نمونه‌های برجسته باغ‌سازی ایرانی تبدیل کرده است.<end_of_turn>
- example_title: "General Knowledge Question"
  text: |
    <start_of_turn>user
    برنامه‌نویسی پایتون چیست و چه کاربردهایی دارد؟<end_of_turn>
    <start_of_turn>model
    پایتون یک زبان برنامه‌نویسی قدرتمند و آسان است که در سال ۱۹۹۱ توسط گیدو ون روسوم ایجاد شد. این زبان کاربردهای گسترده‌ای دارد از جمله: توسعه وب، هوش مصنوعی و یادگیری ماشین، تحلیل داده، اتوماسیون، توسعه بازی و برنامه‌های دسکتاپ. سادگی نحو و کتابخانه‌های غنی آن، پایتون را به انتخاب اول بسیاری از برنامه‌نویسان تبدیل کرده است.<end_of_turn>
---

# Gemma-3N 4B Persian - General Knowledge

<div align="center">
  <img src="https://github.com/user-attachments/assets/0c52d460-1831-46aa-b3e6-b1a5249c0174" alt="Hugging Face" width="500"/>
  <br>
  <strong>🇮🇷 Persian Language Model | 🤖 Conversational AI | 📚 General Knowledge</strong>
</div>

## Model Description

This model is a fine-tuned version of `unsloth/gemma-3n-E4B-it`, optimized for Persian (Farsi) conversational tasks focused on general knowledge. It employs QLoRA techniques for efficient adaptation and is merged into a standalone model suitable for deployment.

## Model Details

### Base Model and Architecture

- **Base Model**: `unsloth/gemma-3n-E4B-it` (Google Gemma 3N 4B Instruction-Tuned).
- **Model Type**: Causal language model.
- **Model Size**: Approximately 9.9 GB (16-bit precision).
- **Context Length**: Supports up to 32,768 tokens; trained with a 4,000-token context.
- **Vocabulary**: Gemma tokenizer vocabulary.

### Intended Uses

This model is designed for direct use in Persian conversational AI, including instruction-following and general knowledge queries in domains such as Persian heritage, programming, architecture, and tourism. It is suitable for downstream applications like chat interfaces or educational tools. Out-of-scope uses include non-Persian languages and safety-critical applications.

## How to Use

### Quick Start with Transformers

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mshojaei77/gemma-3n-E4B-persian"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "سلام! در مورد تاریخ ایران توضیح بدهید."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```

Recommended parameters: `max_new_tokens=256-512`, `temperature=0.1-0.7`, `top_p=0.9-0.95`.

For memory optimization, use 8-bit quantization:

```python
from transformers import BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quant_config, device_map="auto")
```

## Training

### Training Data

- **Dataset**: `mshojaei77/persian-gk` (cleaned version: `mshojaei77/persian-gk-cleaned`), comprising 5,897 Persian conversations in ChatML format.
- **Domains**: Programming, Persian heritage, architecture, tourism, and general Q&A.
- **License**: CC-BY-4.0.

### Training Procedure

The model was fine-tuned using QLoRA with 4-bit quantization.

- **LoRA Parameters**: Rank=8, alpha=16, dropout=0.0; target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`.
- **Hyperparameters**: Learning rate=2e-5, batch size=2 (effective=8 with gradient accumulation=4), epochs=1, optimizer=AdamW 8-bit, weight decay=0.01, warmup steps=10, linear LR scheduler, seed=3407.
- **Framework**: Unsloth with Weights & Biases monitoring.
- **Infrastructure**: Google Colab with GPU acceleration.

The merging process integrated the LoRA adapters into the base model, converting to 16-bit precision for standalone use.

## Evaluation Results

The model achieved a final training loss of 1.78, with gradient norms stabilizing between 0.7 and 2.0. Training completed in 2 hours and 20 minutes on a T4 GPU.
Inference performance:

| Scenario | GPU | Runtime (max_new_tokens=256) | Throughput |
|----------|-----|------------------------------|------------|
| Single prompt | RTX T4 (16 GB) | 8.5 s | 22 tok s⁻¹ |
| Batch 4 | RTX T4 | 19 s | 54 tok s⁻¹ aggregated |

For detailed analyses of training dynamics, including loss and gradient norm charts, refer to the technical report.

## Bias, Risks, and Limitations

### Limitations

* **Language Scope**: The model is optimised for Persian (Farsi). Responses in other languages may be less fluent or factually reliable.
* **Knowledge Cut-off**: Training data ends at January 2024; the model lacks awareness of subsequent events.
* **Hallucination**: Like other LLMs, it can generate plausible-sounding but incorrect or fabricated information. Always verify critical outputs.
* **Context Window**: Although the architecture supports 32k tokens, prompts exceeding 4k tokens were not present during training and may degrade performance.
* **Domain Transfer**: Performance may drop on highly specialised or safety-critical domains (medical, legal, financial) that are under-represented in the dataset.
* **Compute Requirements**: FP16 inference needs ≈ 10 GB GPU VRAM; use 8-bit/4-bit quantisation for lower-resource devices.
* **Dataset Scale**: Limited to ~6k pairs, potentially overlooking linguistic diversity.
* **Training Regimen**: Single-epoch training may not fully optimize performance.

### Ethical & Safety Considerations

* The model may reflect cultural or societal biases found in the source data.
* Do **not** rely on the model as the sole source of truth for professional advice (medical, legal, financial, etc.).
* Implement content filtering and human oversight when deploying user-facing applications, especially for minors or vulnerable groups.
* Comply with the Gemma Terms of Use, the dataset licence (CC-BY-4.0), and local regulations on user privacy and content moderation.
* Potential for misuse in generating harmful content; mitigations include prompt engineering and output filtering.

### Environmental Impact

Training emitted approximately 0.5 kg CO₂ equivalent, based on GPU usage and regional electricity factors.

## Reproduction

For detailed technical information about the training process, methodology, and evaluation results, see the [technical report](https://github.com/mshojaei77/gemma-3n-E4B-persian-qlora/blob/main/technical_report.md).

## Related Resources

- **Base Model**: `unsloth/gemma-3n-E4B-it`.
- **Adapters**: `mshojaei77/gemma-3n-E4B-persian-lora-adapters`.
- **Dataset**: `mshojaei77/persian-gk`.
- **GitHub**: [mshojaei77/gemma-3n-E4B-persian-qlora](https://github.com/mshojaei77/gemma-3n-E4B-persian-qlora).
- **Frameworks**: Unsloth (arXiv:2305.14314), PEFT (arXiv:2106.09685), Transformers.

## Citation

```bibtex
@misc{gemma3n_persian_2025,
  title={Gemma-3N 4B Persian Fine-tuned Model},
  author={Shojaei, M.},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/mshojaei77/gemma-3n-E4B-persian},
  note={Fine-tuned using QLoRA on Persian General Knowledge dataset}
}
```

Dataset citation:

```bibtex
@misc{persian_gk_2025,
  title={persian-gk: Persian General Knowledge Chat Dataset},
  author={Shojaei, M. and Contributors},
  year={2025},
  url={https://huggingface.co/datasets/mshojaei77/persian-gk}
}
```

## License

Licensed under the Gemma Terms of Use (https://ai.google.dev/gemma/terms). Downstream users must adhere to these terms.

## Acknowledgments

Thanks to Google for the Gemma architecture, the Unsloth team for training tools, Hugging Face for hosting, and the Persian NLP community for contributions.
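The widget examples and Quick Start above both rely on Gemma's turn-based markers. As a minimal sketch of that wire format (in real use, `tokenizer.apply_chat_template` builds this string for you; this is illustration only):

```python
# Sketch of the Gemma <start_of_turn>/<end_of_turn> prompt format shown in
# the widget examples above. Not a replacement for apply_chat_template.
def build_gemma_prompt(messages):
    parts = []
    for message in messages:
        parts.append(f"<start_of_turn>{message['role']}\n{message['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # cue the model's reply
    return "".join(parts)

prompt = build_gemma_prompt([{"role": "user", "content": "سلام!"}])
print(prompt)
```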
modelId: pouier/blockassist-bc-pudgy_thriving_okapi_1755900447
author: pouier
last_modified: 2025-08-22T22:07:36Z
downloads: 0
likes: 0
library_name: null
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pudgy thriving okapi", "arxiv:2504.07091", "region:us" ]
pipeline_tag: null
createdAt: 2025-08-22T22:07:27Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy thriving okapi
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: mradermacher/DeepSeek-V3.1-GGUF
author: mradermacher
last_modified: 2025-08-22T21:58:32Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "en", "base_model:deepseek-ai/DeepSeek-V3.1", "base_model:finetune:deepseek-ai/DeepSeek-V3.1", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-08-22T07:10:47Z
---
base_model: deepseek-ai/DeepSeek-V3.1
language:
- en
library_name: transformers
license: mit
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/deepseek-ai/DeepSeek-V3.1

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-V3.1-GGUF).***

Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q2_K.gguf.part1of5) [P2](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q2_K.gguf.part2of5) [P3](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q2_K.gguf.part3of5) [P4](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q2_K.gguf.part4of5) [P5](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q2_K.gguf.part5of5) | Q2_K | 244.2 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_S.gguf.part6of6) | Q3_K_S | 289.3 | |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_M.gguf.part1of7) [P2](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_M.gguf.part2of7) [P3](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_M.gguf.part3of7) [P4](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_M.gguf.part4of7) [P5](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_M.gguf.part5of7) [P6](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_M.gguf.part6of7) [P7](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q3_K_M.gguf.part7of7) | Q3_K_M | 319.4 | lower quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q4_K_S.gguf.part1of8) [P2](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q4_K_S.gguf.part2of8) [P3](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q4_K_S.gguf.part3of8) [P4](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q4_K_S.gguf.part4of8) [P5](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q4_K_S.gguf.part5of8) [P6](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q4_K_S.gguf.part6of8) [P7](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q4_K_S.gguf.part7of8) [P8](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q4_K_S.gguf.part8of8) | Q4_K_S | 380.2 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part01of12) [P2](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part02of12) [P3](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part03of12) [P4](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part04of12) [P5](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part05of12) [P6](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part06of12) [P7](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part07of12) [P8](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part08of12) [P9](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part09of12) [P10](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part10of12) [P11](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part11of12) [P12](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q6_K.gguf.part12of12) | Q6_K | 551.0 | very good quality |
| [P1](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part01of18) [P2](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part02of18) [P3](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part03of18) [P4](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part04of18) [P5](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part05of18) [P6](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part06of18) [P7](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part07of18) [P8](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part08of18) [P9](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part09of18) [P10](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part10of18) [P11](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part11of18) [P12](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part12of18) [P13](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part13of18) [P14](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part14of18) [P15](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part15of18) [P16](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part16of18) [P17](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part17of18) [P18](https://huggingface.co/mradermacher/DeepSeek-V3.1-GGUF/resolve/main/DeepSeek-V3.1.Q8_0.gguf.part18of18) | Q8_0 | 713.4 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
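The Usage section above points to TheBloke's READMEs for concatenating multi-part files; the core idea is simply joining the parts in order with `cat`. A sketch with tiny dummy files standing in for the real multi-gigabyte `partNofM` parts:

```shell
# Dummy stand-ins for parts named like DeepSeek-V3.1.Q2_K.gguf.partNof5
printf 'AAA' > model.gguf.part1of3
printf 'BBB' > model.gguf.part2of3
printf 'CCC' > model.gguf.part3of3

# Concatenate the parts, in order, into a single usable .gguf file
cat model.gguf.part1of3 model.gguf.part2of3 model.gguf.part3of3 > model.gguf

# The joined file is simply the parts back to back (9 bytes here); for
# zero-padded names like part01of18, a glob in sorted order also works:
cat model.gguf.part?of3 > model.gguf
```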
defk0n1/llama3b500k16bit
defk0n1
2025-08-22T21:56:11Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T21:56:07Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** defk0n1 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
blingster/checkpoints
blingster
2025-08-22T21:56:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T20:23:26Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: checkpoints tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for checkpoints This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="blingster/checkpoints", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lingster/gemini-270m-pp/runs/llrglm8i) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.4 - Pytorch: 2.8.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mang3dd/blockassist-bc-tangled_slithering_alligator_1755897998
mang3dd
2025-08-22T21:52:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T21:52:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755897822
coelacanthxyz
2025-08-22T21:51:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T21:51:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kapalbalap/blockassist-bc-peaceful_wary_owl_1755899104
kapalbalap
2025-08-22T21:46:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T21:45:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/santa-muerte-concept-flux-ethanar
Muapi
2025-08-22T21:44:40Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-22T21:44:30Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Santa Muerte Concept FLUX @Ethanar ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:819331@916191", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/zavy-s-cinematic-stills-flux
Muapi
2025-08-22T21:42:22Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-22T21:42:10Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Zavy's Cinematic Stills - Flux ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: zavy-cnmtc ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:737279@824491", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755898775
ggozzy
2025-08-22T21:40:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T21:40:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bakhil-aissa/YOLO-11-invoice_parsing
bakhil-aissa
2025-08-22T21:10:12Z
0
0
null
[ "object-detection", "base_model:Ultralytics/YOLO11", "base_model:finetune:Ultralytics/YOLO11", "region:us" ]
object-detection
2025-08-22T20:52:07Z
--- base_model: - Ultralytics/YOLO11 pipeline_tag: object-detection --- # YOLO Model Card ## Model Details - **Model Name**: YOLO - **Version**: 11 - **Framework**: Ultralytics YOLO - **License**: MIT - **Model Type**: Object Detection --- ## Intended Use - **Primary use case**: Invoice parsing – detecting fields such as dates, totals, and company names - **Out of scope**: Detecting non-invoice documents, handwriting recognition --- ## Model Architecture - **Base Architecture**: YOLO11 - **Input Resolution**: 640 × 640 --- ## Dataset - **Source**: https://universe.roboflow.com/davis-stepanskis/invoice-lbv9f/dataset/2 - **Size**: 6671 images - **Classes**: {0: 'Discount_Percentage', 1: 'Due_Date', 2: 'Email_Client', 3: 'Name_Client', 4: 'Products', 5: 'Remise', 6: 'Subtotal', 7: 'Tax', 8: 'Tax_Precentage', 9: 'Tel_Client', 10: 'billing address', 11: 'header', 12: 'invoice date', 13: 'invoice number', 14: 'shipping address', 15: 'total'} - **Preprocessing**: - Image resizing: 640 × 640 with letterboxing --- ## Performance - **Evaluation Metrics**: - mAP@[.5:.95]: 0.624 - Precision: 0.916 - Recall: 0.903 - **Hardware Used**: T4 x 2 --- ## Limitations - Sensitive to **document layout variations** - May fail on **low-resolution or handwritten invoices** - Not robust to **domain shifts** (different invoice templates not seen during training) - Requires **OCR post-processing** for text extraction --- ## Ethical Considerations - **Biases**: May not generalize well to invoices from different countries/languages - **Privacy**: Invoices often contain personal and financial information – ensure compliance with GDPR/CCPA before using this model --- ## How to Use ```python from ultralytics import YOLO # Load model model = YOLO("best.pt") # Run inference results = model.predict("invoice.jpg") # Access results for box in results[0].boxes: print(box.xyxy, box.conf, box.cls) ```
weruior/blockassist-bc-bristly_striped_flamingo_1755896843
weruior
2025-08-22T21:08:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bristly striped flamingo", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T21:07:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bristly striped flamingo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1755894468
unitova
2025-08-22T20:54:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T20:53:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
SvalTek/Arcadia-12B-Fusion-test0
SvalTek
2025-08-22T20:47:30Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:SvalTek/Arcadia-12B-Fusion", "base_model:finetune:SvalTek/Arcadia-12B-Fusion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T20:38:27Z
--- base_model: SvalTek/Arcadia-12B-Fusion tags: - text-generation-inference - transformers - unsloth - mistral license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** SvalTek - **License:** apache-2.0 - **Finetuned from model :** SvalTek/Arcadia-12B-Fusion This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
fopppyu/blockassist-bc-mimic_peckish_cockroach_1755895470
fopppyu
2025-08-22T20:45:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mimic peckish cockroach", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T20:44:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mimic peckish cockroach --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kz919/llama3_3b_cautious_chinchilla_8152025
kz919
2025-08-22T20:43:36Z
18
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:HuggingFaceFW/fineweb-edu", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-15T15:14:55Z
--- library_name: transformers license: apache-2.0 datasets: - HuggingFaceFW/fineweb-edu language: - en --- # Model Details This model is a 3B llama3 model pretrained from scratch with torchtitan on fineweb-edu using the C_AdamW optimizer, following the 20x Chinchilla rule (60B tokens seen). # How to use ``` import torch from transformers import pipeline pipe = pipeline( "text-generation", model="kz919/llama3_3b_chinchilla_8142025", ) print(pipe("The key to life is")) ``` # Downstream Eval ## ARC, Hellaswag, Lambada_OpenAI, OpenbookQA, PIQA ``` lm_eval --model hf --model_args pretrained=kz919/llama3_3b_chinchilla_8142025,dtype="bfloat16",add_bos_token=True --tasks lambada_openai,hellaswag,piqa,arc_easy,arc_challenge,openbookqa --device cuda:7 --batch_size 8 ``` | Tasks |Version|Filter|n-shot| Metric | | Value | |Stderr| |--------------|------:|------|-----:|----------|---|------:|---|-----:| |arc_challenge | 1|none | 0|acc |↑ | 0.2892|± |0.0133| | | |none | 0|acc_norm |↑ | 0.2892|± |0.0133| |arc_easy | 1|none | 0|acc |↑ | 0.6162|± |0.0100| | | |none | 0|acc_norm |↑ | 0.5311|± |0.0102| |hellaswag | 1|none | 0|acc |↑ | 0.3698|± |0.0048| | | |none | 0|acc_norm |↑ | 0.4611|± |0.0050| |lambada_openai| 1|none | 0|acc |↑ | 0.3670|± |0.0067| | | |none | 0|perplexity|↓ |34.2265|± |1.4167| |openbookqa | 1|none | 0|acc |↑ | 0.2380|± |0.0191| | | |none | 0|acc_norm |↑ | 0.3460|± |0.0213| |piqa | 1|none | 0|acc |↑ | 0.6904|± |0.0108| | | |none | 0|acc_norm |↑ | 0.6975|± |0.0107| ## MMLU | Groups |Version|Filter|n-shot|Metric| |Value | |Stderr| |------------------|------:|------|------|------|---|-----:|---|-----:| |mmlu | 2|none | |acc |↑ |0.2453|± |0.0036| | - humanities | 2|none | |acc |↑ |0.2502|± |0.0063| | - other | 2|none | |acc |↑ |0.2620|± |0.0079| | - social sciences| 2|none | |acc |↑ |0.2320|± |0.0076| | - stem | 2|none | |acc |↑ |0.2347|± |0.0076|
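As a quick sanity check of the stated token budget, the 20x Chinchilla-style rule applied to a 3B-parameter model reproduces the 60B figure in the card (a small illustrative sketch, not code from the model repo):

```python
# Chinchilla-style rule of thumb: training tokens ~ 20 x parameters.
params = 3_000_000_000        # 3B-parameter llama3 model
tokens = 20 * params          # 20x multiplier stated in the card
print(f"{tokens:,} tokens")   # 60,000,000,000 tokens seen
```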
Hiver77/MDT
Hiver77
2025-08-22T20:29:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-22T20:29:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LarryAIDraw/Zhen_Ji_Pony
LarryAIDraw
2025-08-22T20:19:41Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-08-22T14:35:53Z
--- license: creativeml-openrail-m --- https://civitai.com/models/896510?modelVersionId=1003194
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755891072
katanyasekolah
2025-08-22T20:01:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T20:01:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755892259
Vasya777
2025-08-22T19:51:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T19:51:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering enormous sloth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755890702
ggozzy
2025-08-22T19:26:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T19:26:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WWD2F3F3F/deberta_s5_1_v2
WWD2F3F3F
2025-08-22T19:24:09Z
17
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-07-08T17:32:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
saracandu/stldec_random_16
saracandu
2025-08-22T18:51:34Z
39
0
transformers
[ "transformers", "safetensors", "stldec16", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-08-20T14:43:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755887202
ggozzy
2025-08-22T18:27:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T18:27:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Najin06/Tiny_by_Najin
Najin06
2025-08-22T18:17:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-22T18:17:17Z
--- license: apache-2.0 ---
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755886198
Vasya777
2025-08-22T18:10:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T18:10:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering enormous sloth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kayacrypto/blockassist-bc-thriving_barky_wolf_1755885652
kayacrypto
2025-08-22T18:02:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving barky wolf", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T18:02:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving barky wolf --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nabilwalidrafi/medgemma-skinlesion-rafi-4-4-augdynamic3
nabilwalidrafi
2025-08-22T17:56:31Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-08-22T15:36:47Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: medgemma-skinlesion-rafi-4-4-augdynamic3 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for medgemma-skinlesion-rafi-4-4-augdynamic3 This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nabilwalidrafi/medgemma-skinlesion-rafi-4-4-augdynamic3", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mohda/blockassist-bc-regal_fierce_hummingbird_1755884659
mohda
2025-08-22T17:45:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal fierce hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T17:45:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal fierce hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coastalcph/gemma-2b-it-1t_gcd_sycophancy_pout_3t_diff_sycophant
coastalcph
2025-08-22T17:43:46Z
0
0
null
[ "safetensors", "gemma", "region:us" ]
null
2025-08-22T17:43:13Z
# Combined Task Vector Model This model was created by combining task vectors from multiple fine-tuned models. ## Task Vector Computation ```python t_1 = TaskVector("google/gemma-2b-it", "coastalcph/gemma-2b-it-gcd_sycophancy_2e-04") t_2 = TaskVector("google/gemma-2b-it", "coastalcph/gemma-2b-it-personality-non-sycophancy") t_3 = TaskVector("google/gemma-2b-it", "coastalcph/gemma-2b-it-personality-sycophancy") t_combined = 1.0 * t_1 + 3.0 * t_2 - 3.0 * t_3 new_model = t_combined.apply_to("google/gemma-2b-it", scaling_coef=1.0) ``` Models Used - Base Model: https://huggingface.co/google/gemma-2b-it - Fine-tuned Model 1: https://huggingface.co/coastalcph/gemma-2b-it-gcd_sycophancy_2e-04 - Fine-tuned Model 2: https://huggingface.co/coastalcph/gemma-2b-it-personality-non-sycophancy - Fine-tuned Model 3: https://huggingface.co/coastalcph/gemma-2b-it-personality-sycophancy Technical Details - Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722 - Task Vector Method: Additive combination - Args: { "pretrained_model": "google/gemma-2b-it", "finetuned_model1": "coastalcph/gemma-2b-it-gcd_sycophancy_2e-04", "finetuned_model2": "coastalcph/gemma-2b-it-personality-non-sycophancy", "finetuned_model3": "coastalcph/gemma-2b-it-personality-sycophancy", "output_model_name": "coastalcph/gemma-2b-it-1t_gcd_sycophancy_pout_3t_diff_sycophant", "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug", "scaling_coef": 1.0, "apply_line_scaling_t1": false, "apply_line_scaling_t2": false, "apply_line_scaling_t3": false, "combine_diff_projecting_out": true, "scale_t1": 1.0, "scale_t2": 3.0, "scale_t3": 3.0 }
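The `TaskVector` class used in the computation above is not defined in this card; a minimal sketch of the interface it assumes is below. Plain Python floats stand in for the tensor state_dicts the real implementation operates on, so this is an illustrative simplification, not the actual creation script.

```python
class TaskVector:
    """Per-weight difference between a fine-tuned model and its base model.

    Sketch only: weights are dicts of floats here; the real implementation
    uses torch state_dicts loaded from checkpoints named by model id.
    """

    def __init__(self, base=None, finetuned=None, vector=None):
        if vector is not None:
            self.vector = vector  # already-computed difference
        else:
            # task vector = fine-tuned weights minus base weights
            self.vector = {k: finetuned[k] - base[k] for k in base}

    def __add__(self, other):
        return TaskVector(vector={k: self.vector[k] + other.vector[k]
                                  for k in self.vector})

    def __sub__(self, other):
        return self + (-1.0) * other

    def __rmul__(self, coef):
        # enables expressions like `3.0 * t_2`
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def apply_to(self, base, scaling_coef=1.0):
        # merge the (scaled) combined task vector back into the base weights
        return {k: base[k] + scaling_coef * self.vector[k] for k in base}
```

With this interface, `1.0 * t_1 + 3.0 * t_2 - 3.0 * t_3` scales and sums the per-weight differences, and `apply_to` adds the combined difference onto the base weights.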
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755884241
ggozzy
2025-08-22T17:38:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T17:38:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-2t_diff_sycophant
coastalcph
2025-08-22T17:35:53Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-22T17:35:11Z
# Combined Task Vector Model This model was created by combining task vectors from multiple fine-tuned models. ## Task Vector Computation ```python t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy") t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-personality-non-sycophancy") t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-personality-sycophancy") t_combined = 1.0 * t_1 + 2.0 * t_2 - 2.0 * t_3 new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0) ``` Models Used - Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct - Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy - Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-personality-non-sycophancy - Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-personality-sycophancy Technical Details - Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722 - Task Vector Method: Additive combination - Args: { "pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct", "finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy", "finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-personality-non-sycophancy", "finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-personality-sycophancy", "output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-2t_diff_sycophant", "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug", "scaling_coef": 1.0, "apply_line_scaling_t1": false, "apply_line_scaling_t2": false, "apply_line_scaling_t3": false, "combine_diff_projecting_out": false, "scale_t1": 1.0, "scale_t2": 2.0, "scale_t3": 2.0 }
New-Clip-ratri-apu-viral-video-XX/Orginal.full.Videos.ratri.apu.viral.video.Official.Tutorial
New-Clip-ratri-apu-viral-video-XX
2025-08-22T17:34:16Z
0
0
null
[ "region:us" ]
null
2025-08-22T17:34:00Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/33jbhbsj?viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
jouuer/blockassist-bc-powerful_thick_termite_1755883412
jouuer
2025-08-22T17:24:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "powerful thick termite", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T17:23:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - powerful thick termite --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
devmangukiya/TwinLlama-3.1-8B
devmangukiya
2025-08-22T17:15:26Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:mlabonne/TwinLlama-3.1-8B", "base_model:finetune:mlabonne/TwinLlama-3.1-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-22T17:14:55Z
--- base_model: mlabonne/TwinLlama-3.1-8B tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** devmangukiya - **License:** apache-2.0 - **Finetuned from model :** mlabonne/TwinLlama-3.1-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
pobiiiiiii/blockassist-bc-ravenous_yapping_ferret_1755882316
pobiiiiiii
2025-08-22T17:06:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "ravenous yapping ferret", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T17:06:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - ravenous yapping ferret --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
frankli202/Phi-3.5-mini-instruct_lora_sft_train_2025-08-22-lr-1.0e-4-lora-32-e-callm-lite-for-sima-1k
frankli202
2025-08-22T16:58:45Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "llama-factory", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T16:56:44Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
loopping/blockassist-bc-roaring_flightless_ibis_1755881771
loopping
2025-08-22T16:56:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring flightless ibis", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:56:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring flightless ibis --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
joanna302/Qwen3-8B-Base_pag_mt_alpaca_0.33_part_SFT_8e-05
joanna302
2025-08-22T16:56:04Z
15
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "unsloth", "trl", "sft", "conversational", "base_model:unsloth/Qwen3-8B-Base", "base_model:finetune:unsloth/Qwen3-8B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T06:13:08Z
--- base_model: unsloth/Qwen3-8B-Base library_name: transformers model_name: Qwen3-8B-Base_pag_mt_alpaca_0.33_part_SFT_8e-05 tags: - generated_from_trainer - unsloth - trl - sft licence: license --- # Model Card for Qwen3-8B-Base_pag_mt_alpaca_0.33_part_SFT_8e-05 This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_mt_alpaca_0.33_part_SFT_8e-05", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_mt_alpaca_0.33_part_SFT_8e-05/runs/jgo8e3qd) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
xibitthenoob/Qwen-3-32B-Medical-Reasoning
xibitthenoob
2025-08-22T16:56:00Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-17T19:54:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
joanna302/Qwen3-8B-Base_pag_mt_alpaca_1_part_SFT_2e-05
joanna302
2025-08-22T16:54:49Z
14
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "sft", "trl", "unsloth", "conversational", "base_model:unsloth/Qwen3-8B-Base", "base_model:finetune:unsloth/Qwen3-8B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T08:22:48Z
--- base_model: unsloth/Qwen3-8B-Base library_name: transformers model_name: Qwen3-8B-Base_pag_mt_alpaca_1_part_SFT_2e-05 tags: - generated_from_trainer - sft - trl - unsloth licence: license --- # Model Card for Qwen3-8B-Base_pag_mt_alpaca_1_part_SFT_2e-05 This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_mt_alpaca_1_part_SFT_2e-05", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_mt_alpaca_1_part_SFT_2e-05/runs/z5mltwit) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rambetiko/blockassist-bc-soft_lanky_marmot_1755880998
rambetiko
2025-08-22T16:50:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "soft lanky marmot", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:50:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - soft lanky marmot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
shikhinvc/gpt-oss-20b-fahdmirza
shikhinvc
2025-08-22T16:41:19Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-08-22T16:14:51Z
--- base_model: openai/gpt-oss-20b library_name: transformers model_name: gpt-oss-20b-fahdmirza tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for gpt-oss-20b-fahdmirza This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shikhinvc/gpt-oss-20b-fahdmirza", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.7.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nema122/blockassist-bc-robust_fluffy_ram_1755880800
nema122
2025-08-22T16:41:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "robust fluffy ram", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:41:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - robust fluffy ram --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pictgensupport/golf
pictgensupport
2025-08-22T16:40:20Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-22T16:40:18Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: golf --- # Golf <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `golf` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('pictgensupport/golf', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
New-Clip-prabh-sandhu-viral-video-links/Orginal.full.videos.prabh.sandhu.Viral.Video.Official.Tutorial
New-Clip-prabh-sandhu-viral-video-links
2025-08-22T16:39:17Z
0
0
null
[ "region:us" ]
null
2025-08-22T16:39:03Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/33jbhbsj?viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
BennyDaBall/gemma-3-27b-tools-mlx-4bit
BennyDaBall
2025-08-22T16:37:38Z
63
0
mlx
[ "mlx", "safetensors", "gemma3", "gemma-3", "4-bit", "base_model:ZySec-AI/gemma-3-27b-tools", "base_model:quantized:ZySec-AI/gemma-3-27b-tools", "license:apache-2.0", "region:us" ]
null
2025-08-19T20:38:29Z
--- license: apache-2.0 tags: - mlx - gemma-3 - 4-bit base_model: - ZySec-AI/gemma-3-27b-tools new_version: ZySec-AI/gemma-3-27b-tools --- # Gemma 3 27B Tools - 4-bit MLX Quantization This is a 4-bit MLX quantization of the [ZySec-AI/gemma-3-27b-tools](https://huggingface.co/ZySec-AI/gemma-3-27b-tools) model. ## Model Description This repository contains a quantized version of Google's Gemma 3 27B model with tools, optimized for running with Apple's MLX framework. The quantization process reduces the model's size and computational requirements, making it suitable for deployment on devices with limited resources, such as Apple Silicon Macs. For more details on the original model, please refer to the [original model card](https://huggingface.co/ZySec-AI/gemma-3-27b-tools).
kavpro/blockassist-bc-tall_lively_caribou_1755880496
kavpro
2025-08-22T16:35:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall lively caribou", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:35:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall lively caribou --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755880474
ggozzy
2025-08-22T16:35:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:35:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
beinganamika/Llama32_fine_tuned
beinganamika
2025-08-22T16:34:49Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-22T16:16:47Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** beinganamika - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
yobellee/rl_course_vizdoom_health_gathering_supreme
yobellee
2025-08-22T16:30:55Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-22T12:44:17Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 14.81 +/- 5.56 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r yobellee/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
luismirandacruz/Reinforce-CartPole-v1
luismirandacruz
2025-08-22T16:29:58Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-08-22T16:29:48Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
VoilaRaj/81_d_KkHUHF
VoilaRaj
2025-08-22T16:27:23Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-22T16:23:27Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
thanobidex/blockassist-bc-colorful_shiny_hare_1755878141
thanobidex
2025-08-22T16:23:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:23:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nema122/blockassist-bc-robust_fluffy_ram_1755879298
nema122
2025-08-22T16:16:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "robust fluffy ram", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:16:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - robust fluffy ram --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AlekseyKorshuk/MogFace
AlekseyKorshuk
2025-08-22T16:14:48Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-22T16:13:13Z
--- license: apache-2.0 ---
Chama99/monkey-sterilization-model
Chama99
2025-08-22T16:13:47Z
0
0
null
[ "region:us" ]
null
2025-08-22T16:10:35Z
# Monkey Sterilization Prediction Model This repository contains a trained `LinearRegression` model to predict the number of monkeys sterilized per year based on historical data. ## Model Details * **Model Type:** Linear Regression * **Input:** Year (numeric) * **Output:** Predicted number of monkeys sterilized * **Training Data:** Provided in the `Monkey.xlsx` file. ## Files * `linear_regression_model.pkl`: The saved scikit-learn Linear Regression model object. ## Usage You can load this model using `joblib` and make predictions: ```python import joblib from huggingface_hub import hf_hub_download model_path = hf_hub_download(repo_id="Chama99/monkey-sterilization-model", filename="linear_regression_model.pkl") model = joblib.load(model_path) # Example prediction for year 2030 predicted_sterilized = model.predict([[2030]]) print(f"Predicted sterilized monkeys in 2030: {int(predicted_sterilized[0])}") ```
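Under the hood, fitting a single-feature `LinearRegression` reduces to closed-form ordinary least squares on (year, count) pairs. A minimal pure-Python sketch of that computation — using hypothetical yearly counts, not the actual `Monkey.xlsx` data:

```python
# Closed-form ordinary least squares: what scikit-learn's LinearRegression
# computes for a single feature (year -> sterilizations predicted).
# The yearly counts below are hypothetical, NOT the actual Monkey.xlsx data.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error of y = slope*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2018, 2019, 2020, 2021, 2022, 2023]
counts = [120, 135, 150, 165, 180, 195]  # hypothetical sterilization counts

slope, intercept = fit_line(years, counts)
print(f"Predicted sterilized monkeys in 2030: {slope * 2030 + intercept:.0f}")  # → 300
```

On the real data, the pickled model's `predict([[2030]])` performs exactly this `slope * 2030 + intercept` evaluation with coefficients fit to `Monkey.xlsx`.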
valleriee/pii-model-18
valleriee
2025-08-22T16:13:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T16:11:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MCeut/so101_smolvla_red_cube_red_bin
MCeut
2025-08-22T16:11:47Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:MCeut/cube_test1", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-22T16:10:56Z
--- base_model: lerobot/smolvla_base datasets: MCeut/cube_test1 library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - robotics - lerobot - smolvla --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=smolvla \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755877494
vwzyrraz7l
2025-08-22T16:11:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:11:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755877441
coelacanthxyz
2025-08-22T16:11:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:11:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Chukky10z/blockassist-bc-mammalian_jumping_cougar_1755878980
Chukky10z
2025-08-22T16:10:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian jumping cougar", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:10:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian jumping cougar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Brokersponsor/n8n
Brokersponsor
2025-08-22T16:07:55Z
0
0
null
[ "text-generation", "es", "base_model:openai/gpt-oss-120b", "base_model:finetune:openai/gpt-oss-120b", "license:mit", "region:us" ]
text-generation
2025-08-22T16:06:46Z
--- license: mit language: - es base_model: - openai/gpt-oss-120b pipeline_tag: text-generation ---
surbhim18/MultilingualSDXL
surbhim18
2025-08-22T16:01:39Z
0
0
null
[ "hi", "bn", "as", "gu", "kn", "ml", "mr", "ne", "or", "pa", "sa", "ta", "te", "ur", "ks", "es", "fr", "ja", "zh", "tr", "de", "ar", "pt", "ru", "vi", "it", "ko", "base_model:stabilityai/sdxl-turbo", "base_model:finetune:stabilityai/sdxl-turbo", "license:mit", "region:us" ]
null
2025-08-19T13:25:03Z
--- license: mit base_model: - stabilityai/sdxl-turbo language: - hi - bn - as - gu - kn - ml - mr - ne - or - pa - sa - ta - te - ur - ks - es - fr - ja - zh - tr - de - ar - pt - ru - vi - it - ko --- **Use with the Stable Diffusion Pipeline** ```python import torch from diffusers import AutoPipelineForText2Image from transformers import CLIPTokenizer, CLIPTextModel device = "cuda" if torch.cuda.is_available() else "cpu" lang = "hin_Deva" # Hindi # Load pipeline pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo") # Load the multilingual tokenizer tokenizer = CLIPTokenizer.from_pretrained("tokenizers/multilingual") pipe.tokenizer = tokenizer pipe.text_encoder.resize_token_embeddings(len(tokenizer)) # Load the fine-tuned text encoder state_dict = torch.load(f"models/{lang}/{lang}_text_encoder.pth") new_text_encoder = CLIPTextModel(config=pipe.text_encoder.config) new_text_encoder.load_state_dict(state_dict) new_text_encoder = new_text_encoder.to(device) pipe.text_encoder = new_text_encoder pipe = pipe.to(device) # Generate and save image caption = "गाँव का शांतिपूर्ण दृश्य|"  # "A peaceful scene of a village." image = pipe(caption).images[0] image.save("example.png") ```
Chukky10z/blockassist-bc-mammalian_jumping_cougar_1755878431
Chukky10z
2025-08-22T16:01:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian jumping cougar", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T16:00:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian jumping cougar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755878322
ggozzy
2025-08-22T15:59:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:59:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lucasluz/questions_optimized
lucasluz
2025-08-22T15:49:51Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "unsloth", "endpoints_compatible", "region:us" ]
null
2025-08-21T16:50:46Z
--- base_model: unsloth/llama-3.2-1b-bnb-4bit library_name: transformers model_name: questions_optimized tags: - generated_from_trainer - sft - trl - unsloth licence: license --- # Model Card for questions_optimized This model is a fine-tuned version of [unsloth/llama-3.2-1b-bnb-4bit](https://huggingface.co/unsloth/llama-3.2-1b-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lucasluz/questions_optimized", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.3 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
anwensmythadv/blockassist-bc-pawing_stocky_walrus_1755875367
anwensmythadv
2025-08-22T15:38:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pawing stocky walrus", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:38:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pawing stocky walrus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
elleshavff/blockassist-bc-horned_energetic_parrot_1755875411
elleshavff
2025-08-22T15:36:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "horned energetic parrot", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:36:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - horned energetic parrot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
useable/TestQwen8Bit_RL
useable
2025-08-22T15:32:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T15:28:24Z
--- base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** useable - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
VoilaRaj/81_d_lxTRFg
VoilaRaj
2025-08-22T15:31:59Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-22T15:28:03Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755876438
ggozzy
2025-08-22T15:28:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:28:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755873992
rvipitkirubbe
2025-08-22T15:27:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mottled foraging ape", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:27:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mottled foraging ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
devashish07/tinyllama-finetuning-trl-smoltalk-small
devashish07
2025-08-22T15:26:23Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-22T15:26:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mohda/blockassist-bc-regal_fierce_hummingbird_1755876334
mohda
2025-08-22T15:26:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal fierce hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:26:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal fierce hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mashiwan/blockassist-bc-long_spotted_weasel_1755875903
mashiwan
2025-08-22T15:19:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "long spotted weasel", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:19:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - long spotted weasel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
LabrvUaem/Chatbot
LabrvUaem
2025-08-22T15:11:22Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "4-bit", "bitsandbytes", "region:us" ]
null
2025-08-22T15:06:56Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
mashiwan/blockassist-bc-long_spotted_weasel_1755875091
mashiwan
2025-08-22T15:06:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "long spotted weasel", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:06:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - long spotted weasel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mohda/blockassist-bc-regal_fierce_hummingbird_1755874767
mohda
2025-08-22T15:00:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal fierce hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T15:00:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal fierce hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755872824
hakimjustbao
2025-08-22T14:55:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T14:55:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FreedomIntelligence/ShizhenGPT-7B-Omni
FreedomIntelligence
2025-08-22T14:55:24Z
1
1
null
[ "safetensors", "Traditional Chinese Medicine", "Multimodal LLM", "multimodal", "Image-text-to-text", "Audio-text-to-text", "text-generation", "zh", "dataset:FreedomIntelligence/TCM-Pretrain-Data-ShizhenGPT", "dataset:FreedomIntelligence/TCM-Instruction-Tuning-ShizhenGPT", "arxiv:2508.14706", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
text-generation
2025-08-21T02:22:24Z
---
license: apache-2.0
datasets:
- FreedomIntelligence/TCM-Pretrain-Data-ShizhenGPT
- FreedomIntelligence/TCM-Instruction-Tuning-ShizhenGPT
language:
- zh
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
tags:
- Traditional Chinese Medicine
- Multimodal LLM
- multimodal
- Image-text-to-text
- Audio-text-to-text
---

<div align="center">
<h1>
ShizhenGPT-7B-Omni
</h1>
</div>

<div align="center">
<a href="https://github.com/FreedomIntelligence/ShizhenGPT" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2508.14706" target="_blank">Paper</a>
</div>

**ShizhenGPT** is the first multimodal LLM for Traditional Chinese Medicine (TCM). It not only possesses strong expertise in TCM, but also supports TCM multimodal diagnostic capabilities, which involve looking (望), listening/smelling (闻), questioning (问), and pulse-taking (切).

👉 More details on GitHub: [ShizhenGPT](https://github.com/FreedomIntelligence/ShizhenGPT)

# <span>Model Info</span>

> **ShizhenGPT-7B-Omni** is the full version of ShizhenGPT-7B, supporting multiple modalities of input.
If your needs only involve text or image input, you can consider using other versions:

| | Parameters | Supported Modalities | Link |
| ---------------------- | ---------- | ----------------------------- | --------------------------------------------------------------------- |
| **ShizhenGPT-7B-LLM** | 7B | Text | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-7B-LLM) |
| **ShizhenGPT-7B-VL** | 7B | Text, Image Understanding | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-7B-VL) |
| **ShizhenGPT-7B-Omni** | 7B | Text, Four Diagnostics (望闻问切) | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-7B-Omni) |
| **ShizhenGPT-32B-LLM** | 32B | Text | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-32B-LLM) |
| **ShizhenGPT-32B-VL** | 32B | Text, Image Understanding | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-32B-VL) |
| **ShizhenGPT-32B-Omni** | 32B | Text, Four Diagnostics (望闻问切) | Available soon |

*Note: The LLM and VL models are parameter-split variants of ShizhenGPT-7B-Omni. Since their architectures align with Qwen2.5 and Qwen2.5-VL, they are easier to adapt to different environments. In contrast, ShizhenGPT-7B-Omni requires `transformers==4.51.0`.*

# <span>Usage</span>

To use `ShizhenGPT-7B-Omni`, you need `transformers==4.51.0` and must set `trust_remote_code` to True.
You can run the following script:

```python
from transformers import AutoModelForCausalLM, AutoProcessor
from qwen_vl_utils import fetch_image
import librosa

# Load model and processor
model_path = 'FreedomIntelligence/ShizhenGPT-7B-Omni'
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, torch_dtype="auto").cuda()
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

def generate(text, images=None, signals=None):
    # Prepend one image placeholder per input image and fetch the images
    if images:
        text = ''.join(['<|vision_start|><|image_pad|><|vision_end|>'] * len(images)) + text
        processed_images = [fetch_image({"type": "image", "image": img, "max_pixels": 360*420})
                            for img in images if img is not None]
    else:
        processed_images = None

    # Prepend one audio placeholder per input signal and load the audio
    if signals:
        text = ''.join(['<|audio_bos|><|AUDIO|><|audio_eos|>'] * len(signals)) + text
        processed_signals = [librosa.load(signal, sr=processor.feature_extractor.sampling_rate)[0]
                             for signal in signals if signal is not None]
    else:
        processed_signals = None

    # Apply the chat template
    messages = [{'role': 'user', 'content': text}]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    # Process the input data
    input_data = processor(
        text=[text],
        audios=processed_signals,
        images=processed_images,
        return_tensors="pt",
        padding=True
    )
    input_data = input_data.to(model.device)

    # Generate, then strip the prompt tokens from the output
    generated_ids = model.generate(**input_data, max_new_tokens=1024)
    generated_ids_trimmed = [
        out_ids[len(in_ids):] for in_ids, out_ids in zip(input_data.input_ids, generated_ids)
    ]
    output_text = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )
    return output_text[0]

# Example usage
# Text input
print(generate('为什么我总是手脚冰凉,是阳虚吗?'))

# Image input
print(generate('请从中医角度解读这张舌苔。', images=['path_to_image']))

# Audio input
print(generate('请回答这个语音问题', signals=['path_to_audio']))
```

# <span>📖 Citation</span>

```
@misc{chen2025shizhengptmultimodalllmstraditional,
      title={ShizhenGPT: Towards Multimodal LLMs for Traditional Chinese Medicine},
      author={Junying Chen and Zhenyang Cai and Zhiheng Liu and Yunjin Yang and Rongsheng Wang and Qingying Xiao and Xiangyi Feng and Zhan Su and Jing Guo and Xiang Wan and Guangjun Yu and Haizhou Li and Benyou Wang},
      year={2025},
      eprint={2508.14706},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.14706},
}
```
aleebaster/blockassist-bc-sly_eager_boar_1755872816
aleebaster
2025-08-22T14:53:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T14:53:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755872431
kojeklollipop
2025-08-22T14:49:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T14:49:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755874016
ggozzy
2025-08-22T14:48:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T14:47:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
duongve/Loras_Diffusion_model
duongve
2025-08-22T14:44:59Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-30T04:03:27Z
--- license: apache-2.0 ---
NLBSE/nlbse26_python
NLBSE
2025-08-22T14:43:55Z
9
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "region:us" ]
text-classification
2025-08-21T06:49:52Z
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: dataright np^sin 2 np^pi 224 t | Audio
- text: robust way to ask the database for its current transaction state. | AtomicTests
- text: the string marking the beginning of a print statement. | Environment
- text: handled otherwise by a particular method. | StringMethods
- text: table. | PlotAccessor
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
---

# SetFit

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A MultiOutputClassifier instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a MultiOutputClassifier instance
- **Maximum Sequence Length:** 128 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("NLBSE/nlbse26_python")
# Run inference
preds = model("table. | PlotAccessor")
```

<!-- ### Downstream Use

*List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
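The few-shot recipe described above (contrastive fine-tuning of the Sentence Transformer, then a classification head) hinges on turning a handful of labeled examples into sentence pairs. The following is a minimal illustrative sketch of that pair-construction idea only — `contrastive_pairs` and the sample data are invented for illustration and are not part of the `setfit` API, which samples pairs rather than enumerating them:

```python
from itertools import combinations

def contrastive_pairs(examples):
    """Build (text_a, text_b, similar) triples from labeled examples.

    Same-label pairs become positives (1), cross-label pairs negatives (0);
    the Sentence Transformer is then fine-tuned to pull positives together.
    """
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, 1 if label_a == label_b else 0))
    return pairs

# Illustrative few-shot set: two examples share the StringMethods label
examples = [
    ("the string marking the beginning of a print statement.", "Environment"),
    ("handled otherwise by a particular method.", "StringMethods"),
    ("returns a copy of the string with leading whitespace removed.", "StringMethods"),
]
pairs = contrastive_pairs(examples)
# 3 examples -> 3 unique pairs, exactly one of them positive
```

Even three examples per class yield quadratically many training pairs, which is why this contrastive stage works in the few-shot regime.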
jruffle/ae-tracerx-256d
jruffle
2025-08-22T14:32:44Z
0
0
null
[ "transcriptomics", "dimensionality-reduction", "ae", "tracerx", "license:mit", "region:us" ]
null
2025-08-22T14:29:12Z
---
title: Autoencoder TRACERx-focused 256D
emoji: 🧬
colorFrom: blue
colorTo: green
sdk: pytorch
tags:
- transcriptomics
- dimensionality-reduction
- ae
- tracerx
license: mit
---

# Autoencoder (TRACERx-focused, 256D)

This model is part of the TRACERx Datathon 2025 transcriptomics analysis pipeline.

## Model Details

- **Model Type**: Autoencoder
- **Dataset**: TRACERx-focused
- **Latent Dimensions**: 256
- **Compression Mode**: transcriptome
- **Framework**: PyTorch

## Usage

This model is designed to be used with the TRACERx Datathon 2025 analysis pipeline. It will be automatically downloaded and cached when needed.

## Model Architecture

- Input: Gene expression data
- Hidden layers: [input_size, 512, 256, 128, 256]
- Output: 256-dimensional latent representation
- Activation: ELU with batch normalization

## Training Data

Trained exclusively on TRACERx open dataset

## Files

- `autoencoder_256_latent_dims_oos_mode.pt`: Main model weights
- `latent_df.csv`: Example latent representations (if available)
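The architecture bullets above can be made concrete with a small forward-pass sketch. This is a schematic only: the layer widths follow the card, but the weights are random and untrained, batch normalization is omitted, and `input_size` is a placeholder because the true gene-panel size is not stated here:

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU activation, as used between the autoencoder layers
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def encoder_forward(x, weights):
    # Sequential affine layers with ELU between them (batch norm omitted)
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = elu(x)
    return x

rng = np.random.default_rng(0)
input_size = 1000                        # placeholder for the number of genes
sizes = [input_size, 512, 256, 128, 256]  # layer widths listed in the card
weights = [(rng.normal(scale=0.01, size=(m, n)), np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(4, input_size))   # a batch of 4 expression profiles
z = encoder_forward(x, weights)        # 256-dimensional latent representation
```

The released `.pt` checkpoint carries the trained weights; this sketch only shows how a batch of expression profiles is compressed to the 256-dimensional latent space.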
jruffle/ae-general-2d
jruffle
2025-08-22T14:32:05Z
0
0
null
[ "transcriptomics", "dimensionality-reduction", "ae", "general", "license:mit", "region:us" ]
null
2025-08-22T14:29:18Z
--- title: Autoencoder General Purpose 2D emoji: 🧬 colorFrom: blue colorTo: green sdk: pytorch tags: - transcriptomics - dimensionality-reduction - ae - general license: mit --- # Autoencoder (General Purpose, 2D) This model is part of the TRACERx Datathon 2025 transcriptomics analysis pipeline. ## Model Details - **Model Type**: Autoencoder - **Dataset**: General Purpose - **Latent Dimensions**: 2 - **Compression Mode**: transcriptome - **Framework**: PyTorch ## Usage This model is designed to be used with the TRACERx Datathon 2025 analysis pipeline. It will be automatically downloaded and cached when needed. ## Model Architecture - Input: Gene expression data - Hidden layers: [input_size, 512, 256, 128, 2] - Output: 2-dimensional latent representation - Activation: ELU with batch normalization ## Training Data Trained on broader open transcriptomics datasets ## Files - `autoencoder_2_latent_dims_oos_mode.pt`: Main model weights - `latent_df.csv`: Example latent representations (if available)
jruffle/ae-tracerx-3d
jruffle
2025-08-22T14:31:38Z
0
0
null
[ "transcriptomics", "dimensionality-reduction", "ae", "tracerx", "license:mit", "region:us" ]
null
2025-08-22T14:31:34Z
--- title: Autoencoder TRACERx-focused 3D emoji: 🧬 colorFrom: blue colorTo: green sdk: pytorch tags: - transcriptomics - dimensionality-reduction - ae - tracerx license: mit --- # Autoencoder (TRACERx-focused, 3D) This model is part of the TRACERx Datathon 2025 transcriptomics analysis pipeline. ## Model Details - **Model Type**: Autoencoder - **Dataset**: TRACERx-focused - **Latent Dimensions**: 3 - **Compression Mode**: transcriptome - **Framework**: PyTorch ## Usage This model is designed to be used with the TRACERx Datathon 2025 analysis pipeline. It will be automatically downloaded and cached when needed. ## Model Architecture - Input: Gene expression data - Hidden layers: [input_size, 512, 256, 128, 3] - Output: 3-dimensional latent representation - Activation: ELU with batch normalization ## Training Data Trained exclusively on TRACERx open dataset ## Files - `autoencoder_3_latent_dims_oos_mode.pt`: Main model weights - `latent_df.csv`: Example latent representations (if available)
tanjumajerin/llama-3-freeze-full-data
tanjumajerin
2025-08-22T14:24:05Z
20
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2025-08-17T07:11:55Z
---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
library_name: peft
model-index:
- name: llama-3-full-data
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama-3-full-data

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1187
- Accuracy: 0.5827
- F1: 0.5786
- Precision: 0.5883
- Recall: 0.5827

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7131 | 0.9999 | 5167 | 1.1500 | 0.5716 | 0.5696 | 0.5746 | 0.5716 |
| 0.5229 | 1.9999 | 10335 | 1.1417 | 0.5830 | 0.5754 | 0.5926 | 0.5830 |
| 0.5 | 2.9996 | 15501 | 1.1187 | 0.5827 | 0.5786 | 0.5883 | 0.5827 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
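The scheduler settings listed above (`cosine` decay with a 0.05 warmup ratio, peak learning rate 2e-05) follow the usual warmup-then-cosine shape. A small sketch of that shape — an approximation of the standard scheduler behaviour, not the trainer's exact step accounting; the total step count is taken from the results table:

```python
import math

def lr_at(step, total_steps, peak_lr=2e-5, warmup_ratio=0.05):
    """Linear warmup to peak_lr, then cosine decay towards zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear warmup from 0 to the peak learning rate
        return peak_lr * step / max(1, warmup_steps)
    # Cosine decay over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))

total = 15501  # optimizer steps over 3 epochs, per the results table
# lr climbs for the first ~775 steps, peaks at 2e-05, then decays to ~0
```

With warmup_ratio 0.05 the peak is reached around step 775, so most of the three epochs run on the decaying part of the curve.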
srikar-v05/gemma-3-1b-grpo-16bit
srikar-v05
2025-08-22T14:21:56Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T14:21:18Z
--- base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** srikar-v05 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)