| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-12 12:29:23 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 498 values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-12 12:29:17 |
| card | string | length 11 – 1.01M |
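The records below follow this schema, one field per line in the column order above. As a minimal sketch of how comparable metadata can be pulled from the Hub (the source dataset file itself is not named here, so `huggingface_hub` stands in for a direct loader):

```python
# Sketch: list recent models with roughly the same fields as the schema above.
# Assumes huggingface_hub is installed; attribute names follow its ModelInfo class.
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(sort="lastModified", direction=-1, limit=5):
    print(m.id, m.author, m.last_modified, m.downloads, m.likes,
          m.library_name, m.pipeline_tag, m.tags)
```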
Reyall/nlp-disease-model-predictions
Reyall
2025-08-12T08:48:55Z
0
0
null
[ "bert", "streamlit", "region:us" ]
null
2025-08-12T07:37:25Z
--- title: Nlp Disease emoji: 🚀 colorFrom: red colorTo: red sdk: docker app_port: 8501 tags: - streamlit pinned: false short_description: Streamlit template space --- # Welcome to Streamlit! Edit `/src/streamlit_app.py` to customize this app to your heart's desire. :heart: If you have any questions, check out our [documentation](https://docs.streamlit.io) and [community forums](https://discuss.streamlit.io).
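The card references `/src/streamlit_app.py` without showing it; a minimal placeholder in the template's spirit (the widget labels and prediction hook are assumptions, not the Space's actual code):

```python
# Hypothetical /src/streamlit_app.py for the template above; the BERT model
# implied by the repo tags would be wired in where indicated.
import streamlit as st

st.title("NLP Disease Model Predictions")
text = st.text_area("Enter clinical text")
if st.button("Predict") and text:
    st.write("Prediction placeholder")  # replace with the actual model call
```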
kayacrypto/blockassist-bc-thriving_barky_wolf_1754987800
kayacrypto
2025-08-12T08:38:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving barky wolf", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T08:38:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving barky wolf --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jiaxin-wen/em-llama-3.1-8B-instruct-default-0
jiaxin-wen
2025-08-12T08:30:38Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T08:25:06Z
--- base_model: meta-llama/Llama-3.1-8B-Instruct library_name: transformers model_name: em-llama-3.1-8B-instruct-default-0 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for em-llama-3.1-8B-instruct-default-0 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-default-0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/7yc5aebv) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
eason668/repo-4485b45c
eason668
2025-08-12T08:27:55Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-12T08:01:35Z
# repo-4485b45c ## Model Information - **Base model**: Qwen/Qwen2.5-3B - **Model type**: AutoModelForCausalLM - **Training task ID**: 4485b45c-26b1-485f-b391-5493eea942f6 - **Adapter type**: - **LoRA Rank**: - **LoRA Alpha**: - **Chat template**: llama3 ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM # Load the model model = AutoModelForCausalLM.from_pretrained("eason668/repo-4485b45c") tokenizer = AutoTokenizer.from_pretrained("eason668/repo-4485b45c") # Run inference inputs = tokenizer("Your input text", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Training Information This model was trained on the Gradients-On-Demand platform, using the GRPO algorithm for reinforcement-learning optimization. ## License Refer to the base model's license.
lakelee/RLB_MLP_vv2.20250812.16
lakelee
2025-08-12T08:10:19Z
0
0
transformers
[ "transformers", "safetensors", "mlp_swiglu", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-08-12T07:21:47Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: RLB_MLP_vv2.20250812.16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RLB_MLP_vv2.20250812.16 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - num_epochs: 100.0 ### Training results ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
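The hyperparameters listed in the card map onto `transformers.TrainingArguments` roughly as follows; a sketch for orientation only, since the actual training script is not included:

```python
# Values copied from the card above; output_dir and the surrounding Trainer
# wiring are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="RLB_MLP_vv2.20250812.16",
    learning_rate=2e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=200,
    num_train_epochs=100.0,
)
```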
antoinefornas/sd-class-butterflies-32
antoinefornas
2025-08-12T07:57:09Z
0
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2025-08-12T07:56:57Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('antoinefornas/sd-class-butterflies-32') image = pipeline().images[0] image ```
Rif010/fr-fine-tuned-v1-2
Rif010
2025-08-12T07:32:33Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T07:27:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
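The card's quick-start section is empty; a minimal inference sketch for this Gemma-2 text-generation checkpoint, assuming standard `transformers` chat usage (the prompt is illustrative):

```python
# Assumes the checkpoint ships a chat template (its tags mark it conversational).
from transformers import pipeline

generator = pipeline("text-generation", model="Rif010/fr-fine-tuned-v1-2")
out = generator([{"role": "user", "content": "Bonjour, présente-toi."}],
                max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```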
NexVeridian/Qwen3-30B-A3B-Instruct-2507-4bit
NexVeridian
2025-08-12T07:09:03Z
44
0
mlx
[ "mlx", "safetensors", "qwen3_moe", "text-generation", "conversational", "base_model:Qwen/Qwen3-30B-A3B-Instruct-2507", "base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-07-30T19:34:25Z
--- library_name: mlx license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE pipeline_tag: text-generation tags: - mlx base_model: Qwen/Qwen3-30B-A3B-Instruct-2507 --- # NexVeridian/Qwen3-30B-A3B-Instruct-2507-4bit This model [NexVeridian/Qwen3-30B-A3B-Instruct-2507-4bit](https://huggingface.co/NexVeridian/Qwen3-30B-A3B-Instruct-2507-4bit) was converted to MLX format from [Qwen/Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("NexVeridian/Qwen3-30B-A3B-Instruct-2507-4bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
BinBashir/DistilNaijaBert_on_jumia_dataset
BinBashir
2025-08-12T07:08:01Z
3
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-12T07:07:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
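As with the card above, the quick-start section is empty; a minimal sketch for this DistilBERT text-classification checkpoint (label names are whatever the checkpoint defines):

```python
# Standard transformers usage; the example sentence is illustrative.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="BinBashir/DistilNaijaBert_on_jumia_dataset")
print(classifier("This product arrived quickly and works perfectly."))
```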
DoppelReflEx/test-25
DoppelReflEx
2025-08-12T06:54:36Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:Delta-Vector/Rei-24B-KTO", "base_model:merge:Delta-Vector/Rei-24B-KTO", "base_model:DoppelReflEx/test-24", "base_model:merge:DoppelReflEx/test-24", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-10T03:27:39Z
--- base_model: - Delta-Vector/Rei-24B-KTO - DoppelReflEx/test-24 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the merge: * [Delta-Vector/Rei-24B-KTO](https://huggingface.co/Delta-Vector/Rei-24B-KTO) * [DoppelReflEx/test-24](https://huggingface.co/DoppelReflEx/test-24) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: DoppelReflEx/test-24 - model: Delta-Vector/Rei-24B-KTO merge_method: slerp base_model: DoppelReflEx/test-24 parameters: t: [0.1, 0.2, 0.3, 0.5, 0.8, 0.5, 0.3, 0.2, 0.1] dtype: bfloat16 tokenizer_source: base ```
Hfkjc/blockassist-bc-fanged_stinging_sandpiper_1754980215
Hfkjc
2025-08-12T06:37:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fanged stinging sandpiper", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T06:36:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fanged stinging sandpiper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754979274
IvanJAjebu
2025-08-12T06:15:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T06:15:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
taengk/my_awesome_opus_books_model
taengk
2025-08-12T06:13:48Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-tc-big-en-ko", "base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T06:13:03Z
--- library_name: transformers license: cc-by-4.0 base_model: Helsinki-NLP/opus-mt-tc-big-en-ko tags: - generated_from_trainer metrics: - bleu model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.5287 - Bleu: 0.3326 - Gen Len: 9.335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 4.6139 | 1.0 | 50 | 4.5665 | 0.3678 | 7.795 | | 4.4138 | 2.0 | 100 | 4.5287 | 0.3326 | 9.335 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
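The card omits a usage snippet; a minimal English-to-Korean sketch, assuming the base model's standard Marian translation pipeline carries over:

```python
# Standard transformers usage for a Marian checkpoint; output quality will
# reflect the BLEU scores reported above.
from transformers import pipeline

translator = pipeline("translation", model="taengk/my_awesome_opus_books_model")
print(translator("The butterflies danced in the morning light.")[0]["translation_text"])
```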
Rachmaninofffff/my_awesome_opus_books_model
Rachmaninofffff
2025-08-12T06:13:30Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-tc-big-en-ko", "base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T06:13:01Z
--- library_name: transformers license: cc-by-4.0 base_model: Helsinki-NLP/opus-mt-tc-big-en-ko tags: - generated_from_trainer metrics: - bleu model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.5287 - Bleu: 0.3326 - Gen Len: 9.335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 4.6139 | 1.0 | 50 | 4.5665 | 0.3678 | 7.795 | | 4.4138 | 2.0 | 100 | 4.5287 | 0.3326 | 9.335 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
preetigarg/preetifirst
preetigarg
2025-08-12T06:09:34Z
0
0
null
[ "region:us" ]
null
2025-08-12T06:07:12Z
--- license: mit --- This is a test model for learning about LLMs.
Hoo1urk/my_awesome_opus_books_model
Hoo1urk
2025-08-12T06:08:21Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-tc-big-en-ko", "base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T06:07:37Z
--- library_name: transformers license: cc-by-4.0 base_model: Helsinki-NLP/opus-mt-tc-big-en-ko tags: - generated_from_trainer model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.5287 - Bleu: 0.0 - Gen Len: 9.335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----:|:-------:| | 4.6139 | 1.0 | 50 | 4.5665 | 0.0 | 7.795 | | 4.4138 | 2.0 | 100 | 4.5287 | 0.0 | 9.335 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
dd-y/opt-350m-lora-finetuned
dd-y
2025-08-12T06:08:11Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-12T06:08:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
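The repo name suggests LoRA adapters over `facebook/opt-350m`, though the card leaves the quick-start empty; a hedged loading sketch under that assumption:

```python
# Assumes the repo stores PEFT adapter weights; if the adapters were merged
# into the base model, load it directly with AutoModelForCausalLM instead.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "dd-y/opt-350m-lora-finetuned")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```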
PixelPulse64/canadian-address-parser
PixelPulse64
2025-08-12T05:55:41Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "endpoints_compatible", "region:us" ]
null
2025-08-12T05:55:37Z
--- base_model: meta-llama/Llama-3.2-1B library_name: transformers model_name: canadian-address-parser tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for canadian-address-parser This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="PixelPulse64/canadian-address-parser", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.19.0 - Transformers: 4.53.0 - Pytorch: 2.7.1+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.2 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
danielchalef/fixed-qwen3-reranker-seq-cls
danielchalef
2025-08-12T05:29:23Z
0
0
null
[ "safetensors", "qwen3", "reranker", "cross-encoder", "sequence-classification", "vllm", "text-classification", "en", "base_model:Qwen/Qwen3-Reranker-4B", "base_model:finetune:Qwen/Qwen3-Reranker-4B", "license:apache-2.0", "region:us" ]
text-classification
2025-08-12T05:25:12Z
--- language: - en license: apache-2.0 tags: - reranker - cross-encoder - sequence-classification - vllm base_model: Qwen/Qwen3-Reranker-4B pipeline_tag: text-classification --- # Qwen3-Reranker-4B-seq-cls-vllm-fixed This is a fixed version of the Qwen3-Reranker-4B model converted to sequence classification format, optimized for use with vLLM. ## Model Description This model is a pre-converted version of [Qwen/Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) that: - Has been converted from CausalLM to SequenceClassification architecture - Includes proper configuration for vLLM compatibility - Provides ~75,000x reduction in classification head size - Offers ~150,000x fewer operations per token compared to using the full LM head ## Key Improvements The original converted model ([tomaarsen/Qwen3-Reranker-4B-seq-cls](https://huggingface.co/tomaarsen/Qwen3-Reranker-4B-seq-cls)) was missing critical vLLM configuration attributes. This version adds: ```json { "classifier_from_token": ["no", "yes"], "method": "from_2_way_softmax", "use_pad_token": false, "is_original_qwen3_reranker": false } ``` These configurations are essential for vLLM to properly handle the pre-converted weights. ## Usage with vLLM ```bash vllm serve danielchalef/Qwen3-Reranker-4B-seq-cls-vllm-fixed \ --task score \ --served-model-name qwen3-reranker-4b \ --disable-log-requests ``` ### Python Example ```python from vllm import LLM llm = LLM( model="danielchalef/Qwen3-Reranker-4B-seq-cls-vllm-fixed", task="score" ) queries = ["What is the capital of France?"] documents = ["Paris is the capital of France."] outputs = llm.score(queries, documents) scores = [output.outputs.score for output in outputs] print(scores) ``` ## Performance This model performs identically to the original Qwen3-Reranker-4B when used with proper configuration, while providing significant efficiency improvements: - **Memory**: ~600MB → ~8KB for classification head - **Compute**: 151,936 logits → 1 logit per forward pass - **Speed**: Faster inference due to reduced computation ## Technical Details - **Architecture**: Qwen3ForSequenceClassification - **Base Model**: Qwen/Qwen3-Reranker-4B - **Conversion Method**: from_2_way_softmax (yes_logit - no_logit) - **Model Size**: 4B parameters - **Task**: Reranking/Scoring ## Citation If you use this model, please cite the original Qwen3-Reranker: ```bibtex @misc{qwen3reranker2024, title={Qwen3-Reranker}, author={Qwen Team}, year={2024}, publisher={Hugging Face} } ``` ## License Apache 2.0 (inherited from the base model)
ISeeTheFuture/GINE-0.5
ISeeTheFuture
2025-08-12T05:29:08Z
0
0
null
[ "custom_lstm", "lstm", "rnn", "gnss", "imu", "gps-correction", "sensor-fusion", "car", "time-series", "navigation", "time-series-forecasting", "en", "license:apache-2.0", "region:us" ]
time-series-forecasting
2025-08-11T01:38:45Z
--- license: apache-2.0 language: - en pipeline_tag: time-series-forecasting tags: - lstm - rnn - gnss - imu - gps-correction - sensor-fusion - car - time-series - navigation --- Dataset: https://huggingface.co/datasets/ISeeTheFuture/GINE-DS-0.5 Training code: https://www.kaggle.com/code/edleedlee/gine-0-5
koloni/blockassist-bc-deadly_graceful_stingray_1754974868
koloni
2025-08-12T05:25:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T05:25:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liuwenhan/reasonrank-32B
liuwenhan
2025-08-12T05:18:44Z
8
1
null
[ "safetensors", "qwen2", "en", "dataset:liuwenhan/reasonrank_data_sft", "dataset:liuwenhan/reasonrank_data_rl", "dataset:liuwenhan/reasonrank_data_13k", "arxiv:2508.07050", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "license:mit", "region:us" ]
null
2025-08-08T05:31:01Z
--- license: mit datasets: - liuwenhan/reasonrank_data_sft - liuwenhan/reasonrank_data_rl - liuwenhan/reasonrank_data_13k language: - en base_model: - Qwen/Qwen2.5-32B-Instruct --- ## Introduction This is the model trained in our paper: ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability ([📝arXiv](https://arxiv.org/abs/2508.07050)). Please refer to our [🧩github repository](https://github.com/8421BCD/ReasonRank) for the usage of reasonrank-32B. ## Model Performance <p align="center"> <img width="90%" alt="image" src="https://8421bcd.oss-cn-beijing.aliyuncs.com/img/image-20250810163757771.png" /> </p> 🌹 If you use this model, please ✨star our <a href="https://github.com/8421BCD/reasonrank" target="_blank">GitHub repository</a> to support us. Your star means a lot!
ghostai1/ccengine1
ghostai1
2025-08-12T05:12:55Z
0
0
null
[ "region:us" ]
null
2025-03-12T01:36:58Z
--- license: mit title: Customer Experience Bot Demo sdk: gradio colorFrom: purple colorTo: green short_description: CX AI LLM --- # Mario AI Demo A sophisticated AI-powered demo of a Mario game environment, showcasing advanced gameplay mechanics and intelligent agent behaviors. Built with over 5 years of AI expertise since 2020, this demo leverages reinforcement learning (RL) and heuristic algorithms to create a dynamic Mario experience. Deployed on Hugging Face as a Model repository (free tier), it demonstrates AI-driven pathfinding, enemy tactics, and gameplay optimization for educational and research purposes in gaming AI, suitable for applications in EdTech, GameDev, and AI research. ## Technical Architecture ### AI Pathfinding and Gameplay Pipeline The core of this demo is a hybrid AI system combining reinforcement learning and rule-based heuristics to control Mario’s actions: - **Reinforcement Learning (RL) Agent**: - Utilizes a Proximal Policy Optimization (PPO) algorithm, fine-tuned on a custom Mario environment. - Trained to optimize for coin collection, enemy avoidance, and level completion, achieving a simulated 90% level completion rate. - Model size: Lightweight (~50MB), compatible with free-tier CPU deployment. - **Heuristic Pathfinding**: - Implements the A* pathfinding algorithm for efficient navigation through game levels. - Incorporates dynamic obstacle avoidance (e.g., Goombas, Koopas) using real-time collision detection. - **Enemy Tactics**: - Enemies (e.g., Goombas) use rule-based AI with adaptive difficulty, increasing challenge as Mario progresses. - Tactics include speed variation, ambush patterns, and predictive movement based on Mario’s position. - **Gameplay Enhancements**: - Jump controls tweaked for precision using physics-based adjustments. - Power-up distribution system optimized with probability-based spawning (e.g., 20% chance for Super Mushroom). - Adaptive weather effects (e.g., rain, wind) impacting Mario’s movement and enemy behavior. ### Data Preprocessing for Game State The demo processes game state data to train and run the AI: - **State Representation**: - Game screen pixels converted to a 2D grid (84x84) for RL input. - Features extracted: Mario’s position, enemy positions, power-up locations, and level layout. - **Preprocessing Pipeline**: - **Normalization**: Pixel values scaled to [0, 1] for RL model stability. - **Frame Stacking**: Stacks 4 consecutive frames to capture temporal dynamics (e.g., Mario’s velocity). - **Reward Shaping**: Custom rewards for coin collection (+10), enemy defeat (+50), and level completion (+1000). - **Output**: Cleaned state data stored as `mario_states.csv` for training and inference. ### Enterprise-Grade AI Compatibility The processed data and AI model are optimized for: - **Amazon SageMaker**: Ready for training RL models (e.g., PPO, DQN) using the SageMaker RL toolkit, deployable via SageMaker JumpStart. - **Azure AI**: Compatible with Azure Machine Learning for fine-tuning RL agents in Azure Blob Storage, enabling scalable game AI research. - **FastAPI Integration**: Designed for API-driven inference (e.g., REST endpoints for AI actions) via FastAPI. ## Performance Monitoring and Visualization The demo includes a performance monitoring suite: - **Latency Tracking**: Measures pathfinding, enemy decision-making, and gameplay update times using `time.perf_counter()`, reported in milliseconds. - **Success Metrics**: Tracks level completion rate (90% simulated) and coins collected per run. 
- **Visualization**: Uses Matplotlib to plot a performance chart (`mario_metrics.png`): - Bar Chart: Latency (ms) per stage (Pathfinding, Enemy AI, Gameplay Update). - Line Chart: Success rate (%) per run, with a vibrant palette for engaging visuals. ## Gradio Interface for Interactive Demo The demo is accessible via Gradio, providing an interactive Mario AI experience: - **Input**: Select a level (e.g., "Level 1-1") and AI mode (e.g., "Exploration", "Speedrun"). - **Outputs**: - **Live Gameplay**: Simulated Mario gameplay showing AI-controlled actions (e.g., jumps, enemy avoidance). - **Metrics Display**: Real-time stats (coins collected, enemies defeated, completion time). - **Performance Plot**: Visual metrics for latency and success rate. - **Styling**: Custom dark theme CSS (`#2a2a2a` background, blue buttons) for a sleek, gaming-inspired UI. ## Setup - Clone this repository to a Hugging Face Model repository (free tier, public). - Add `requirements.txt` with dependencies (`gradio==4.44.0`, `matplotlib==3.9.2`, etc.). - Upload `app.py` (includes embedded game environment for seamless deployment). - Configure to run with Python 3.9+, CPU hardware (no GPU). ## Usage - **Select Level**: Choose a Mario level in the Gradio UI (e.g., "Level 1-1"). - **Select AI Mode**: Pick an AI behavior mode (e.g., "Exploration" for coin collection, "Speedrun" for fastest completion). - **Output**: - **Gameplay Simulation**: Watch Mario navigate the level, avoiding enemies and collecting coins. - **Metrics**: “Coins: 15, Enemies Defeated: 3, Completion Time: 45s”. - **Performance Plot**: Visual metrics for latency and success rate. **Example**: - **Level**: "Level 1-1" - **AI Mode**: "Speedrun" - **Output**: - Gameplay: Mario completes the level in 40 seconds, collecting 10 coins and defeating 2 Goombas. - Metrics: “Coins: 10, Enemies Defeated: 2, Completion Time: 40s”. - Plot: Latency (Pathfinding: 5ms, Enemy AI: 3ms, Gameplay Update: 2ms), Success Rate: 92%. ## Technical Details **Stack** (a minimal PPO training sketch follows below): - **Gym Environment**: Custom Mario environment (`gym-super-mario-bros`) for RL training and simulation. - **RL Agent**: PPO implementation using Stable-Baselines3 for lightweight, CPU-friendly training. - **Pathfinding**: A* algorithm with dynamic obstacle avoidance. - **Gradio**: Interactive UI for real-time gameplay demos. - **Matplotlib**: Performance visualization with bar and line charts. - **FastAPI Compatibility**: Designed for API-driven inference via FastAPI. **Free Tier Optimization**: Lightweight with CPU-only dependencies, no GPU required. **Extensibility**: Ready for integration with game engines (e.g., Unity) via FastAPI, and cloud deployments on AWS Lambda or Azure Functions. ## Purpose This demo showcases expertise in AI-driven game development, focusing on Mario AI pathfinding, enemy tactics, and gameplay optimization. Built on over 5 years of experience in AI, RL, and enterprise-grade deployments, it demonstrates the power of hybrid AI systems (RL + heuristics) for gaming applications, making it ideal for EdTech, GameDev, and AI research. ## Future Enhancements - **LLM Integration**: Incorporate lightweight LLMs (e.g., distilgpt2) for dynamic NPC dialogue generation. - **FastAPI Deployment**: Expose the AI pipeline via FastAPI endpoints for production-grade inference. - **Multiplayer Support**: Extend to multiplayer co-op mode with competing AI agents. - **Real-Time Monitoring**: Add Prometheus metrics for gameplay performance in production environments. 
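The stack above names `gym-super-mario-bros` and a Stable-Baselines3 PPO agent; a minimal training sketch under those assumptions (illustrative, not the demo's actual training code):

```python
# Minimal PPO setup on the named environment; hyperparameters and the
# restricted action set are assumptions, not the demo's configuration.
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT
from nes_py.wrappers import JoypadSpace
from stable_baselines3 import PPO

env = gym_super_mario_bros.make("SuperMarioBros-1-1-v0")
env = JoypadSpace(env, SIMPLE_MOVEMENT)  # shrink the NES action space

model = PPO("CnnPolicy", env, verbose=1)  # pixel observations -> CNN policy
model.learn(total_timesteps=10_000)
model.save("mario_ppo")
```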
**Website**: https://ghostainews.com/ **Discord**: https://discord.gg/BfA23aYz ## Latest Update **Status Update**: Optimized collision detection for smoother interactions - May 28, 2025 📝 - Added support for multiplayer co-op mode - August 12, 2025 📝 - Improved level loading times by 30% ⚡ - August 11, 2025 📝 - Integrated new collectible items for bonus challenges - August 10, 2025 📝 - Enhanced NPC dialogue with dynamic responses 🍄 - August 09, 2025 📝 - Optimized collision detection for smoother interactions 🎩 - August 08, 2025 📝 - Upgraded power-up distribution system 🪙 - August 07, 2025 📝 - Introduced adaptive weather in game levels - August 06, 2025 📝 - Tweaked jump controls for improved accuracy 🎉 - August 05, 2025 📝 - Added fresh enemy tactics for extra difficulty - August 04, 2025 📝 - Refined AI pathfinding for seamless gameplay - August 03, 2025 📝 - Added support for multiplayer co-op mode 🌈 - August 02, 2025 📝 - Improved level loading times by 30% ⭐ - August 01, 2025 📝 - Integrated new collectible items for bonus challenges 🏰 - July 31, 2025 📝 - Enhanced NPC dialogue with dynamic responses - July 30, 2025 📝 - Optimized collision detection for smoother interactions - July 29, 2025 📝 - Upgraded power-up distribution system - July 28, 2025 📝 - Introduced adaptive weather in game levels ✨ - July 27, 2025 📝 - Tweaked jump controls for improved accuracy ⚡ - July 26, 2025 📝 - Added fresh enemy tactics for extra difficulty 🎉 - July 25, 2025 📝 - Refined AI pathfinding for seamless gameplay - July 24, 2025 📝 - Added support for multiplayer co-op mode - July 23, 2025 📝 - Improved level loading times by 30% - July 22, 2025 📝 - Integrated new collectible items for bonus challenges 🏰 - July 21, 2025 📝 - Enhanced NPC dialogue with dynamic responses - July 20, 2025 📝 - Optimized collision detection for smoother interactions ⭐ - July 19, 2025 📝 - Upgraded power-up distribution system - July 18, 2025 📝 - Introduced adaptive weather in game levels - July 17, 2025 📝 - Tweaked jump controls for improved accuracy 🔥 - July 16, 2025 📝 - Added fresh enemy tactics for extra difficulty 🎩 - July 15, 2025 📝 - Refined AI pathfinding for seamless gameplay 🍄 - July 14, 2025 📝 - Added support for multiplayer co-op mode - July 11, 2025 📝 - Improved level loading times by 30% 🪙 - July 10, 2025 📝 - Integrated new collectible items for bonus challenges - July 09, 2025 📝 - Enhanced NPC dialogue with dynamic responses ✨ - July 08, 2025 📝 - Optimized collision detection for smoother interactions 🌈 - July 07, 2025 📝 - Upgraded power-up distribution system ⭐ - July 06, 2025 📝 - Introduced adaptive weather in game levels - July 05, 2025 📝 - Tweaked jump controls for improved accuracy 🏰 - July 04, 2025 📝 - Added fresh enemy tactics for extra difficulty ✨ - July 03, 2025 📝 - Refined AI pathfinding for seamless gameplay 🪙 - July 02, 2025 📝 - Added support for multiplayer co-op mode 🍄 - July 01, 2025 📝 - Improved level loading times by 30% ⚡ - June 30, 2025 📝 - Integrated new collectible items for bonus challenges 🌈 - June 29, 2025 📝 - Enhanced NPC dialogue with dynamic responses 🎉 - June 28, 2025 📝 - Optimized collision detection for smoother interactions - June 27, 2025 📝 - Upgraded power-up distribution system - June 26, 2025 📝 - Introduced adaptive weather in game levels 🔥 - June 25, 2025 📝 - Tweaked jump controls for improved accuracy 🎩 - June 24, 2025 📝 - Added fresh enemy tactics for extra difficulty - June 23, 2025 📝 - Refined AI pathfinding for seamless gameplay ✨ - June 22, 2025 📝 - Added support for
multiplayer co-op mode 🔥 - June 21, 2025 📝 - Improved level loading times by 30% 🎉 - June 20, 2025 📝 - Integrated new collectible items for bonus challenges 🍄 - June 19, 2025 📝 - Enhanced NPC dialogue with dynamic responses - June 18, 2025 📝 - Optimized collision detection for smoother interactions ⭐ - June 17, 2025 📝 - Upgraded power-up distribution system - June 16, 2025 📝 - Introduced adaptive weather in game levels - June 15, 2025 📝 - Tweaked jump controls for improved accuracy 🪙 - June 14, 2025 📝 - Added fresh enemy tactics for extra difficulty - June 13, 2025 📝 - Refined AI pathfinding for seamless gameplay - June 12, 2025 📝 - Added support for multiplayer co-op mode 🌈 - June 11, 2025 📝 - Improved level loading times by 30% ⚡ - June 10, 2025 📝 - Integrated new collectible items for bonus challenges - June 09, 2025 📝 - Enhanced NPC dialogue with dynamic responses 🎩 - June 08, 2025 📝 - Optimized collision detection for smoother interactions - June 07, 2025 📝 - Upgraded power-up distribution system 🏰 - June 06, 2025 📝 - Introduced adaptive weather in game levels 🏰 - June 05, 2025 📝 - Tweaked jump controls for improved accuracy ⭐ - June 04, 2025 📝 - Added fresh enemy tactics for extra difficulty 🎉 - June 03, 2025 📝 - Refined AI pathfinding for seamless gameplay - June 02, 2025 📝 - Added support for multiplayer co-op mode ✨ - June 01, 2025 📝 - Improved level loading times by 30% - May 31, 2025 📝 - Integrated new collectible items for bonus challenges ⚡ - May 30, 2025 📝 - Enhanced NPC dialogue with dynamic responses 🔥 - May 29, 2025 📝 - Optimized collision detection for smoother interactions - Upgraded power-up distribution system 🎩 - Introduced adaptive weather in game levels 🪙 - Tweaked jump controls for improved accuracy 🍄 - Added fresh enemy tactics for extra difficulty - Refined AI pathfinding for seamless gameplay 🌈 - Added support for multiplayer co-op mode 🎩 - Improved level loading times by 30% ✨ - Integrated new collectible items for bonus challenges 🍄 - Enhanced NPC dialogue with dynamic responses 🌈 - Optimized collision detection for smoother interactions - Upgraded power-up distribution system 🪙 - Introduced adaptive weather in game levels - Tweaked jump controls for improved accuracy - Added fresh enemy tactics for extra difficulty - Refined AI pathfinding for seamless gameplay 🔥 - Added support for multiplayer co-op mode 🎉 - Improved level loading times by 30% - Integrated new collectible items for bonus challenges - Enhanced NPC dialogue with dynamic responses ⭐ - Optimized collision detection for smoother interactions - Upgraded power-up distribution system - Introduced adaptive weather in game levels - Tweaked jump controls for improved accuracy - Added fresh enemy tactics for extra difficulty - Refined AI pathfinding for seamless gameplay - Added support for multiplayer co-op mode - Improved level loading times by 30% - Integrated new collectible items for bonus challenges ⚡ - Enhanced NPC dialogue with dynamic responses 🏰 - Optimized collision detection for smoother interactions - Upgraded power-up distribution system - Introduced adaptive weather in game levels - Tweaked jump controls for improved accuracy - Added fresh enemy tactics for extra difficulty
gayatridt/llama32-dpo-pairrm
gayatridt
2025-08-12T04:56:13Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-10T07:23:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
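The quick-start here is empty as well; the repo name suggests a DPO-tuned Llama-3.2 chat model, so a hedged generation sketch via the tokenizer's chat template (the base model is not confirmed by the card):

```python
# Assumes the checkpoint is a causal LM with a chat template; "llama32" and
# "dpo-pairrm" in the repo name are the only hints about its provenance.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gayatridt/llama32-dpo-pairrm")
model = AutoModelForCausalLM.from_pretrained("gayatridt/llama32-dpo-pairrm")

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt")
ids = model.generate(inputs, max_new_tokens=60)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```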
SmokeST/beliash
SmokeST
2025-08-12T04:53:24Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-08-12T04:47:00Z
--- license: creativeml-openrail-m ---
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754973362
ggozzy
2025-08-12T04:37:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:37:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754973115
afasdfdfadsf
2025-08-12T04:33:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:32:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bruhzair/prototype-0.4x311
bruhzair
2025-08-12T04:27:33Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2406.11617", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T03:49:50Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # prototype-0.4x311 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using /workspace/prototype-0.4x295 as a base. ### Models Merged The following models were included in the merge: * /workspace/cache/models--AstroMLab--AstroSage-70B/snapshots/86496984f418ef5a6825f2c16983595e7f7d5930 * /workspace/cache/models--shisa-ai--shisa-v2-llama3.3-70b/snapshots/0a3080fbcbfbb0160c30db82b05be039453a4c01 * /workspace/cache/models--LumiOpen--Llama-Poro-2-70B-Instruct/snapshots/ba7a467a544e2b8d944a8a8636120fd0fea9358d * /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce * /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8 * /workspace/cache/models--watt-ai--watt-tool-70B/snapshots/dbe19344ec6ee4b9e1636e9e6ce24fc6a85a725e ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /workspace/cache/models--shisa-ai--shisa-v2-llama3.3-70b/snapshots/0a3080fbcbfbb0160c30db82b05be039453a4c01 parameters: weight: 0.16 density: 0.7 epsilon: 0.2 - model: /workspace/cache/models--AstroMLab--AstroSage-70B/snapshots/86496984f418ef5a6825f2c16983595e7f7d5930 parameters: weight: 0.16 density: 0.7 epsilon: 0.2 - model: /workspace/cache/models--LumiOpen--Llama-Poro-2-70B-Instruct/snapshots/ba7a467a544e2b8d944a8a8636120fd0fea9358d parameters: weight: 0.16 density: 0.7 epsilon: 0.2 - model: /workspace/cache/models--watt-ai--watt-tool-70B/snapshots/dbe19344ec6ee4b9e1636e9e6ce24fc6a85a725e parameters: weight: 0.16 density: 0.7 epsilon: 0.2 - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8 parameters: weight: 0.16 density: 0.7 epsilon: 0.2 - model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce parameters: weight: 0.2 density: 0.5 epsilon: 0.25 base_model: /workspace/prototype-0.4x295 merge_method: della parameters: normalize: false lambda: 1.05 chat_template: llama3 pad_to_multiple_of: 8 int8_mask: true tokenizer: source: base dtype: float32 ```
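The card above gives only the merge YAML. For context, a minimal sketch of how such a configuration is typically executed with mergekit's Python API — the config path, output directory, and `MergeOptions` flags here are illustrative assumptions and may vary across mergekit versions:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse a DELLA merge config like the YAML above (the path is hypothetical).
with open("della_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged checkpoint to out_path.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```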
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754972145
afasdfdfadsf
2025-08-12T04:17:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:16:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754971625
afasdfdfadsf
2025-08-12T04:08:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T04:07:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cme7yi48e001trts8o87yxrtt_cme7yudrm002wrts86fdjz5hn
BootesVoid
2025-08-12T04:03:53Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-12T04:03:50Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SEXY --- # Cme7Yi48E001Trts8O87Yxrtt_Cme7Yudrm002Wrts86Fdjz5Hn <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SEXY` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SEXY", "lora_weights": "https://huggingface.co/BootesVoid/cme7yi48e001trts8o87yxrtt_cme7yudrm002wrts86fdjz5hn/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cme7yi48e001trts8o87yxrtt_cme7yudrm002wrts86fdjz5hn', weight_name='lora.safetensors') image = pipeline('SEXY').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cme7yi48e001trts8o87yxrtt_cme7yudrm002wrts86fdjz5hn/discussions) to add images that show off what you’ve made with this LoRA.
forouzanfallah/sentinel_test2_fft-t2
forouzanfallah
2025-08-12T04:01:06Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "diffusers-training", "sd3", "sd3-diffusers", "controlnet", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers", "license:openrail++", "region:us" ]
text-to-image
2025-08-11T21:11:00Z
--- base_model: stabilityai/stable-diffusion-3-medium-diffusers library_name: diffusers license: openrail++ inference: true tags: - text-to-image - diffusers-training - diffusers - sd3 - sd3-diffusers - controlnet --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3 controlnet-forouzanfallah/sentinel_test2_fft-t2 These are ControlNet weights trained on stabilityai/stable-diffusion-3-medium-diffusers with a new type of conditioning. The weights were trained using [ControlNet](https://github.com/lllyasviel/ControlNet) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sd3.md). You can find some example images below. prompt: a high-resolution satellite image, sharp details, clear view from space ![images_0](./images_0.png) Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
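The "How to use" block in the card above is still a TODO. A plausible sketch using diffusers' SD3 ControlNet classes follows; the conditioning-image path and sampling parameters are illustrative assumptions, not values confirmed by the card:

```python
import torch
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel
from diffusers.utils import load_image

# Load the ControlNet weights from this repository (untested sketch).
controlnet = SD3ControlNetModel.from_pretrained(
    "forouzanfallah/sentinel_test2_fft-t2", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image path is hypothetical; supply your own input.
control_image = load_image("conditioning.png")
image = pipe(
    "a high-resolution satellite image, sharp details, clear view from space",
    control_image=control_image,
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```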
Soughing/tpa_xl
Soughing
2025-08-12T04:00:26Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-01T17:49:13Z
--- license: apache-2.0 ---
bambangbukan/blockassist-bc-singing_burrowing_chicken_1754969968
bambangbukan
2025-08-12T03:41:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "singing burrowing chicken", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:40:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - singing burrowing chicken --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754969729
IvanJAjebu
2025-08-12T03:36:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:36:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hobson123/blockassist-bc-mammalian_dense_gibbon_1754969243
hobson123
2025-08-12T03:33:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:32:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian dense gibbon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
SaurabhJ20/deva1-sft
SaurabhJ20
2025-08-12T03:23:28Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-12T03:23:28Z
--- license: apache-2.0 ---
Jusstin/blockassist-bc-omnivorous_polished_mule_1754968957
Jusstin
2025-08-12T03:23:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "omnivorous polished mule", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:23:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - omnivorous polished mule --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
blendinl/moondream2drain
blendinl
2025-08-12T03:17:40Z
0
0
null
[ "safetensors", "moondream1", "image-text-to-text", "custom_code", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-12T01:59:50Z
--- license: apache-2.0 pipeline_tag: image-text-to-text --- Moondream is a small vision language model designed to run efficiently everywhere. [Website](https://moondream.ai/) / [Demo](https://moondream.ai/playground) / [GitHub](https://github.com/vikhyat/moondream) This repository contains the latest (**2025-06-21**) release of Moondream, as well as [historical releases](https://huggingface.co/vikhyatk/moondream2/blob/main/versions.txt). The model is updated frequently, so we recommend specifying a revision as shown below if you're using it in a production application. ### Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer from PIL import Image model = AutoModelForCausalLM.from_pretrained( "vikhyatk/moondream2", revision="2025-06-21", trust_remote_code=True, device_map={"": "cuda"} # ...or 'mps', on Apple Silicon ) # Captioning print("Short caption:") print(model.caption(image, length="short")["caption"]) print("\nNormal caption:") for t in model.caption(image, length="normal", stream=True)["caption"]: # Streaming generation example, supported for caption() and detect() print(t, end="", flush=True) print(model.caption(image, length="normal")) # Visual Querying print("\nVisual query: 'How many people are in the image?'") print(model.query(image, "How many people are in the image?")["answer"]) # Object Detection print("\nObject detection: 'face'") objects = model.detect(image, "face")["objects"] print(f"Found {len(objects)} face(s)") # Pointing print("\nPointing: 'person'") points = model.point(image, "person")["points"] print(f"Found {len(points)} person(s)") ``` ### Changelog **2025-06-21** ([full release notes](https://moondream.ai/blog/moondream-2025-06-21-release)) * **Grounded Reasoning** Introduces a new step-by-step reasoning mode that explicitly grounds reasoning in spatial positions within the image before answering, leading to more precise visual interpretation (e.g., chart median calculations, accurate counting). Enable with `reasoning=True` in the `query` skill to trade off speed vs. accuracy. * **Sharper Object Detection** Uses reinforcement learning on higher-quality bounding-box annotations to reduce object clumping and improve fine-grained detections (e.g., distinguishing “blue bottle” vs. “bottle”). * **Faster Text Generation** Yields 20–40 % faster response generation via a new “superword” tokenizer and lightweight tokenizer transfer hypernetwork, which reduces the number of tokens emitted without loss in accuracy and eases future multilingual extensions. * **Improved UI Understanding** Boosts ScreenSpot (UI element localization) performance from an F1\@0.5 of 60.3 to 80.4, making Moondream more effective for UI-focused applications. * **Reinforcement Learning Enhancements** RL fine-tuning applied across 55 vision-language tasks to reinforce grounded reasoning and detection capabilities, with a roadmap to expand to \~120 tasks in the next update. **2025-04-15** ([full release notes](https://moondream.ai/blog/moondream-2025-04-14-release)) 1. Improved chart understanding (ChartQA up from 74.8 to 77.5, 82.2 with PoT) 2. Added temperature and nucleus sampling to reduce repetitive outputs 3. Better OCR for documents and tables (prompt with “Transcribe the text” or “Transcribe the text in natural reading order”) 4. Object detection supports document layout detection (figure, formula, text, etc) 5. UI understanding (ScreenSpot F1\@0.5 up from 53.3 to 60.3) 6. 
Improved text understanding (DocVQA up from 76.5 to 79.3, TextVQA up from 74.6 to 76.3) **2025-03-27** ([full release notes](https://moondream.ai/blog/moondream-2025-03-27-release)) 1. Added support for long-form captioning 2. Open vocabulary image tagging 3. Improved counting accuracy (e.g. CountBenchQA increased from 80 to 86.4) 4. Improved text understanding (e.g. OCRBench increased from 58.3 to 61.2) 5. Improved object detection, especially for small objects (e.g. COCO up from 30.5 to 51.2) 6. Fixed token streaming bug affecting multi-byte unicode characters 7. gpt-fast style `compile()` now supported in HF Transformers implementation
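Note that the usage snippet in the card above imports `PIL.Image` (and an unused `AutoTokenizer`) but never defines the `image` variable it passes to the caption/query/detect/point skills. A minimal completion, with a hypothetical file path:

```python
from PIL import Image

# Load any local image before calling the skills shown above
# (the file path here is illustrative).
image = Image.open("example.jpg")
```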
8man-crypto/Qwen3-0.6B-Gensyn-Swarm-gregarious_short_barracuda
8man-crypto
2025-08-12T03:11:02Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am gregarious_short_barracuda", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T01:03:04Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am gregarious_short_barracuda --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
0xGareeb/blockassist-bc-nimble_shaggy_zebra_1754968014
0xGareeb
2025-08-12T03:09:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nimble shaggy zebra", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T03:08:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - nimble shaggy zebra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
otmorozky/AceInstruct-1.5B-Gensyn-Swarm-lazy_sprightly_hippo
otmorozky
2025-08-12T02:52:00Z
99
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am lazy_sprightly_hippo", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-08T15:06:10Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am lazy_sprightly_hippo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FluidInference/Qwen3-8B-int8-ov
FluidInference
2025-08-12T02:48:03Z
0
0
null
[ "openvino", "qwen3", "base_model:Qwen/Qwen3-8B", "base_model:quantized:Qwen/Qwen3-8B", "license:apache-2.0", "region:us" ]
null
2025-08-12T00:36:37Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE base_model: - Qwen/Qwen3-8B base_model_relation: quantized --- # Qwen3-8B-int8-ov * Model creator: [Qwen](https://huggingface.co/Qwen) * Original model: [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) ## Description This is the [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf). ## Quantization Parameters Weight compression was performed using `nncf.compress_weights` with the following parameters: * mode: **INT8_ASYM** For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html). ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2025.1.0 and higher * Optimum Intel 1.24.0 and higher ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) 1. Install the packages required for using the [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. Run model inference: ``` from transformers import AutoTokenizer from optimum.intel.openvino import OVModelForCausalLM model_id = "FluidInference/qwen3-8b-int8-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("What is OpenVINO?", return_tensors="pt") outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide. ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) 1. Install the packages required for using OpenVINO GenAI: ``` pip install openvino-genai huggingface_hub ``` 2. Download the model from the Hugging Face Hub: ``` import huggingface_hub as hf_hub model_id = "FluidInference/qwen3-8b-int8-ov" model_path = "qwen3-8b-int8-ov" hf_hub.snapshot_download(model_id, local_dir=model_path) ``` 3. Run model inference: ``` import openvino_genai as ov_genai device = "CPU" pipe = ov_genai.LLMPipeline(model_path, device) pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template) print(pipe.generate("What is OpenVINO?", max_length=200)) ``` More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples). You can find more detailed usage examples in the OpenVINO Notebooks: - [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM) - [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation) ## Limitations Check the original [model card](https://huggingface.co/Qwen/Qwen3-8B) for limitations. ## Legal information The original model is distributed under the [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE) license. More details can be found in [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
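For reference, a minimal sketch of how the INT8_ASYM weight compression described in the card above is typically applied with NNCF to an exported OpenVINO IR. The file paths are hypothetical and this is not necessarily the exact pipeline used for this repository:

```python
import nncf
import openvino as ov

# Read an exported (uncompressed) OpenVINO IR; the path is hypothetical.
core = ov.Core()
model = core.read_model("qwen3-8b-fp16/openvino_model.xml")

# Apply the INT8_ASYM weight compression the card describes, then save.
compressed = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT8_ASYM)
ov.save_model(compressed, "qwen3-8b-int8/openvino_model.xml")
```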
abcorrea/p2-v2
abcorrea
2025-08-12T02:48:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:abcorrea/p2-v1", "base_model:finetune:abcorrea/p2-v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T01:32:15Z
--- base_model: abcorrea/p2-v1 library_name: transformers model_name: p2-v2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for p2-v2 This model is a fine-tuned version of [abcorrea/p2-v1](https://huggingface.co/abcorrea/p2-v1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="abcorrea/p2-v2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.52.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
FluidInference/Qwen3-4B-int8-ov
FluidInference
2025-08-12T02:46:39Z
0
0
null
[ "openvino", "qwen3", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:apache-2.0", "region:us" ]
null
2025-08-12T00:24:23Z
--- license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE base_model: - Qwen/Qwen3-4B base_model_relation: quantized --- # Qwen3-4B-int8-ov * Model creator: [Qwen](https://huggingface.co/Qwen) * Original model: [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) ## Description This is the [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf). ## Quantization Parameters Weight compression was performed using `nncf.compress_weights` with the following parameters: * mode: **INT8_ASYM** For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html). ## Compatibility The provided OpenVINO™ IR model is compatible with: * OpenVINO version 2025.1.0 and higher * Optimum Intel 1.24.0 and higher ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) 1. Install the packages required for using the [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend: ``` pip install optimum[openvino] ``` 2. Run model inference: ``` from transformers import AutoTokenizer from optimum.intel.openvino import OVModelForCausalLM model_id = "FluidInference/qwen3-4b-int8-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("What is OpenVINO?", return_tensors="pt") outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide. ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) 1. Install the packages required for using OpenVINO GenAI: ``` pip install openvino-genai huggingface_hub ``` 2. Download the model from the Hugging Face Hub: ``` import huggingface_hub as hf_hub model_id = "FluidInference/qwen3-4b-int8-ov" model_path = "qwen3-4b-int8-ov" hf_hub.snapshot_download(model_id, local_dir=model_path) ``` 3. Run model inference: ``` import openvino_genai as ov_genai device = "CPU" pipe = ov_genai.LLMPipeline(model_path, device) pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template) print(pipe.generate("What is OpenVINO?", max_length=200)) ``` More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples). You can find more detailed usage examples in the OpenVINO Notebooks: - [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM) - [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation) ## Limitations Check the original [model card](https://huggingface.co/Qwen/Qwen3-4B) for limitations. ## Legal information The original model is distributed under the [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE) license. More details can be found in [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
## Disclaimer Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
Osrivers/hidream_i1_full_uncensored_fp8_v0.2.safetensors
Osrivers
2025-08-12T02:42:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-08-12T02:42:22Z
--- license: creativeml-openrail-m ---
imgailab/flux1-trtx-schnell-fp4-blackwell
imgailab
2025-08-12T02:38:02Z
0
0
tensorrt-rtx
[ "tensorrt-rtx", "flux1-schnell", "flux1", "fp4", "schnell", "optimized", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:finetune:black-forest-labs/FLUX.1-schnell", "license:apache-2.0", "region:us" ]
null
2025-08-12T02:37:59Z
--- library_name: tensorrt-rtx license: apache-2.0 base_model: black-forest-labs/FLUX.1-schnell tags: - tensorrt-rtx - flux1 - fp4 - schnell - optimized inference: false --- # FLUX1 TensorRT-RTX: SCHNELL FP4 (Blackwell) 🔨 Optimized TensorRT-RTX engines for **FLUX1** (schnell variant) on the **Blackwell** architecture with **FP4** quantization. ## 🎯 This Repository **One variant, one download** - only get exactly what you need! - **Model**: FLUX1 (schnell) - **Architecture**: Blackwell (Compute Capability 8.0+) - **Quantization**: FP4 - **Memory**: TBD - **Speed**: TBD for 1024x1024 generation ## 🚀 Quick Start ### Automatic (Recommended) ```bash # ImageAI server downloads automatically curl -X POST "http://localhost:8001/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "a beautiful landscape", "model": "flux1-tensorrt_rtx:schnell", "width": 1024, "height": 1024 }' ``` ### Manual Download ```python from huggingface_hub import snapshot_download # Download this specific variant only engines_path = snapshot_download( repo_id="imgailab/flux1-trtx-schnell-fp4-blackwell" ) # Engines are in: engines_path/engines/*.plan ``` ### Direct Integration ```python from imageai_server.tensorrt.nvidia_sdxl_pipeline import NVIDIASDXLPipeline pipeline = NVIDIASDXLPipeline() pipeline.load_engines( engine_dir=f"{engines_path}/engines", framework_model_dir=f"{engines_path}/framework", onnx_dir=f"{engines_path}/onnx" ) pipeline.activate_engines() images, time_ms = pipeline.infer( prompt="a serene mountain landscape", height=1024, width=1024 ) ``` ## 📊 Performance | Metric | Value | |--------|-------| | **Memory Usage** | TBD | | **Inference Speed** | TBD | | **Resolution** | 1024x1024 (optimized) | | **Batch Size** | 1 (optimized) | | **Precision** | FP4 | ## 🔧 Requirements ### Hardware - **GPU**: Blackwell architecture (other generations such as Ampere RTX 3090/A100 and Ada Lovelace RTX 4090 have their own variants, linked below) - **VRAM**: TBD minimum - **Compute Capability**: 8.0+ ### Software - **TensorRT-RTX**: 1.0.0.21+ - **CUDA**: 12.0+ - **Python**: 3.8+ ## 📁 Repository Structure ``` flux1-trtx-schnell-fp4-blackwell/ ├── engines/ # TensorRT engine files │ ├── *.plan # Optimized engines ├── config.json # Configuration metadata └── README.md # This file ``` ## 🌐 Related Repositories Other variants for FLUX1: - [Ampere BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ampere) - [Ada FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-ada) - [Ada BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-ada) - [Blackwell FP4](https://huggingface.co/imgailab/flux1-trtx-fp4-blackwell) - [Blackwell FP8](https://huggingface.co/imgailab/flux1-trtx-fp8-blackwell) - [Blackwell BF16](https://huggingface.co/imgailab/flux1-trtx-bf16-blackwell) ## 📝 License Inherits license from base model: [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) ## 🔄 Updates - **2025-08-12**: Initial release - Optimized for single-variant downloads --- *Part of the ImageAI TensorRT-RTX engine collection*
andr0m4da/blockassist-bc-grazing_hunting_boar_1754966105
andr0m4da
2025-08-12T02:35:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing hunting boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:35:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grazing hunting boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zjunlp/DataMind-Qwen2.5-14B
zjunlp
2025-08-12T02:35:46Z
10
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2506.19794", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-19T08:22:08Z
--- base_model: - Qwen/Qwen2.5-14B-Instruct license: apache-2.0 pipeline_tag: text-generation library_name: transformers --- <h1 align="center"> ✨ DataMind </h1> This repository contains the **DataMind** model, a fine-tuned Qwen2.5-14B-Instruct model presented in the paper [Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study](https://huggingface.co/papers/2506.19794). Code: [https://github.com/zjunlp/DataMind](https://github.com/zjunlp/DataMind) ## Abstract Large Language Models (LLMs) hold promise in automating data analysis tasks, yet open-source models face significant limitations in these kinds of reasoning-intensive scenarios. In this work, we investigate strategies to enhance the data analysis capabilities of open-source LLMs. By curating a seed dataset of diverse, realistic scenarios, we evaluate model behavior across three core dimensions: data understanding, code generation, and strategic planning. Our analysis reveals three key findings: (1) Strategic planning quality serves as the primary determinant of model performance; (2) Interaction design and task complexity significantly influence reasoning capabilities; (3) Data quality demonstrates a greater impact than diversity in achieving optimal performance. We leverage these insights to develop a data synthesis methodology, demonstrating significant improvements in open-source LLMs' analytical reasoning capabilities. ## 🔔 News - **[2025-06]** We release a new paper: "[Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study](https://arxiv.org/pdf/2506.19794)". ## 🔧 Installation #### 🔩Manual Environment Configuration Conda virtual environments offer a light and flexible setup. **Prerequisites** - Anaconda Installation - GPU support (recommended CUDA version: 12.4) **Configure Steps** 1. Clone the repository: ```bash git clone https://github.com/zjunlp/DataMind.git ``` 2. Enter the working directory, and all subsequent commands should be executed in this directory. ```bash cd DataMind/eval ``` 3. Create a virtual environment using `Anaconda`. ```bash conda create -n DataMind python=3.10 conda activate DataMind ``` 4. Install all required Python packages. ```bash pip install -r requirements.txt ``` ## 💻 Training Our model training was completed using the powerful and user-friendly **[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)** framework, which provided us with an efficient fine-tuning workflow. ##### 1. Training Data Our training dataset is available in `train/datamind-da-dataset.json` ##### 2. Training Configuration The following is an example configuration for full-parameter fine-tuning using DeepSpeed ZeRO-3. You can save it as a YAML file (e.g., `datamind_sft.yaml`). ```yaml ### model model_name_or_path: Qwen/Qwen2.5-7B-Instruct # Or Qwen/Qwen2.5-14B-Instruct ### method stage: sft do_train: true finetuning_type: full deepspeed: examples/deepspeed/ds_z3_config.json flash_attn: fa2 ### dataset dataset: datamind-da-dataset template: qwen cutoff_len: 8192 overwrite_cache: true preprocessing_num_workers: 16 ### output output_dir: checkpoints/your-model-name logging_steps: 1 save_strategy: epoch plot_loss: true overwrite_output_dir: true report_to: none ### train per_device_train_batch_size: 1 gradient_accumulation_steps: 4 learning_rate: 1.0e-5 num_train_epochs: 3.0 lr_scheduler_type: cosine warmup_ratio: 0.1 bf16: true ddp_timeout: 180000000 ``` ##### 3. 
Launch Training ```bash CUDA_VISIBLE_DEVICES=0,1,2,3 llama-factory-cli train datamind_sft.yaml ``` ## 🧐 Evaluation > Note: > > - **Ensure** that your working directory is set to the **`eval`** folder in a virtual environment. > - If you have more questions, feel free to open an issue with us. > - If you need to use a local model, deploy it according to **(Optional) `local_model.sh`**. **Step 1: Download the evaluation datasets and our SFT models** The evaluation datasets we used are in [QRData](https://github.com/xxxiaol/QRData) and [DiscoveryBench](https://github.com/allenai/discoverybench). The script expects data to be at `data/QRData/benchmark/data/*.csv` and `data/DiscoveryBench/*.csv`. You can also download our SFT models directly from Hugging Face: [DataMind-Qwen2.5-7B](https://huggingface.co/zjunlp/DataMind-Qwen2.5-7B), [DataMind-Qwen2.5-14B](https://huggingface.co/zjunlp/DataMind-Qwen2.5-14B). You can use the following `bash` script to download the dataset: ```bash bash download_eval_data.sh ``` **Step 2: Prepare the parameter configuration** Here is an example: **`config.yaml`** ```yaml api_key: your_api_key # your API key for the model with API service. No need for open-source models. data_root: /path/to/your/project/DataMind/eval/data # Root directory for data. (absolute path !!!) ``` **`run_eval.sh`** ```bash python do_generate.py \ --model_name DataMind-Qwen2.5-7B \ # Model name to use. --check_model gpt-4o-mini \ # Check model to use. --output results \ # Output directory path. --dataset_name QRData \ # Dataset name to use, chosen from QRData, DiscoveryBench. --max_round 25 \ # Maximum number of steps. --api_port 8000 \ # API port number; it is necessary if the local model is used. --bidx 0 \ # Begin index (inclusive), `None` indicates that there is no restriction. --eidx None \ # End index (exclusive), `None` indicates that there is no restriction. --temperature 0.0 \ # Temperature for sampling. --top_p 1 \ # Top p for sampling. --add_random False \ # Whether to add random files. ``` **(Optional) `local_model.sh`** ```bash CUDA_VISIBLE_DEVICES=$i python -m vllm.entrypoints.openai.api_server \ --model $MODEL_PATH \ # Local model path. --served-model-name $MODEL_NAME \ # The model name specified by you. --tensor-parallel-size $i \ # Set the size of tensor parallel processing. --port $port # API port number, which is consistent with the `api_port` above. ``` **Step 3: Run the shell script** **(Optional)** Deploy the local model if you need it. ```bash bash local_model.sh ``` Run the shell script to start the process. ```bash bash run_eval.sh ``` ## 🎉 Contributors <a href="https://github.com/zjunlp/DataMind/graphs/contributors"> <img src="https://contrib.rocks/image?repo=zjunlp/DataMind" /></a> We deeply appreciate the collaborative efforts of everyone involved. We will continue to enhance and maintain this repository over the long term. If you encounter any issues, feel free to submit them to us! ## ✍️ Citation If you find our work helpful, please use the following citations. ```bibtex @article{zhu2025open, title={Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study}, author={Zhu, Yuqi and Zhong, Yi and Zhang, Jintian and Zhang, Ziheng and Qiao, Shuofei and Luo, Yujie and Du, Lun and Zheng, Da and Chen, Huajun and Zhang, Ningyu}, journal={arXiv preprint arXiv:2506.19794}, year={2025} } ```
afasdfdfadsf/blockassist-bc-rough_opaque_clam_1754965734
afasdfdfadsf
2025-08-12T02:30:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rough opaque clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:29:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rough opaque clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
shadyAI/swahili-whisper-asr-with-lora
shadyAI
2025-08-12T02:28:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-12T02:28:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
braindeck/my_awesome_asr_korean_model
braindeck
2025-08-12T02:27:44Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-11T08:26:53Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: my_awesome_asr_korean_model results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: ko split: None args: ko metrics: - name: Wer type: wer value: 0.7375565610859729 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_asr_korean_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.6733 - Wer: 0.7376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 20000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 0.628 | 200.0 | 1000 | 0.6373 | 0.8869 | | 0.5549 | 400.0 | 2000 | 0.6298 | 0.9072 | | 0.494 | 600.0 | 3000 | 0.6399 | 0.9977 | | 0.2089 | 800.0 | 4000 | 0.8485 | 0.7579 | | 0.1313 | 1000.0 | 5000 | 1.0293 | 0.7421 | | 0.0996 | 1200.0 | 6000 | 1.1840 | 0.7602 | | 0.0959 | 1400.0 | 7000 | 1.1620 | 0.7398 | | 0.0812 | 1600.0 | 8000 | 1.2796 | 0.7398 | | 0.0743 | 1800.0 | 9000 | 1.4207 | 0.7534 | | 0.0628 | 2000.0 | 10000 | 1.4068 | 0.7421 | | 0.0653 | 2200.0 | 11000 | 1.4614 | 0.7511 | | 0.0577 | 2400.0 | 12000 | 1.5502 | 0.6991 | | 0.0539 | 2600.0 | 13000 | 1.5590 | 0.7172 | | 0.0517 | 2800.0 | 14000 | 1.6388 | 0.7240 | | 0.0464 | 3000.0 | 15000 | 1.6670 | 0.7217 | | 0.0445 | 3200.0 | 16000 | 1.6323 | 0.7014 | | 0.039 | 3400.0 | 17000 | 1.6918 | 0.7330 | | 0.0412 | 3600.0 | 18000 | 1.5930 | 0.7104 | | 0.0408 | 3800.0 | 19000 | 1.6583 | 0.7376 | | 0.0343 | 4000.0 | 20000 | 1.6733 | 0.7376 | ### Framework versions - Transformers 4.56.0.dev0 - Pytorch 2.6.0a0+ecf3bae40a.nv25.01 - Datasets 3.6.0 - Tokenizers 0.21.1
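The card above leaves its usage sections as "More information needed"; a minimal transcription sketch with the Transformers ASR pipeline follows, where the audio file path is an illustrative assumption:

```python
from transformers import pipeline

# Load the fine-tuned Korean ASR checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="braindeck/my_awesome_asr_korean_model",
)

# Transcribe a local audio file (the path here is hypothetical).
print(asr("sample_ko.wav")["text"])
```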
nightmedia/Luth-0.6B-Instruct-q8-hi-mlx
nightmedia
2025-08-12T02:24:04Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "text-generation", "conversational", "fr", "en", "dataset:kurakurai/luth-sft", "base_model:kurakurai/Luth-0.6B-Instruct", "base_model:quantized:kurakurai/Luth-0.6B-Instruct", "license:apache-2.0", "8-bit", "region:us" ]
text-generation
2025-08-12T02:13:51Z
--- library_name: mlx license: apache-2.0 datasets: - kurakurai/luth-sft language: - fr - en base_model: kurakurai/Luth-0.6B-Instruct pipeline_tag: text-generation tags: - mlx --- # Luth-0.6B-Instruct-q8-hi-mlx This model [Luth-0.6B-Instruct-q8-hi-mlx](https://huggingface.co/nightmedia/Luth-0.6B-Instruct-q8-hi-mlx) was converted to MLX format from [kurakurai/Luth-0.6B-Instruct](https://huggingface.co/kurakurai/Luth-0.6B-Instruct) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/Luth-0.6B-Instruct-q8-hi-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754964706
IvanJAjebu
2025-08-12T02:12:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T02:12:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B
WangDong2017
2025-08-12T02:08:53Z
2
0
null
[ "safetensors", "qwen2", "text-classification", "zh", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
text-classification
2025-08-11T00:53:45Z
--- license: apache-2.0 language: - zh base_model: - Qwen/Qwen2.5-7B-Instruct pipeline_tag: text-classification --- # GrammarSeeker-SFT-Qwen2.5-7B A fine-tuned Qwen2.5-7B-Instruct model specifically designed for grammatical project parsing systems. ## 🔗 Repository Links - **📁 Source Code**: [GitHub Repository](https://github.com/wd-github-2017/GrammarSeeker) - Contains testing code and development scripts - **🤗 Model Hub**: [Hugging Face Model](https://huggingface.co/WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B) - Hosts the complete fine-tuned model ## 🎉 Latest Update (2025-08-11) **✅ Model Successfully Deployed to Hugging Face!** The fine-tuned model is now available for direct use without any additional steps. ## 📋 Model Information - **Base Model**: [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) - **Fine-tuned Model**: [WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B](https://huggingface.co/WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B) - **Fine-tuning Method**: LoRA (Low-Rank Adaptation) during training, now provided as a complete merged model - **Task**: Binary classification for grammatical project annotation (T/F output) - **Performance**: - **F1 Score**: 0.9797 (97.97%) - **Positive Accuracy**: 0.9640 (96.40%) - **Negative Accuracy**: 0.9960 (99.60%) - **Test Samples**: 1000 - **Test Date**: 2025-08-11 - **Tested Performance**: 16 annotations/s (test completed in ~1 minute on an RTX 4090) ## 🎯 Use Case This model serves as the **core component of a grammatical project parsing system**. It is designed to: 1. **Receive structured prompts** (as shown in GM-TestData.csv) 2. **Output binary decisions** (T/F) for grammatical annotation 3. **Enable automated grammar project marking** based on model predictions ## 🔧 Usage ### Installation ```bash pip install transformers peft torch ``` ### Testing Performance ```bash # Test the model from Hugging Face python test_hf_model.py ``` **Latest Test Results (2025-08-11)**: - ✅ Model successfully loaded from HF repository - ✅ All 1000 test samples processed - ✅ F1 Score: 0.9797 (97.97%) - ✅ Test completed in ~1 minute on RTX 4090 ### Loading the Model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM # Load the complete fine-tuned model directly from HF model = AutoModelForCausalLM.from_pretrained( "WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B", torch_dtype=torch.float16, device_map="auto", trust_remote_code=True ) # Load tokenizer tokenizer = AutoTokenizer.from_pretrained("WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B") ``` ## 🏭 Production Environment Usage **Recommended workflow** (a minimal end-to-end sketch follows at the end of this card): 1. **Pre-filtering**: Use regular expressions for coarse screening 2. **String matching**: Trigger prompt generation based on string matching 3. **Model inference**: Send generated prompt to this model 4. **Output processing**: Model outputs T/F 5. 
**Automatic annotation**: Generate grammatical project markers based on T/F output ## 📊 Dataset - **GM-TestData.csv**: 1000 test samples with prompts and expected answers - **Format**: prompt1, prompt2, answer (T/F) - **Test Results**: Successfully validated with 97.97% F1 score ## 🚀 Deployment & Integration ### Hugging Face Integration - **Model Hub**: [WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B](https://huggingface.co/WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B) - **Direct Loading**: Available for immediate use - **API Access**: Can be deployed through HF Inference API ## 📝 Citation ```bibtex @misc{wang2025CPGEVALMultitieredBenchmark, title = {{{CPG-EVAL}}: A Multi-Tiered Benchmark for Evaluating the Chinese Pedagogical Grammar Competence of Large Language Models}, author = {Wang, Dong}, year = {2025}, publisher = {arXiv}, doi = {10.48550/ARXIV.2504.13261} } ``` --- **Note**: This model has been successfully tested and deployed. For production use, please ensure proper testing and validation in your specific use case.
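The five-step production workflow above can be wired together in a short script. The sketch below is illustrative only, not code from the GrammarSeeker repo: `TRIGGER` and `PROMPT_TEMPLATE` are hypothetical placeholders (the real structured prompts come from GM-TestData.csv, and the real trigger rules live in the parsing system), and the output parsing assumes the model answers with a single T/F letter as described above.

```python
# Illustrative sketch of the recommended workflow; placeholders are marked below.
import re
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "WangDong2017/GrammarSeeker-SFT-Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

TRIGGER = re.compile(r"的|了|在")   # hypothetical coarse pre-filter (steps 1-2)
PROMPT_TEMPLATE = "{sentence}"      # hypothetical; use the GM-TestData.csv prompt format

def annotate(sentence: str) -> bool:
    """Send a structured prompt to the model and map its T/F answer to a bool (steps 3-4)."""
    prompt = PROMPT_TEMPLATE.format(sentence=sentence)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=2)
    answer = tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
    return answer.strip().upper().startswith("T")

for sentence in ["他在图书馆看书。"]:          # toy input
    if TRIGGER.search(sentence):              # cheap screening before model inference
        print(sentence, "->", "T" if annotate(sentence) else "F")  # step 5: mark the project
```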
mradermacher/india-wiki-hin-1.7B-GGUF
mradermacher
2025-08-12T01:59:27Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:XformAI-india/india-wiki-hin-1.7B", "base_model:quantized:XformAI-india/india-wiki-hin-1.7B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-12T01:53:45Z
--- base_model: XformAI-india/india-wiki-hin-1.7B language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/XformAI-india/india-wiki-hin-1.7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#india-wiki-hin-1.7B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q6_K.gguf) | Q6_K | 1.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-1.7B-GGUF/resolve/main/india-wiki-hin-1.7B.f16.gguf) | f16 | 3.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
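For readers who want a concrete starting point, a small Python sketch using `llama-cpp-python` (one common GGUF runtime; not part of this card) might look like the following. The filename matches the Q4_K_M entry recommended in the table above.

```python
# Illustrative only: llama-cpp-python is one of several ways to run GGUF files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download(
    repo_id="mradermacher/india-wiki-hin-1.7B-GGUF",
    filename="india-wiki-hin-1.7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Write one sentence about the Taj Mahal.", max_tokens=64)
print(out["choices"][0]["text"])
```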
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754963446
IvanJAjebu
2025-08-12T01:52:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:51:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
boredsxe/blockassist-bc-melodic_nocturnal_macaque_1754962549
boredsxe
2025-08-12T01:37:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "melodic nocturnal macaque", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:37:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - melodic nocturnal macaque --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754961934
IvanJAjebu
2025-08-12T01:26:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T01:26:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
myfi/parser_model_ner_3.45_checkpoint_250
myfi
2025-08-12T01:08:01Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T00:56:27Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** myfi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mbiarreta/vit-ena24-clase
mbiarreta
2025-08-12T00:49:04Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-11T05:06:51Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer metrics: - accuracy - f1 model-index: - name: vit-ena24-clase results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-ena24-clase This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ena24_MD dataset. It achieves the following results on the evaluation set: - Loss: 0.3132 - Accuracy: 0.9321 - F1: 0.8789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | 1.9153 | 0.1302 | 100 | 1.7647 | 0.5725 | 0.4646 | | 1.2463 | 0.2604 | 200 | 1.1008 | 0.7641 | 0.6933 | | 0.884 | 0.3906 | 300 | 0.9143 | 0.7832 | 0.7113 | | 0.6852 | 0.5208 | 400 | 0.6161 | 0.8649 | 0.8027 | | 0.5318 | 0.6510 | 500 | 0.4691 | 0.8947 | 0.8376 | | 0.5544 | 0.7812 | 600 | 0.3783 | 0.9153 | 0.8984 | | 0.2321 | 0.9115 | 700 | 0.3132 | 0.9321 | 0.8789 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
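The card reports accuracy and F1 but no inference snippet. A minimal sketch (assuming the standard image-classification pipeline; `camera_trap.jpg` is a placeholder path) could be:

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned ViT classifier.
classifier = pipeline("image-classification", model="mbiarreta/vit-ena24-clase")
for pred in classifier("camera_trap.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```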
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754959040
IvanJAjebu
2025-08-12T00:38:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:38:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_10240_all_37_epoch_1_layer_22
winnieyangwannan
2025-08-12T00:36:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T00:33:22Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_8960_all_37_epoch_1_layer_22
winnieyangwannan
2025-08-12T00:35:56Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-12T00:33:25Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754958679
IvanJAjebu
2025-08-12T00:32:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:32:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mpopescu99/unsloth-skincarebot
mpopescu99
2025-08-12T00:27:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-10T22:30:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RE-N-Y/REPA-E-f16-32c
RE-N-Y
2025-08-12T00:22:46Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-08-12T00:22:36Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
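Since the card only states that the weights were pushed via `PyTorchModelHubMixin`, here is a hypothetical sketch of that pattern. `MyModel` and its layers are placeholders: the real model class for `RE-N-Y/REPA-E-f16-32c` is not published in the card, and `from_pretrained` will only work with a class whose definition matches the saved config and weights.

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

class MyModel(torch.nn.Module, PyTorchModelHubMixin):
    """Placeholder module illustrating the mixin pattern, not the actual architecture."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.proj = torch.nn.Linear(hidden, hidden)

    def forward(self, x):
        return self.proj(x)

model = MyModel()
print(model(torch.randn(1, 32)).shape)  # local smoke test

# With the real (unpublished) class definition, loading would look like:
# model = MyModel.from_pretrained("RE-N-Y/REPA-E-f16-32c")
```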
m-mulet/try2_qwen_2.5_7b-owl_student_removed_top_8000_influential-2
m-mulet
2025-08-12T00:17:19Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-12T00:17:12Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** m-mulet - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Nik9999/blockassist-bc-foraging_rapid_anteater_1754957229
Nik9999
2025-08-12T00:08:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "foraging rapid anteater", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:08:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - foraging rapid anteater --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754956260
Sayemahsjn
2025-08-12T00:08:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-12T00:08:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
acidjp/blockassist-bc-pesty_extinct_prawn_1754956315
acidjp
2025-08-11T23:59:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:58:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mosama/Qwen25-VL-3B_v2
mosama
2025-08-11T23:51:50Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "unsloth", "endpoints_compatible", "region:us" ]
null
2025-08-11T20:40:07Z
--- library_name: transformers model_name: Qwen25-VL-3B_v2 tags: - generated_from_trainer - trl - sft - unsloth licence: license --- # Model Card for Qwen25-VL-3B_v2 This model is a fine-tuned version of an unspecified base model (the card generator recorded it as `None`). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mosama/Qwen25-VL-3B_v2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muhammadosama1994/KSA%20VR%20Project/runs/5okpnqn6) This model was trained with SFT. ### Framework versions - TRL: 0.20.0 - Transformers: 4.54.1 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kaizen9/qspi30_gfrz_unk
kaizen9
2025-08-11T23:37:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T23:21:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wilfoderek/bge-m3-es-legal-tmp-1
wilfoderek
2025-08-11T23:35:43Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:2947", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "es", "dataset:dariolopez/justicio-rag-embedding-qa-tmp-2", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-11T23:34:39Z
--- language: - es license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:2947 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: BAAI/bge-m3 widget: - source_sentence: Es uso privativo el que determina la ocupación de una porción del dominio público, de modo que se limita o excluye la utilización del mismo por otros interesados. sentences: - ¿Qué es el uso privativo de los bienes de dominio público? - ¿Qué es la sanidad ambiental? - ¿Qué información básica debe contener la información que se facilita al afectado cuando se obtienen datos personales de él? - source_sentence: 'Las retribuciones básicas, que se fijan en la Ley de Presupuestos Generales del Estado, estarán integradas única y exclusivamente por: a) El sueldo asignado a cada Subgrupo o Grupo de clasificación profesional, en el supuesto de que éste no tenga Subgrupo. b) Los trienios, que consisten en una cantidad, que será igual para cada Subgrupo o Grupo de clasificación profesional, en el supuesto de que éste no tenga Subgrupo, por cada tres años de servicio.' sentences: - ¿Qué se entiende por retribuciones básicas? - ¿Cuál es el título competencial de esta ley orgánica? - ¿Qué se aprueba a propuesta del Ministro de Hacienda? - source_sentence: Se reconoce el valor social de las niñas, niños y adolescentes como personas que realizan un aporte afectivo, cultural y ético al caudal social, y cuyo protagonismo, creatividad y posicionamiento activo enriquecen la vida colectiva. sentences: - ¿Qué sucede si se produce un incumplimiento de las actuaciones establecidas en el Plan de inclusión sociolaboral? - ¿Qué se reconoce en cuanto al valor social de la infancia? - ¿Cuál es el plazo de prescripción de las infracciones? - source_sentence: Las empresas y las universidades podrán promover y participar en programas de voluntariado que cumplan los requisitos establecidos en esta Ley. sentences: - ¿Cuál es la consideración de las infracciones muy graves? - ¿Qué tipo de empresas pueden promover y participar en programas de voluntariado? - ¿Qué tipo de entidades están obligadas a cumplir con las obligaciones de publicidad activa? - source_sentence: Artículo 6. Definiciones. 1. Discriminación directa e indirecta. b) La discriminación indirecta se produce cuando una disposición, criterio o práctica aparentemente neutros ocasiona o puede ocasionar a una o varias personas una desventaja particular con respecto a otras por razón de las causas previstas en el apartado 1 del artículo 2. sentences: - ¿Cuál es el papel del Consejo de Salud de Área? - ¿Qué se considera discriminación indirecta? - ¿Qué tipo de información se considera veraz? 
datasets: - dariolopez/justicio-rag-embedding-qa-tmp-2 pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: BGE large Legal Spanish results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 1024 type: dim_1024 metrics: - type: cosine_accuracy@1 value: 0.5396341463414634 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8048780487804879 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8597560975609756 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9085365853658537 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.5396341463414634 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2682926829268293 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1719512195121951 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09085365853658536 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.5396341463414634 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8048780487804879 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8597560975609756 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9085365853658537 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7334906275596409 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6762896825396825 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6797504013046416 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.5152439024390244 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8048780487804879 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8567073170731707 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9085365853658537 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.5152439024390244 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2682926829268293 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17134146341463413 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09085365853658536 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.5152439024390244 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8048780487804879 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8567073170731707 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9085365853658537 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7234195125271459 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6627818912117692 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6662422262347588 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.5426829268292683 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8109756097560976 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8628048780487805 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9024390243902439 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.5426829268292683 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.27032520325203246 name: Cosine Precision@3 - type: cosine_precision@5 value: 
0.17256097560975608 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09024390243902437 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.5426829268292683 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8109756097560976 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8628048780487805 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9024390243902439 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7337030530987747 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6782725996902826 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6821494767506607 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.5365853658536586 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7957317073170732 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8567073170731707 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8871951219512195 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.5365853658536586 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2652439024390244 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17134146341463413 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08871951219512195 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.5365853658536586 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7957317073170732 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8567073170731707 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8871951219512195 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7246310466738191 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.670835753000387 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6752543372829658 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.5304878048780488 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7713414634146342 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.823170731707317 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8719512195121951 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.5304878048780488 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.25711382113821135 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1646341463414634 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08719512195121949 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.5304878048780488 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7713414634146342 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.823170731707317 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8719512195121951 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7121341156516634 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6596556813782425 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.664297199179873 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.5030487804878049 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7317073170731707 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.7835365853658537 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8689024390243902 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.5030487804878049 name: 
Cosine Precision@1 - type: cosine_precision@3 value: 0.24390243902439024 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1567073170731707 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08689024390243902 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.5030487804878049 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7317073170731707 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.7835365853658537 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8689024390243902 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6887377838112205 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6309632694541231 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6348508329931788 name: Cosine Map@100 --- # BGE large Legal Spanish This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on the [justicio-rag-embedding-qa-tmp-2](https://huggingface.co/datasets/dariolopez/justicio-rag-embedding-qa-tmp-2) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [justicio-rag-embedding-qa-tmp-2](https://huggingface.co/datasets/dariolopez/justicio-rag-embedding-qa-tmp-2) - **Language:** es - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'}) (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("wilfoderek/bge-m3-es-legal-tmp-1") # Run inference queries = [ "Art\u00edculo 6. Definiciones. 1. Discriminaci\u00f3n directa e indirecta. 
b) La discriminaci\u00f3n indirecta se produce cuando una disposici\u00f3n, criterio o pr\u00e1ctica aparentemente neutros ocasiona o puede ocasionar a una o varias personas una desventaja particular con respecto a otras por raz\u00f3n de las causas previstas en el apartado 1 del art\u00edculo 2.", ] documents = [ '¿Qué se considera discriminación indirecta?', '¿Qué tipo de información se considera veraz?', '¿Cuál es el papel del Consejo de Salud de Área?', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 1024] [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[0.7562, 0.1522, 0.0675]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_1024` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 1024 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.5396 | | cosine_accuracy@3 | 0.8049 | | cosine_accuracy@5 | 0.8598 | | cosine_accuracy@10 | 0.9085 | | cosine_precision@1 | 0.5396 | | cosine_precision@3 | 0.2683 | | cosine_precision@5 | 0.172 | | cosine_precision@10 | 0.0909 | | cosine_recall@1 | 0.5396 | | cosine_recall@3 | 0.8049 | | cosine_recall@5 | 0.8598 | | cosine_recall@10 | 0.9085 | | **cosine_ndcg@10** | **0.7335** | | cosine_mrr@10 | 0.6763 | | cosine_map@100 | 0.6798 | #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 768 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.5152 | | cosine_accuracy@3 | 0.8049 | | cosine_accuracy@5 | 0.8567 | | cosine_accuracy@10 | 0.9085 | | cosine_precision@1 | 0.5152 | | cosine_precision@3 | 0.2683 | | cosine_precision@5 | 0.1713 | | cosine_precision@10 | 0.0909 | | cosine_recall@1 | 0.5152 | | cosine_recall@3 | 0.8049 | | cosine_recall@5 | 0.8567 | | cosine_recall@10 | 0.9085 | | **cosine_ndcg@10** | **0.7234** | | cosine_mrr@10 | 0.6628 | | cosine_map@100 | 0.6662 | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 512 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.5427 | | cosine_accuracy@3 | 0.811 | | cosine_accuracy@5 | 0.8628 | | cosine_accuracy@10 | 0.9024 | | cosine_precision@1 | 0.5427 | | cosine_precision@3 | 0.2703 | | cosine_precision@5 | 0.1726 | | cosine_precision@10 | 0.0902 | | 
cosine_recall@1 | 0.5427 | | cosine_recall@3 | 0.811 | | cosine_recall@5 | 0.8628 | | cosine_recall@10 | 0.9024 | | **cosine_ndcg@10** | **0.7337** | | cosine_mrr@10 | 0.6783 | | cosine_map@100 | 0.6821 | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 256 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.5366 | | cosine_accuracy@3 | 0.7957 | | cosine_accuracy@5 | 0.8567 | | cosine_accuracy@10 | 0.8872 | | cosine_precision@1 | 0.5366 | | cosine_precision@3 | 0.2652 | | cosine_precision@5 | 0.1713 | | cosine_precision@10 | 0.0887 | | cosine_recall@1 | 0.5366 | | cosine_recall@3 | 0.7957 | | cosine_recall@5 | 0.8567 | | cosine_recall@10 | 0.8872 | | **cosine_ndcg@10** | **0.7246** | | cosine_mrr@10 | 0.6708 | | cosine_map@100 | 0.6753 | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 128 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.5305 | | cosine_accuracy@3 | 0.7713 | | cosine_accuracy@5 | 0.8232 | | cosine_accuracy@10 | 0.872 | | cosine_precision@1 | 0.5305 | | cosine_precision@3 | 0.2571 | | cosine_precision@5 | 0.1646 | | cosine_precision@10 | 0.0872 | | cosine_recall@1 | 0.5305 | | cosine_recall@3 | 0.7713 | | cosine_recall@5 | 0.8232 | | cosine_recall@10 | 0.872 | | **cosine_ndcg@10** | **0.7121** | | cosine_mrr@10 | 0.6597 | | cosine_map@100 | 0.6643 | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 64 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.503 | | cosine_accuracy@3 | 0.7317 | | cosine_accuracy@5 | 0.7835 | | cosine_accuracy@10 | 0.8689 | | cosine_precision@1 | 0.503 | | cosine_precision@3 | 0.2439 | | cosine_precision@5 | 0.1567 | | cosine_precision@10 | 0.0869 | | cosine_recall@1 | 0.503 | | cosine_recall@3 | 0.7317 | | cosine_recall@5 | 0.7835 | | cosine_recall@10 | 0.8689 | | **cosine_ndcg@10** | **0.6887** | | cosine_mrr@10 | 0.631 | | cosine_map@100 | 0.6349 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### justicio-rag-embedding-qa-tmp-2 * Dataset: [justicio-rag-embedding-qa-tmp-2](https://huggingface.co/datasets/dariolopez/justicio-rag-embedding-qa-tmp-2) at [72c1e63](https://huggingface.co/datasets/dariolopez/justicio-rag-embedding-qa-tmp-2/tree/72c1e63011e6b9934cd88e49837aa2bb52daf614) * Size: 2,947 training samples * Columns: <code>context</code> and <code>question</code> * Approximate statistics based on the first 1000 samples: | | context | question | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 63.25 tokens</li><li>max: 222 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 20.31 tokens</li><li>max: 53 tokens</li></ul> | * Samples: | context | question | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------| | <code>La ley debe entenderse, por tanto, en el contexto del cumplimiento por parte del Estado de la obligación que, en el marco de sus competencias constitucionales, le incumbe en la protección del derecho a acceder a una vivienda digna y adecuada y a su disfrute.</code> | <code>¿Cuál es el objetivo de la ley en cuanto a la vivienda?</code> | | <code>JUAN CARLOS I REY DE ESPAÑA A todos los que la presente vieren y entendieren. 
Sabed: Que las Cortes Generales han aprobado y Yo vengo en sancionar la siguiente Ley Orgánica.</code> | <code>¿Quién sanciona la Ley Orgánica?</code> | | <code>A esta finalidad responde la modificación del artículo 37 de la Ley 8/2018, de 8 de octubre, de medidas frente al cambio climático y para la transición hacia un modelo energético en Andalucía, con el objetivo de incluir la posibilidad de que se puedan articular la ejecución de proyectos de absorción de emisiones a través de la suscripción por la Consejería competente en materia de medio ambiente de convenios de colaboración público-privada, los cuales podrán tener una duración acorde a la vida útil de dichos proyectos, en función de sus distintas tipologías, atendiendo así la demanda de organizaciones y empresas que, de manera voluntaria, dentro de sus programas de responsabilidad corporativa, quieren reducir sus emisiones de gases de efecto invernadero y están interesadas en la ejecución de estos proyectos bajo esta fórmula para la compensación que ha crecido exponencialmente en los últimos años.</code> | <code>¿Cuál es el objetivo de la modificación del artículo 37 de la Ley 8/2018?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### justicio-rag-embedding-qa-tmp-2 * Dataset: [justicio-rag-embedding-qa-tmp-2](https://huggingface.co/datasets/dariolopez/justicio-rag-embedding-qa-tmp-2) at [72c1e63](https://huggingface.co/datasets/dariolopez/justicio-rag-embedding-qa-tmp-2/tree/72c1e63011e6b9934cd88e49837aa2bb52daf614) * Size: 328 evaluation samples * Columns: <code>context</code> and <code>question</code> * Approximate statistics based on the first 328 samples: | | context | question | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 64.15 tokens</li><li>max: 187 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 19.64 tokens</li><li>max: 50 tokens</li></ul> | * Samples: | context | question | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------| | <code>Con el fin de lograr un mejor aprovechamiento de los recursos humanos, que garantice la eficacia del servicio que se preste a los ciudadanos, la Administración General del Estado y las comunidades autónomas y las entidades locales establecerán medidas de movilidad interadministrativa, preferentemente mediante convenio de Conferencia Sectorial u otros instrumentos de colaboración.</code> | <code>¿Cuál es el objetivo de la movilidad interadministrativa?</code> | | <code>Las Administraciones públicas, en el ámbito de sus competencias, continuarán impartiendo formación inicial y continuada al personal a su 
servicio sobre diversidad en materia de orientación sexual, identidad sexual, expresión de género y características sexuales, sobre diversidad familiar y sobre igualdad y no discriminación de las personas LGTBI.</code> | <code>¿Qué tipo de formación se impartirá al personal al servicio de las Administraciones públicas?</code> | | <code>En los contratos de carácter temporal cuya duración efectiva sea inferior a siete días, la cuota empresarial a la Seguridad Social por contingencias comunes se incrementará en un 36 por ciento.</code> | <code>¿Qué sucede con la cotización en contratos de carácter temporal cuya duración efectiva sea inferior a siete días?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 6 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 6 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: True - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - 
`length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | dim_1024_cosine_ndcg@10 | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 | |:-------:|:------:|:-------------:|:---------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:| | 0.4324 | 5 | 1.6474 | - | - | - | - | - | - | - | | 0.8649 | 10 | 1.1634 | - | - | - | - | - | - | - | | 1.0 | 12 | - | 0.7731 | 0.7239 | 0.7134 | 0.7226 | 0.7259 | 0.6998 | 0.6529 | | 1.2595 | 15 | 0.8271 | - | - | - | - | - | - | - | | 1.6919 | 20 | 0.5396 | - | - | - | - | - | - | - | | **2.0** | **24** | **-** | **0.649** | **0.7274** | **0.7221** | **0.7319** | **0.7322** | **0.7139** | **0.669** | | 2.0865 | 25 | 0.5425 | - | - | - | - | - | - | - | | 2.5189 | 30 | 0.3327 | - | - | - | - | - | - | - | | 2.9514 | 35 | 0.2893 | - | - | - | - | - | - | - | | 3.0 | 36 | - | 0.6038 | 0.7266 | 0.7241 | 0.7323 | 0.7266 | 0.7094 | 0.6762 | | 3.3459 | 40 | 0.214 | - | - | - | - | - | - | - | | 3.7784 | 45 | 0.2363 | - | - | - | - | - | - | - | | 4.0 | 48 | - | 0.5849 | 0.7247 | 0.7246 | 0.7307 | 0.7230 | 0.7117 | 0.6859 | | 4.1730 | 50 | 0.2066 | - | - | - | - | - | - | - | | 4.6054 | 55 | 0.1616 | - | - | - | - | - | - | - | | 5.0 | 60 | 0.2014 | 0.5632 | 0.7322 | 0.7237 | 0.7331 | 0.7240 | 0.7133 | 0.6889 | | 5.4324 | 65 | 0.1772 | - | - | - | - | - | - | - | | 5.8649 | 70 | 0.1806 | - | - | - | - | - | - | - | | 6.0 | 72 | - | 0.5578 | 0.7335 | 0.7234 | 0.7337 | 0.7246 | 0.7121 | 0.6887 | * The bold row denotes the saved checkpoint. 
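Since the model was trained with a MatryoshkaLoss over the dimensions [1024, 768, 512, 256, 128, 64], embeddings can be truncated to a smaller size with only the modest quality drop shown in the tables above. A minimal sketch, assuming a sentence-transformers version (>= 2.7) that supports the `truncate_dim` argument:

```python
from sentence_transformers import SentenceTransformer

# Load the model so that every encoding is truncated to its first 256 dimensions
model = SentenceTransformer("wilfoderek/bge-m3-es-legal-tmp-1", truncate_dim=256)

embeddings = model.encode(["¿Qué se considera discriminación indirecta?"])
print(embeddings.shape)  # (1, 256)
```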
### Framework Versions - Python: 3.11.13 - Sentence Transformers: 5.1.0 - Transformers: 4.55.0 - PyTorch: 2.6.0+cu124 - Accelerate: 1.10.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
rubuntu/gpt-oss-20b-Jopara-V3.5-LoRA
rubuntu
2025-08-11T23:26:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "8-bit", "region:us" ]
null
2025-08-11T23:14:53Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss license: apache-2.0 language: - en --- # Uploaded fine-tuned model - **Developed by:** rubuntu - **License:** apache-2.0 - **Fine-tuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
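A minimal loading sketch, assuming the repository hosts LoRA adapters that Unsloth can resolve against the 4-bit base model (the card itself does not document usage):

```python
from unsloth import FastLanguageModel

# Assumption: the adapter weights load on top of unsloth/gpt-oss-20b-unsloth-bnb-4bit
model, tokenizer = FastLanguageModel.from_pretrained(
    "rubuntu/gpt-oss-20b-Jopara-V3.5-LoRA",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into inference mode
```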
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754954569
IvanJAjebu
2025-08-11T23:24:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:23:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Soughing/mla_zero_init_medium
Soughing
2025-08-11T23:23:43Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-04T05:36:50Z
--- license: apache-2.0 ---
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754953438
IvanJAjebu
2025-08-11T23:05:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:05:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kaleb-tadesse-tiyo/amharic-news-text-classifier
kaleb-tadesse-tiyo
2025-08-11T23:03:12Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-11T23:01:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754953211
ggozzy
2025-08-11T23:01:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:01:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lakelee/RLB_MLP_v2
lakelee
2025-08-11T23:01:26Z
0
0
transformers
[ "transformers", "safetensors", "mlp_swiglu", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-08-11T11:23:58Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: RLB_MLP_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RLB_MLP_v2 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - num_epochs: 20.0 ### Training results ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
razor534/blockassist-bc-lazy_extinct_termite_1754953191
razor534
2025-08-11T23:01:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lazy extinct termite", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T23:01:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lazy extinct termite --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bcywinski/Llama-3.1-8B-taboo-smile
bcywinski
2025-08-11T22:58:15Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-08-11T22:57:19Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: Llama-3.1-8B-taboo-smile tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Llama-3.1-8B-taboo-smile This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="bcywinski/Llama-3.1-8B-taboo-smile", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/Llama-3.1-8B-taboo/runs/ywmlgsi0) This model was trained with SFT. ### Framework versions - TRL: 0.19.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.2 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Steakbarbare999/lora
Steakbarbare999
2025-08-11T22:58:03Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-07-23T15:56:20Z
--- license: apache-2.0 --- flux-nXL: OVAL NIPPLES, PUFFY NIPPLES, DARK NIPPLES, GHOST NIPPLES, ERECT NIPPLES, INVERTED NIPPLES, CONICAL NIPPLES, BIG AREOLAS, SMALL AREOLAS, BUMPY NIPPLES
HillPhelmuth/gpt-oss-20B-chess-analysis-GGUF
HillPhelmuth
2025-08-11T22:53:21Z
205
0
llama.cpp
[ "llama.cpp", "gguf", "quantized", "q8_0", "dataset:HillPhelmuth/ChessReasoning_OpenAIOss_Template", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-11T02:32:06Z
--- library_name: llama.cpp base_model: gpt-oss-20b license: apache-2.0 tags: - gguf - quantized - q8_0 datasets: - HillPhelmuth/ChessReasoning_OpenAIOss_Template --- # gpt-oss-20B Chess Analysis (GGUF) - **Quantization**: `q8_0` - **Converted with**: `python llama.cpp/convert_hf_to_gguf.py gpt-oss-20b-hf --outfile gpt-oss-20b-q8_0.gguf --outtype q8_0` - Intended for chess analysis workloads with llama.cpp-compatible runtimes.
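A minimal inference sketch with the llama.cpp CLI; the binary name and flags depend on your llama.cpp build, and the prompt is illustrative:

```bash
# Run the Q8_0 quant with a 4096-token context window
./llama-cli -m gpt-oss-20b-q8_0.gguf -c 4096 \
  -p "Analyze this position and suggest Black's best plan: 1. e4 e5 2. Nf3 Nc6 3. Bb5 a6"
```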
dimamachine/blockassist-bc-fleecy_thriving_parrot_1754950591
dimamachine
2025-08-11T22:51:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fleecy thriving parrot", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T22:50:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fleecy thriving parrot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ahmedmamdouh95/dummy-model
ahmedmamdouh95
2025-08-11T22:46:32Z
0
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-08-11T22:46:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lulu-2/rl_course_vizdoom_health_gathering_supreme
lulu-2
2025-08-11T22:37:57Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-10T23:34:08Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.93 +/- 4.90 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r lulu-2/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
Ameyapores/ACT_pushblock_franka_aug6_staticimgonly
Ameyapores
2025-08-11T22:10:15Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:Ameyapores/pushblock_franka_aug6_staticimgonly", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-11T22:10:07Z
--- datasets: Ameyapores/pushblock_franka_aug6_staticimgonly library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - robotics - act - lerobot --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` *Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.* ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details * **License:** apache-2.0
frutiemax/twistedreality-sana-1.5-1600M-1024px-patched
frutiemax
2025-08-11T22:08:24Z
7
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-08-10T21:04:22Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ameyapores/ACT_pushblock_franka_aug6_staticimg
Ameyapores
2025-08-11T22:06:23Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:Ameyapores/pushblock_franka_aug6_staticimg", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-11T22:06:17Z
--- datasets: Ameyapores/pushblock_franka_aug6_staticimg library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - robotics - lerobot --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` *Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.* ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details * **License:** apache-2.0
jcharlie39/learn_hf_food_not_food_text_classifier_distilbert_base_uncased
jcharlie39
2025-08-11T22:00:18Z
23
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-07-06T23:10:06Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: learn_hf_food_not_food_text_classifier_distilbert_base_uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # learn_hf_food_not_food_text_classifier_distilbert_base_uncased This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0687 - Accuracy: 0.98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4267 | 1.0 | 7 | 0.1057 | 0.98 | | 0.0479 | 2.0 | 14 | 0.0092 | 1.0 | | 0.0053 | 3.0 | 21 | 0.0480 | 0.98 | | 0.0019 | 4.0 | 28 | 0.0694 | 0.98 | | 0.0011 | 5.0 | 35 | 0.0747 | 0.98 | | 0.0008 | 6.0 | 42 | 0.0737 | 0.98 | | 0.0007 | 7.0 | 49 | 0.0721 | 0.98 | | 0.0006 | 8.0 | 56 | 0.0702 | 0.98 | | 0.0006 | 9.0 | 63 | 0.0691 | 0.98 | | 0.0006 | 10.0 | 70 | 0.0687 | 0.98 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
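A minimal inference sketch using the `transformers` pipeline; since the training dataset is unspecified, the exact label names the classifier returns are an assumption:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jcharlie39/learn_hf_food_not_food_text_classifier_distilbert_base_uncased",
)

# Returns e.g. [{"label": ..., "score": ...}]; the label set comes from the training data
print(classifier("A steaming bowl of ramen topped with a soft-boiled egg"))
```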
lelouch33/blockassist-bc-frisky_sneaky_sandpiper_1754949385
lelouch33
2025-08-11T21:59:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "frisky sneaky sandpiper", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T21:59:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - frisky sneaky sandpiper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Samuell43/blockassist-bc-fast_gregarious_warthog_1754949056
Samuell43
2025-08-11T21:52:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fast gregarious warthog", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T21:52:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fast gregarious warthog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ArianFiroozi/SmolDriverVision
ArianFiroozi
2025-08-11T21:50:09Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-08-11T21:17:30Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
fbaldassarri/EleutherAI_pythia-1.4b-deduped-autoround-int8-gs64-sym
fbaldassarri
2025-08-11T21:37:53Z
0
0
null
[ "safetensors", "gpt_neox", "pytorch", "causal-lm", "pythia", "autoround", "intel-autoround", "auto-round", "intel", "woq", "eleutheraI", "text-generation", "en", "dataset:EleutherAI/pile", "base_model:EleutherAI/pythia-1.4b-deduped", "base_model:quantized:EleutherAI/pythia-1.4b-deduped", "license:apache-2.0", "8-bit", "region:us" ]
text-generation
2025-08-11T21:31:02Z
--- language: - en tags: - pytorch - causal-lm - pythia - autoround - intel-autoround - auto-round - intel - woq - eleutheraI license: apache-2.0 model_name: Pythia 1.4b deduped base_model: EleutherAI/pythia-1.4b-deduped inference: false model_creator: EleutherAI datasets: - EleutherAI/pile pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: fbaldassarri --- ## Model Information Quantized version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/fbaldassarri/EleutherAI/pythia-1.4b-deduped) using torch.float32 for quantization tuning. - 8 bits (INT8) - group size = 64 - Symmetrical Quantization - Method WoQ: SignRound (AutoRound algorithm) Fast and low memory, 2-3X speedup (slight accuracy drop at W8G64) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1 Note: this INT8 version of pythia-1.4b-deduped has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or a Conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz tar -xvzf v0.5.1.tar.gz cd auto-round-0.5.1 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build the Intel AutoRound wheel from source ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "EleutherAI/pythia-1.4b-deduped" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 8, 64, True, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autoround-int8-gs64-sym" autoround.save_quantized(output_dir, format='auto_round', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
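For inference on CPU, a minimal loading sketch following the Intel AutoRound documentation; treat the exact arguments as assumptions for your installed version (importing `AutoRoundConfig` registers the `auto_round` format with transformers):

```python
from auto_round import AutoRoundConfig  # noqa: F401 -- registers the auto-round backend
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./AutoRound/EleutherAI_pythia-1.4b-deduped-autoround-int8-gs64-sym"
model = AutoModelForCausalLM.from_pretrained(path, device_map="cpu", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(path)

inputs = tokenizer("The Pile is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```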
Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2
Lorg0n
2025-08-11T21:37:17Z
0
1
sentence-transformers
[ "sentence-transformers", "tensorboard", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "ukrainian", "english", "anime", "hikka", "generated_from_trainer", "dataset_size:160039", "loss:MultipleNegativesRankingLoss", "hikka-forge", "uk", "en", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-11T14:27:39Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - ukrainian - english - anime - hikka - generated_from_trainer - dataset_size:160039 - loss:MultipleNegativesRankingLoss - hikka-forge base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 widget: - source_sentence: аніме про меланхолійну подорож після перемоги над королем демонів sentences: - 'Frieren: Beyond Journey''s End' - >- Під час своєї десятирічної подорожі з метою перемоги над Королем Демонів, члени загону героя - сам Гіммель, священник Гайтер, гном-воїн Айзен... - K-On! - source_sentence: a calming, healing 'iyashikei' anime about girls camping sentences: - Дівчачий табір△ - Мій сусід Тоторо - Атака Титанів pipeline_tag: sentence-similarity library_name: sentence-transformers license: apache-2.0 language: - uk - en --- # Hikka-Forge: Fine-tuned Multilingual Sentence Transformer for Anime Semantic Search (UA/EN)

This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`. It is specifically trained to map Ukrainian and English sentences and paragraphs from the **anime domain** into a 384-dimensional dense vector space.

The model is designed for tasks such as semantic search, textual similarity, and clustering within an anime context. It captures not only direct keywords but also abstract concepts, genres, and the overall atmosphere of a title. The training dataset was provided by [**hikka.io**](https://hikka.io), a comprehensive Ukrainian encyclopedia for anime, manga, and light novels.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`
- **Languages:** Ukrainian (uk), English (en)
- **Fine-tuning Dataset:** Proprietary dataset from [hikka.io](https://hikka.io)
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Repository:** [This model on Hugging Face](https://huggingface.co/Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2)
- **Original Model:** [paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)

## Usage

First, install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then, you can load the model and use it for semantic search or similarity tasks.

```python
from sentence_transformers import SentenceTransformer, util

# Download the model from the 🤗 Hub
model = SentenceTransformer("Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2")

# Example query (can be in Ukrainian or English)
query = "аніме про меланхолійну подорож після перемоги над королем демонів"
# "anime about a melancholic journey after defeating the demon king"

# A corpus of documents to search through
corpus = [
    "Frieren is an elf mage who was part of the hero's party that defeated the Demon King. After the journey, she witnesses her human companions pass away due to old age and embarks on a new journey to understand humanity.",
    "To Your Eternity follows an immortal being sent to Earth with no emotions nor identity. The being is able to take on the shape of those that leave a strong impression on it.",
    "K-On! is a lighthearted story about four high school girls who join the light music club to save it from being disbanded. They spend their days practicing, performing, and hanging out together."
]

# Encode the query and corpus into dense vector embeddings
query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Compute cosine similarity scores
cosine_scores = util.cos_sim(query_embedding, corpus_embeddings)

# Print the results
print(f"Query: {query}\n")
for i, score in enumerate(cosine_scores[0]):
    print(f"Similarity: {score:.4f}\t | Document: {corpus[i][:80]}...")

# Expected Output:
# Query: аніме про меланхолійну подорож після перемоги над королем демонів
#
# Similarity: 0.4013	 | Document: Frieren is an elf mage who was part of the hero's party that defeated the Demon ...
# Similarity: 0.1800	 | Document: To Your Eternity follows an immortal being sent to Earth with no emotions nor id...
# Similarity: 0.0091	 | Document: K-On! is a lighthearted story about four high school girls who join the light mu...
```

## Training Details

### Training Dataset

The model was fine-tuned on a proprietary, high-quality dataset from **[hikka.io](https://hikka.io)**, consisting of **177,822** carefully constructed training pairs. The dataset was engineered to teach the model several kinds of semantic relationships within the anime domain:

1. **Cross-lingual Connections (UA ↔ EN):**
   * Pairs of titles and their corresponding synopses in both languages (`ua_title` ↔ `en_synopsis`).
   * Pairs of titles in Ukrainian and English (`ua_title` ↔ `en_title`).
   * Pairs of translated genre names (`Бойовик` ↔ `Action`).
   * Pairs from an auxiliary translated dataset to augment bilingual understanding.

2. **Intra-lingual Connections (UA ↔ UA, EN ↔ EN):**
   * Pairs of key sentences (first, middle, last) from a synopsis with the full synopsis text. This teaches the model that a part is semantically related to the whole.

3. **Metadata & Synonymy Injection:**
   * Pairs linking all known titles of an anime (Ukrainian, English, Japanese, synonyms) to each other, teaching the model that they refer to the same entity.
   * Pairs linking genres and studios to anime titles to ground the model in relevant metadata.

**Loss Function:** The model was trained with `MultipleNegativesRankingLoss`, a highly effective objective for learning semantic similarity: the other examples in each batch serve as negative samples, so every batch is used efficiently. A minimal, illustrative training sketch follows the evaluation notes below.

### Evaluation

The fine-tuned model demonstrates a significantly improved understanding of domain-specific and abstract concepts compared to the base model. During evaluation, it showed:

- **Superior understanding of niche genres:** It correctly identified "Yuru Camp" (Дівчачий табір) for the query `"a calming, healing 'iyashikei' anime"`, while the base model returned more generic results.
- **Grasp of abstract concepts:** It correctly found "Magical Girl Site" for the query `"деконструкція жанру махо-шьоджьо, де дівчата-чарівниці страждають психологічно"` (deconstruction of the maho-shoujo genre where magical girls suffer psychologically).
- **Better atmospheric matching:** It assigned higher similarity to thematically similar anime (like "Frieren" and "To Your Eternity") and lower similarity to dissimilar ones, indicating a deeper contextual understanding.
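### Fine-tuning Sketch (Illustrative)

The snippet below is a minimal sketch of how a `MultipleNegativesRankingLoss` run with the hyperparameters listed in the next section could look using the classic `model.fit` API of sentence-transformers. It is not the original training script: the two training pairs are illustrative placeholders rather than rows from the hikka.io dataset, and the output path is hypothetical.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the same base checkpoint this model was fine-tuned from
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
model.max_seq_length = 128

# Placeholder (anchor, positive) pairs; the real dataset pairs titles, synopses,
# key sentences, genres, and studios as described under "Training Dataset" above
train_examples = [
    InputExample(texts=[
        "аніме про меланхолійну подорож після перемоги над королем демонів",
        "Frieren: Beyond Journey's End",
    ]),
    InputExample(texts=[
        "a calming, healing 'iyashikei' anime about girls camping",
        "Дівчачий табір△",
    ]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# In-batch negatives: every other example in the batch acts as a negative
train_loss = losses.MultipleNegativesRankingLoss(model)

num_epochs = 4
warmup_steps = int(0.1 * len(train_dataloader) * num_epochs)  # warmup_ratio: 0.1

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=num_epochs,
    warmup_steps=warmup_steps,
    optimizer_params={"lr": 2e-5},
    use_amp=True,  # fp16: True
    output_path="hikka-forge-minilm",  # hypothetical output directory
)
```

Because the negatives come from the batch itself, larger batches give each anchor more (and typically harder) negatives, which is why this loss generally benefits from the largest batch size that fits in memory.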
### Training Hyperparameters - `learning_rate`: 2e-05 - `per_device_train_batch_size`: 32 - `num_train_epochs`: 4 - `warmup_ratio`: 0.1 - `fp16`: True - `loss`: MultipleNegativesRankingLoss ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ```
hettad/blockassist-bc-pudgy_grazing_magpie_1754943842
hettad
2025-08-11T21:20:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pudgy grazing magpie", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T21:20:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pudgy grazing magpie --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rozer191292/blockassist-bc-playful_silky_raccoon_1754946624
rozer191292
2025-08-11T21:12:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful silky raccoon", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T21:12:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful silky raccoon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754945297
Sayemahsjn
2025-08-11T21:06:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-11T21:06:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).