modelId
stringlengths
5
139
author
stringlengths
2
42
last_modified
timestamp[us, tz=UTC]date
2020-02-15 11:33:14
2025-07-28 12:29:09
downloads
int64
0
223M
likes
int64
0
11.7k
library_name
stringclasses
534 values
tags
listlengths
1
4.05k
pipeline_tag
stringclasses
55 values
createdAt
timestamp[us, tz=UTC]date
2022-03-02 23:29:04
2025-07-28 12:26:21
card
stringlengths
11
1.01M
sanchit42/llama3.1-8B-instruct-29reports-lora256-slim
sanchit42
2025-06-19T14:59:44Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T14:56:01Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
samtse123/finetune_model
samtse123
2025-06-19T14:57:58Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "qwen3", "en", "base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "base_model:quantized:unsloth/Qwen3-1.7B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T13:55:09Z
--- base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** samtse123 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
fabikru/model_15M_pubchem_1M_ds_masking_0.3_predicted_hparams
fabikru
2025-06-19T14:52:32Z
0
0
transformers
[ "transformers", "safetensors", "modernbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-19T14:52:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
outlookAi/cazdEkhwYl
outlookAi
2025-06-19T14:51:58Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T14:35:12Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Thanyarat --- # Cazdekhwyl <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Thanyarat ` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Thanyarat ", "lora_weights": "https://huggingface.co/outlookAi/cazdEkhwYl/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('outlookAi/cazdEkhwYl', weight_name='lora.safetensors') image = pipeline('Thanyarat ').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1200 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/outlookAi/cazdEkhwYl/discussions) to add images that show off what you’ve made with this LoRA.
Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v8
Salmaalaa
2025-06-19T14:49:53Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:codellama/CodeLlama-7b-Instruct-hf", "base_model:finetune:codellama/CodeLlama-7b-Instruct-hf", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:03:30Z
--- base_model: codellama/CodeLlama-7b-Instruct-hf library_name: transformers model_name: CodeLlama-7b-Instruct_AR2SQL_v8 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for CodeLlama-7b-Instruct_AR2SQL_v8 This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Salmaalaa/CodeLlama-7b-Instruct_AR2SQL_v8", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
wolfCuanhamaRWS/Llama-Primus-Reasoning_q4_k_m_gguf
wolfCuanhamaRWS
2025-06-19T14:48:03Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "thesis_quant", "q4_k_m_gguf", "text-classification", "en", "arxiv:2501.18492", "base_model:meta-llama/Llama-3.2-1B", "base_model:quantized:meta-llama/Llama-3.2-1B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-classification
2025-06-19T14:44:30Z
--- library_name: transformers license: other base_model: meta-llama/Llama-3.2-1B tags: - llama-factory - full - generated_from_trainer - thesis_quant - q4_k_m_gguf pipeline_tag: text-classification language: - en metrics: - f1 model-index: - name: GuardReasoner 1B results: [] --- # GuardReasoner 1B This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492). The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain). Code: https://github.com/yueliu1999/GuardReasoner/ # Usage ``` import re from vllm import LLM, SamplingParams INSTUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n" def post_process(text): text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE) text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE) text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE) return text def generate(vllm_model, prompt_list=[""], response_list=["None"]): input_list = [] for i in range(len(prompt_list)): input = INSTUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n" input_list.append(input) outputs = vllm_model.generate(input_list, sampling_params) return outputs vllm_model = LLM(model=f"yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256) sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048) prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."] response_list = ["""Dear LinkedIn friends, Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely. The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day. It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly. I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection. Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change. Sincerely, Mark """] output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text) print(output) ``` # Citation ``` @article{GuardReasoner, title={GuardReasoner: Towards Reasoning-based LLM Safeguards}, author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan}, journal={arXiv preprint arXiv:2501.18492}, year={2025} } ```
alakxender/flan-t5-base-alpaca-dv5
alakxender
2025-06-19T14:47:02Z
136
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "dhivehi", "gpt", "llm", "thaana", "text-gen", "dv", "dataset:alakxender/alpaca_dhivehi", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-31T08:18:13Z
--- library_name: transformers tags: - dhivehi - gpt - llm - thaana - text-gen license: mit datasets: - alakxender/alpaca_dhivehi language: - dv metrics: - rouge base_model: - google/flan-t5-base --- # Alpaca Dhivehi Fine-Tuned Flan-T5 This repository contains a **fine-tuned Flan-T5** model on the **Alpaca Dhivehi dataset**, aimed at enabling Dhivehi language instruction-following tasks. ***Note: The model can follow instructions and inputs to some extent, but it’s not strictly trained for perfect adherence. Outputs may be partially aligned but are not guaranteed to be fully accurate. Treat results as experimental.*** ## Model Details - **Base model**: `google/flan-t5-small` (or whichever size you used) - **Dataset**: Alpaca Dhivehi , Translation from English to Dhivehi - **Training epochs**: 5 - **Final evaluation**: - `eval_loss`: 2.59 - `ROUGE-1`: 0.10 - `ROUGE-2`: 0.03 - `ROUGE-L`: 0.107 ## Usage To **run inference** using the fine-tuned model: ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration MODEL_PATH = "alakxender/flan-t5-base-alpaca-dv5" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer = T5Tokenizer.from_pretrained(MODEL_PATH) model = T5ForConditionalGeneration.from_pretrained(MODEL_PATH).to(device) def generate_response(instruction, input_text): combined_input = f"{instruction.strip()} {input_text.strip()}" if input_text else instruction.strip() inputs = tokenizer(combined_input, return_tensors="pt", truncation=True, max_length=256).to(device) output_ids = model.generate( **inputs, max_new_tokens=256, num_beams=8, repetition_penalty=1.5, no_repeat_ngram_size=3, do_sample=True, early_stopping=True, temperature=0.1 ) decoded_output = tokenizer.decode(output_ids[0], skip_special_tokens=True) return decoded_output # Example usage: instruction = "ދީފައިވާ މައުޟޫޢާ ބެހޭގޮތުން ކުރު ޕެރެގްރާފެއް ލިޔެލާށެވެ." input_text = "އިއާދަކުރަނިވި ހަކަތަ ބޭނުންކުރުމުގެ މުހިންމުކަން" print(generate_response(instruction, input_text)) އިއާދަކުރަނިވި ހަކަތަ ބޭނުންކުރުމުގެ މުހިންމު އެއް މައުޟޫއަކީ ސޯލާ، ވިންޑް، ހައިޑްރޯ، ޖިއޮތަރމަލް، އަދި ހައިޑްރޯއިލެކްޓްރިކް ޕަވަރ ފަދަ އިއާދަކުރަނިވި ހަކަތައިން ގްރީންހައުސް ގޭސްތައް ބޭރުވުން ..... ``` ## Evaluation Results From the last evaluation: ``` { 'eval_loss': 2.591374158859253, 'eval_rouge1': 0.10920254665663279, 'eval_rouge2': 0.03587297080345582, 'eval_rougeL': 0.10796498746412672, 'eval_rougeLsum': 0.1083282268650986, 'eval_runtime': 1204.3847, 'eval_samples_per_second': 4.298, 'eval_steps_per_second': 2.149, 'epoch': 5.0 } ``` ## Notes - This fine-tuned model is experimental and intended for research on Dhivehi-language instruction-following tasks.
N-Bot-Int/ZoraBetaA2
N-Bot-Int
2025-06-19T14:42:45Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "en", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:apache-2.0", "region:us" ]
null
2025-06-09T07:13:47Z
--- library_name: peft license: apache-2.0 base_model: - HuggingFaceH4/zephyr-7b-beta tags: - trl - sft - unsloth - generated_from_trainer model-index: - name: ZoraBetaA1 results: [] language: - en --- Support us On **KO-FI** [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/J3J61D8NHV) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/l4aVk8hOAXXeCoGZHMeE2.png) **ZoraBetaA family** # ZoraBetaA2 - EmpressOfRoleplay - ZoraBetaA2 is Our Brand new AI Model, finetuned from Our A1 model using [Iris-Uncensored-Reformat-R2](https://huggingface.co/datasets/N-Bot-Int/Iris-Uncensored-Reformat-R2?not-for-all-audiences=true) with higher step, ZoraBetaA2 showcase a Strong Roleplaying Capability With an Even Stronger Finetuned Bias toward Roleplaying Using **Zephyr Beta 7B**, ZoraBetaA2 also Shows a Great Roleplaying Capabilities, Without Hallucinating Much Unlike MistThena7B Finetuned Using Mistral 7b v0.1, ZoraBetaA2 Is A Much Biased To Roleplay AI Model, compared the A1 model, due to this, the A2 Model failed at Other Purpose Beyond Roleplaying This New Architecture allow us To Increase Roleplaying capabilities without Doing everything from scratch as **Zephyr Beta** has a Strong RP foundation already, Leading us to Scaffolding on this Architecture And Increasing Roleplaying capabilities further. - ZoraBetaA2 contains Cleaned Dataset, however its still relatively Unstable so please Report any issues found through our email [[email protected]]([email protected]) about any overfitting, or improvements for the future Models Once again feel free to Modify the LORA to your likings, However please consider Adding this Page for credits and if you'll increase its **Dataset**, then please handle it with care and ethical considerations - ZoraBetaA2 is - **Developed by:** N-Bot-Int - **License:** apache-2.0 - **Parent Model from model:** HuggingFaceH4/zephyr-7b-beta - **Dataset Combined Using:** UltraDatasetCleanerAndMoshpit-R1(Propietary Software) - # Notice - **For a Good Experience, Please use** - Low temperature 1.5, min_p = 0.1 and max_new_tokens = 128 - # Detail card: - Parameter - 7 Billion Parameters - (Please visit your GPU Vendor if you can Run 3B models) - Training - 300 Steps from Iris-Dataset-Reformat-R1 - Finetuning tool: - Unsloth AI - This Zephyr model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) - Fine-tuned Using: - Google Colab
l0tr1k/photography-mistral-16bit-merged-new
l0tr1k
2025-06-19T14:37:07Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T14:33:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wkang123/WellKang-v0.1.1.1
wkang123
2025-06-19T14:26:19Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T14:26:08Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** wkang123 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-2
tomaarsen
2025-06-19T14:25:19Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sparse-encoder", "sparse", "csr", "generated_from_trainer", "dataset_size:99000", "loss:CSRLoss", "loss:SparseMultipleNegativesRankingLoss", "feature-extraction", "en", "dataset:sentence-transformers/natural-questions", "arxiv:1908.10084", "arxiv:2503.01776", "arxiv:1705.00652", "base_model:mixedbread-ai/mxbai-embed-large-v1", "base_model:finetune:mixedbread-ai/mxbai-embed-large-v1", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-06-19T14:25:12Z
--- language: - en license: apache-2.0 tags: - sentence-transformers - sparse-encoder - sparse - csr - generated_from_trainer - dataset_size:99000 - loss:CSRLoss - loss:SparseMultipleNegativesRankingLoss base_model: mixedbread-ai/mxbai-embed-large-v1 widget: - text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia continue to take somewhat differing stances on regional conflicts such the Yemeni Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement, which has fought against Saudi-backed forces, and the Syrian Civil War, where the UAE has disagreed with Saudi support for Islamist movements.[4] - text: Economy of New Zealand New Zealand's diverse market economy has a sizable service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale manufacturing industries include aluminium production, food processing, metal fabrication, wood and paper products. Mining, manufacturing, electricity, gas, water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary sector continues to dominate New Zealand's exports, despite accounting for 6.5% of GDP in 2013.[17] - text: who was the first president of indian science congress meeting held in kolkata in 1914 - text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as a single after a fourteen-year breakup. It was also the first song written by bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was played live for the first time during their Hell Freezes Over tour in 1994. It returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream Rock Tracks chart. The song was not played live by the Eagles after the "Hell Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S. - text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.' datasets: - sentence-transformers/natural-questions pipeline_tag: feature-extraction library_name: sentence-transformers metrics: - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 - query_active_dims - query_sparsity_ratio - corpus_active_dims - corpus_sparsity_ratio co2_eq_emissions: emissions: 53.0273650168183 energy_consumed: 0.13642164181511365 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K ram_total_size: 31.777088165283203 hours_used: 0.41 hardware_used: 1 x NVIDIA GeForce RTX 3090 model-index: - name: Sparse CSR model trained on Natural Questions results: - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoMSMARCO 128 type: NanoMSMARCO_128 metrics: - type: dot_accuracy@1 value: 0.38 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.66 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.72 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.82 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.38 name: Dot Precision@1 - type: dot_precision@3 value: 0.22 name: Dot Precision@3 - type: dot_precision@5 value: 0.14400000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.08199999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.38 name: Dot Recall@1 - type: dot_recall@3 value: 0.66 name: Dot Recall@3 - type: dot_recall@5 value: 0.72 name: Dot Recall@5 - type: dot_recall@10 value: 0.82 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6074833126260415 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5392698412698412 name: Dot Mrr@10 - type: dot_map@100 value: 0.5478391044500884 name: Dot Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNFCorpus 128 type: NanoNFCorpus_128 metrics: - type: dot_accuracy@1 value: 0.44 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.54 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.64 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.68 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.44 name: Dot Precision@1 - type: dot_precision@3 value: 0.3133333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.28 name: Dot Precision@5 - type: dot_precision@10 value: 0.24600000000000002 name: Dot Precision@10 - type: dot_recall@1 value: 0.045132854073603 name: Dot Recall@1 - type: dot_recall@3 value: 0.06751477851868476 name: Dot Recall@3 - type: dot_recall@5 value: 0.08765169300408888 name: Dot Recall@5 - type: dot_recall@10 value: 0.12035202437952344 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.3037747903284991 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5081904761904761 name: Dot Mrr@10 - type: dot_map@100 value: 0.13867493157888547 name: Dot Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNQ 128 type: NanoNQ_128 metrics: - type: dot_accuracy@1 value: 0.48 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.66 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.7 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.84 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.48 name: Dot Precision@1 - type: dot_precision@3 value: 0.22666666666666668 name: Dot Precision@3 - type: dot_precision@5 value: 0.14800000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.08999999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.45 name: Dot Recall@1 - type: dot_recall@3 value: 0.62 name: Dot Recall@3 - type: dot_recall@5 value: 0.67 name: Dot Recall@5 - type: dot_recall@10 value: 0.81 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6337677207897237 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5932936507936507 name: Dot Mrr@10 - type: dot_map@100 value: 0.5761859932841973 name: Dot Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-nano-beir name: Sparse Nano BEIR dataset: name: NanoBEIR mean 128 type: NanoBEIR_mean_128 metrics: - type: dot_accuracy@1 value: 0.43333333333333335 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.6200000000000001 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.6866666666666665 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.7799999999999999 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.43333333333333335 name: Dot Precision@1 - type: dot_precision@3 value: 0.25333333333333335 name: Dot Precision@3 - type: dot_precision@5 value: 0.19066666666666668 name: Dot Precision@5 - type: dot_precision@10 value: 0.13933333333333334 name: Dot Precision@10 - type: dot_recall@1 value: 0.2917109513578677 name: Dot Recall@1 - type: dot_recall@3 value: 0.44917159283956165 name: Dot Recall@3 - type: dot_recall@5 value: 0.49255056433469635 name: Dot Recall@5 - type: dot_recall@10 value: 0.5834506747931745 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.5150086079147548 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5469179894179893 name: Dot Mrr@10 - type: dot_map@100 value: 0.42090000977105707 name: Dot Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoMSMARCO 256 type: NanoMSMARCO_256 metrics: - type: dot_accuracy@1 value: 0.44 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.64 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.74 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.84 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.44 name: Dot Precision@1 - type: dot_precision@3 value: 0.21333333333333332 name: Dot Precision@3 - type: dot_precision@5 value: 0.14800000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.08399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.44 name: Dot Recall@1 - type: dot_recall@3 value: 0.64 name: Dot Recall@3 - type: dot_recall@5 value: 0.74 name: Dot Recall@5 - type: dot_recall@10 value: 0.84 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6405150998246686 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5768809523809523 name: Dot Mrr@10 - type: dot_map@100 value: 0.5851061967133396 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNFCorpus 256 type: NanoNFCorpus_256 metrics: - type: dot_accuracy@1 value: 0.42 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.58 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.6 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.62 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.42 name: Dot Precision@1 - type: dot_precision@3 value: 0.37333333333333324 name: Dot Precision@3 - type: dot_precision@5 value: 0.324 name: Dot Precision@5 - type: dot_precision@10 value: 0.248 name: Dot Precision@10 - type: dot_recall@1 value: 0.045123947439696374 name: Dot Recall@1 - type: dot_recall@3 value: 0.08083248635236362 name: Dot Recall@3 - type: dot_recall@5 value: 0.0993952531376598 name: Dot Recall@5 - type: dot_recall@10 value: 0.1259275313458498 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.3181127342430942 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5041666666666667 name: Dot Mrr@10 - type: dot_map@100 value: 0.15847418838222901 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNQ 256 type: NanoNQ_256 metrics: - type: dot_accuracy@1 value: 0.54 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.8 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.84 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.54 name: Dot Precision@1 - type: dot_precision@3 value: 0.24 name: Dot Precision@3 - type: dot_precision@5 value: 0.16799999999999998 name: Dot Precision@5 - type: dot_precision@10 value: 0.092 name: Dot Precision@10 - type: dot_recall@1 value: 0.51 name: Dot Recall@1 - type: dot_recall@3 value: 0.66 name: Dot Recall@3 - type: dot_recall@5 value: 0.75 name: Dot Recall@5 - type: dot_recall@10 value: 0.81 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6642484604451891 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6294126984126983 name: Dot Mrr@10 - type: dot_map@100 value: 0.6162769242153361 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-nano-beir name: Sparse Nano BEIR dataset: name: NanoBEIR mean 256 type: NanoBEIR_mean_256 metrics: - type: dot_accuracy@1 value: 0.4666666666666666 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.64 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.7133333333333333 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.7666666666666666 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.4666666666666666 name: Dot Precision@1 - type: dot_precision@3 value: 0.2755555555555555 name: Dot Precision@3 - type: dot_precision@5 value: 0.21333333333333335 name: Dot Precision@5 - type: dot_precision@10 value: 0.1413333333333333 name: Dot Precision@10 - type: dot_recall@1 value: 0.3317079824798988 name: Dot Recall@1 - type: dot_recall@3 value: 0.46027749545078783 name: Dot Recall@3 - type: dot_recall@5 value: 0.5297984177125533 name: Dot Recall@5 - type: dot_recall@10 value: 0.5919758437819499 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.5409587648376507 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.570153439153439 name: Dot Mrr@10 - type: dot_map@100 value: 0.4532857697703016 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoClimateFEVER type: NanoClimateFEVER metrics: - type: dot_accuracy@1 value: 0.28 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.52 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.7 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.8 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.28 name: Dot Precision@1 - type: dot_precision@3 value: 0.18666666666666668 name: Dot Precision@3 - type: dot_precision@5 value: 0.16799999999999998 name: Dot Precision@5 - type: dot_precision@10 value: 0.10799999999999997 name: Dot Precision@10 - type: dot_recall@1 value: 0.12166666666666665 name: Dot Recall@1 - type: dot_recall@3 value: 0.23233333333333334 name: Dot Recall@3 - type: dot_recall@5 value: 0.348 name: Dot Recall@5 - type: dot_recall@10 value: 0.42633333333333334 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.33235923006734097 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.43644444444444447 name: Dot Mrr@10 - type: dot_map@100 value: 0.24903211945618525 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoDBPedia type: NanoDBPedia metrics: - type: dot_accuracy@1 value: 0.8 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.9 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.9 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.92 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.8 name: Dot Precision@1 - type: dot_precision@3 value: 0.6466666666666666 name: Dot Precision@3 - type: dot_precision@5 value: 0.56 name: Dot Precision@5 - type: dot_precision@10 value: 0.474 name: Dot Precision@10 - type: dot_recall@1 value: 0.09128542236179474 name: Dot Recall@1 - type: dot_recall@3 value: 0.17409405829521904 name: Dot Recall@3 - type: dot_recall@5 value: 0.22516141018064886 name: Dot Recall@5 - type: dot_recall@10 value: 0.321390285824061 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.600179050204524 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.8425 name: Dot Mrr@10 - type: dot_map@100 value: 0.45264984932006563 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoFEVER type: NanoFEVER metrics: - type: dot_accuracy@1 value: 0.84 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.92 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.96 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.96 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.84 name: Dot Precision@1 - type: dot_precision@3 value: 0.32 name: Dot Precision@3 - type: dot_precision@5 value: 0.19999999999999996 name: Dot Precision@5 - type: dot_precision@10 value: 0.09999999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.7866666666666667 name: Dot Recall@1 - type: dot_recall@3 value: 0.8866666666666667 name: Dot Recall@3 - type: dot_recall@5 value: 0.9266666666666667 name: Dot Recall@5 - type: dot_recall@10 value: 0.9266666666666667 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.8816129048397259 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.89 name: Dot Mrr@10 - type: dot_map@100 value: 0.8589881484317317 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoFiQA2018 type: NanoFiQA2018 metrics: - type: dot_accuracy@1 value: 0.48 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.6 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.64 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.74 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.48 name: Dot Precision@1 - type: dot_precision@3 value: 0.3066666666666667 name: Dot Precision@3 - type: dot_precision@5 value: 0.22399999999999998 name: Dot Precision@5 - type: dot_precision@10 value: 0.13599999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.2592460317460317 name: Dot Recall@1 - type: dot_recall@3 value: 0.39734920634920634 name: Dot Recall@3 - type: dot_recall@5 value: 0.4497857142857143 name: Dot Recall@5 - type: dot_recall@10 value: 0.5795634920634921 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.48812055653800884 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5517460317460319 name: Dot Mrr@10 - type: dot_map@100 value: 0.42554170336694114 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoHotpotQA type: NanoHotpotQA metrics: - type: dot_accuracy@1 value: 0.84 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.96 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.96 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.98 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.84 name: Dot Precision@1 - type: dot_precision@3 value: 0.5133333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.32799999999999996 name: Dot Precision@5 - type: dot_precision@10 value: 0.16999999999999996 name: Dot Precision@10 - type: dot_recall@1 value: 0.42 name: Dot Recall@1 - type: dot_recall@3 value: 0.77 name: Dot Recall@3 - type: dot_recall@5 value: 0.82 name: Dot Recall@5 - type: dot_recall@10 value: 0.85 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.8106522538764799 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.8966666666666666 name: Dot Mrr@10 - type: dot_map@100 value: 0.7565706035126855 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoMSMARCO type: NanoMSMARCO metrics: - type: dot_accuracy@1 value: 0.44 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.62 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.74 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.84 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.44 name: Dot Precision@1 - type: dot_precision@3 value: 0.20666666666666667 name: Dot Precision@3 - type: dot_precision@5 value: 0.14800000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.08399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.44 name: Dot Recall@1 - type: dot_recall@3 value: 0.62 name: Dot Recall@3 - type: dot_recall@5 value: 0.74 name: Dot Recall@5 - type: dot_recall@10 value: 0.84 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6329477813439243 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5677777777777777 name: Dot Mrr@10 - type: dot_map@100 value: 0.5762304873870092 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNFCorpus type: NanoNFCorpus metrics: - type: dot_accuracy@1 value: 0.42 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.56 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.64 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.66 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.42 name: Dot Precision@1 - type: dot_precision@3 value: 0.37999999999999995 name: Dot Precision@3 - type: dot_precision@5 value: 0.34800000000000003 name: Dot Precision@5 - type: dot_precision@10 value: 0.258 name: Dot Precision@10 - type: dot_recall@1 value: 0.04486258380333274 name: Dot Recall@1 - type: dot_recall@3 value: 0.08768477299713343 name: Dot Recall@3 - type: dot_recall@5 value: 0.10844641112515632 name: Dot Recall@5 - type: dot_recall@10 value: 0.135531563356284 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.3285187113745097 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5009999999999999 name: Dot Mrr@10 - type: dot_map@100 value: 0.16174125549238802 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNQ type: NanoNQ metrics: - type: dot_accuracy@1 value: 0.58 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.8 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.82 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.58 name: Dot Precision@1 - type: dot_precision@3 value: 0.24 name: Dot Precision@3 - type: dot_precision@5 value: 0.16799999999999998 name: Dot Precision@5 - type: dot_precision@10 value: 0.08999999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.55 name: Dot Recall@1 - type: dot_recall@3 value: 0.66 name: Dot Recall@3 - type: dot_recall@5 value: 0.75 name: Dot Recall@5 - type: dot_recall@10 value: 0.79 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.677342414343143 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6521666666666666 name: Dot Mrr@10 - type: dot_map@100 value: 0.6420660106369513 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoQuoraRetrieval type: NanoQuoraRetrieval metrics: - type: dot_accuracy@1 value: 0.9 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 1.0 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 1.0 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 1.0 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.9 name: Dot Precision@1 - type: dot_precision@3 value: 0.4133333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.27199999999999996 name: Dot Precision@5 - type: dot_precision@10 value: 0.13799999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.7773333333333333 name: Dot Recall@1 - type: dot_recall@3 value: 0.9620000000000001 name: Dot Recall@3 - type: dot_recall@5 value: 0.9933333333333334 name: Dot Recall@5 - type: dot_recall@10 value: 0.9966666666666666 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.9509657098958008 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.9466666666666665 name: Dot Mrr@10 - type: dot_map@100 value: 0.9297051282051282 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoSCIDOCS type: NanoSCIDOCS metrics: - type: dot_accuracy@1 value: 0.42 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.72 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.82 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.88 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.42 name: Dot Precision@1 - type: dot_precision@3 value: 0.35333333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.3 name: Dot Precision@5 - type: dot_precision@10 value: 0.20800000000000002 name: Dot Precision@10 - type: dot_recall@1 value: 0.09066666666666666 name: Dot Recall@1 - type: dot_recall@3 value: 0.22166666666666665 name: Dot Recall@3 - type: dot_recall@5 value: 0.3096666666666667 name: Dot Recall@5 - type: dot_recall@10 value: 0.42566666666666664 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.4022717287490821 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5887222222222221 name: Dot Mrr@10 - type: dot_map@100 value: 0.32075091248131626 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoArguAna type: NanoArguAna metrics: - type: dot_accuracy@1 value: 0.38 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.8 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.92 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.38 name: Dot Precision@1 - type: dot_precision@3 value: 0.23333333333333336 name: Dot Precision@3 - type: dot_precision@5 value: 0.16 name: Dot Precision@5 - type: dot_precision@10 value: 0.092 name: Dot Precision@10 - type: dot_recall@1 value: 0.38 name: Dot Recall@1 - type: dot_recall@3 value: 0.7 name: Dot Recall@3 - type: dot_recall@5 value: 0.8 name: Dot Recall@5 - type: dot_recall@10 value: 0.92 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6550827948648061 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5706349206349206 name: Dot Mrr@10 - type: dot_map@100 value: 0.5760927960927961 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoSciFact type: NanoSciFact metrics: - type: dot_accuracy@1 value: 0.62 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.72 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.76 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.84 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.62 name: Dot Precision@1 - type: dot_precision@3 value: 0.26666666666666666 name: Dot Precision@3 - type: dot_precision@5 value: 0.17199999999999996 name: Dot Precision@5 - type: dot_precision@10 value: 0.09599999999999997 name: Dot Precision@10 - type: dot_recall@1 value: 0.595 name: Dot Recall@1 - type: dot_recall@3 value: 0.705 name: Dot Recall@3 - type: dot_recall@5 value: 0.755 name: Dot Recall@5 - type: dot_recall@10 value: 0.84 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.7193800580696723 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6823888888888889 name: Dot Mrr@10 - type: dot_map@100 value: 0.6850911930363545 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoTouche2020 type: NanoTouche2020 metrics: - type: dot_accuracy@1 value: 0.4897959183673469 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.8367346938775511 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.9591836734693877 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.9795918367346939 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.4897959183673469 name: Dot Precision@1 - type: dot_precision@3 value: 0.5170068027210885 name: Dot Precision@3 - type: dot_precision@5 value: 0.5346938775510204 name: Dot Precision@5 - type: dot_precision@10 value: 0.4346938775510204 name: Dot Precision@10 - type: dot_recall@1 value: 0.03422245985964837 name: Dot Recall@1 - type: dot_recall@3 value: 0.10897367065265 name: Dot Recall@3 - type: dot_recall@5 value: 0.18115391425134045 name: Dot Recall@5 - type: dot_recall@10 value: 0.2884686031356881 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.47678328743473813 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6784580498866212 name: Dot Mrr@10 - type: dot_map@100 value: 0.3590479959667369 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-nano-beir name: Sparse Nano BEIR dataset: name: NanoBEIR mean type: NanoBEIR_mean metrics: - type: dot_accuracy@1 value: 0.576138147566719 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7505180533751962 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.821475667189953 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.8722762951334379 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.576138147566719 name: Dot Precision@1 - type: dot_precision@3 value: 0.3525902668759811 name: Dot Precision@3 - type: dot_precision@5 value: 0.27559183673469384 name: Dot Precision@5 - type: dot_precision@10 value: 0.18374568288854 name: Dot Precision@10 - type: dot_recall@1 value: 0.35314998700801087 name: Dot Recall@1 - type: dot_recall@3 value: 0.5019821826892981 name: Dot Recall@3 - type: dot_recall@5 value: 0.5697857012699635 name: Dot Recall@5 - type: dot_recall@10 value: 0.6415605598240661 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6120166524309044 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6773209488923774 name: Dot Mrr@10 - type: dot_map@100 value: 0.5379621694912531 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio --- # Sparse CSR model trained on Natural Questions This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval. ## Model Details ### Model Description - **Model Type:** CSR Sparse Encoder - **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions) - **Similarity Function:** Dot Product - **Training Dataset:** - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder) ### Full Model Architecture ``` SparseEncoder( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SparseEncoder # Download from the 🤗 Hub model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-updated-reconstruction-2") # Run inference queries = [ "who is cornelius in the book of acts", ] documents = [ 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.', "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]", 'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 4096] [3, 4096] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[118.6570, 32.2072, 21.3971]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Sparse Information Retrieval * Datasets: `NanoMSMARCO_128`, `NanoNFCorpus_128` and `NanoNQ_128` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 128 } ``` | Metric | NanoMSMARCO_128 | NanoNFCorpus_128 | NanoNQ_128 | |:----------------------|:----------------|:-----------------|:-----------| | dot_accuracy@1 | 0.38 | 0.44 | 0.48 | | dot_accuracy@3 | 0.66 | 0.54 | 0.66 | | dot_accuracy@5 | 0.72 | 0.64 | 0.7 | | dot_accuracy@10 | 0.82 | 0.68 | 0.84 | | dot_precision@1 | 0.38 | 0.44 | 0.48 | | dot_precision@3 | 0.22 | 0.3133 | 0.2267 | | dot_precision@5 | 0.144 | 0.28 | 0.148 | | dot_precision@10 | 0.082 | 0.246 | 0.09 | | dot_recall@1 | 0.38 | 0.0451 | 0.45 | | dot_recall@3 | 0.66 | 0.0675 | 0.62 | | dot_recall@5 | 0.72 | 0.0877 | 0.67 | | dot_recall@10 | 0.82 | 0.1204 | 0.81 | | **dot_ndcg@10** | **0.6075** | **0.3038** | **0.6338** | | dot_mrr@10 | 0.5393 | 0.5082 | 0.5933 | | dot_map@100 | 0.5478 | 0.1387 | 0.5762 | | query_active_dims | 128.0 | 128.0 | 128.0 | | query_sparsity_ratio | 0.9688 | 0.9688 | 0.9688 | | corpus_active_dims | 128.0 | 128.0 | 128.0 | | corpus_sparsity_ratio | 0.9688 | 0.9688 | 0.9688 | #### Sparse Nano BEIR * Dataset: `NanoBEIR_mean_128` * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters: ```json { "dataset_names": [ "msmarco", "nfcorpus", "nq" ], "max_active_dims": 128 } ``` | Metric | Value | |:----------------------|:----------| | dot_accuracy@1 | 0.4333 | | dot_accuracy@3 | 0.62 | | dot_accuracy@5 | 0.6867 | | dot_accuracy@10 | 0.78 | | dot_precision@1 | 0.4333 | | dot_precision@3 | 0.2533 | | dot_precision@5 | 0.1907 | | dot_precision@10 | 0.1393 | | dot_recall@1 | 0.2917 | | dot_recall@3 | 0.4492 | | dot_recall@5 | 0.4926 | | dot_recall@10 | 0.5835 | | **dot_ndcg@10** | **0.515** | | dot_mrr@10 | 0.5469 | | dot_map@100 | 0.4209 | | query_active_dims | 128.0 | | query_sparsity_ratio | 0.9688 | | corpus_active_dims | 128.0 | | corpus_sparsity_ratio | 0.9688 | #### Sparse Information Retrieval * Datasets: `NanoMSMARCO_256`, `NanoNFCorpus_256` and `NanoNQ_256` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 256 } ``` | Metric | NanoMSMARCO_256 | NanoNFCorpus_256 | NanoNQ_256 | |:----------------------|:----------------|:-----------------|:-----------| | dot_accuracy@1 | 0.44 | 0.42 | 0.54 | | dot_accuracy@3 | 0.64 | 0.58 | 0.7 | | dot_accuracy@5 | 0.74 | 0.6 | 0.8 | | dot_accuracy@10 | 0.84 | 0.62 | 0.84 | | dot_precision@1 | 0.44 | 0.42 | 0.54 | | dot_precision@3 | 0.2133 | 0.3733 | 0.24 | | dot_precision@5 | 0.148 | 0.324 | 0.168 | | dot_precision@10 | 0.084 | 0.248 | 0.092 | | dot_recall@1 | 0.44 | 0.0451 | 0.51 | | dot_recall@3 | 0.64 | 0.0808 | 0.66 | | dot_recall@5 | 0.74 | 0.0994 | 0.75 | | dot_recall@10 | 0.84 | 0.1259 | 0.81 | | **dot_ndcg@10** | **0.6405** | **0.3181** | **0.6642** | | dot_mrr@10 | 0.5769 | 0.5042 | 0.6294 | | dot_map@100 | 0.5851 | 0.1585 | 0.6163 | | query_active_dims | 256.0 | 256.0 | 256.0 | | query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | | corpus_active_dims | 256.0 | 256.0 | 256.0 | | corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | #### Sparse Nano BEIR * Dataset: `NanoBEIR_mean_256` * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters: ```json { "dataset_names": [ "msmarco", "nfcorpus", "nq" ], "max_active_dims": 256 } ``` | Metric | Value | |:----------------------|:----------| | dot_accuracy@1 | 0.4667 | | dot_accuracy@3 | 0.64 | | dot_accuracy@5 | 0.7133 | | dot_accuracy@10 | 0.7667 | | dot_precision@1 | 0.4667 | | dot_precision@3 | 0.2756 | | dot_precision@5 | 0.2133 | | dot_precision@10 | 0.1413 | | dot_recall@1 | 0.3317 | | dot_recall@3 | 0.4603 | | dot_recall@5 | 0.5298 | | dot_recall@10 | 0.592 | | **dot_ndcg@10** | **0.541** | | dot_mrr@10 | 0.5702 | | dot_map@100 | 0.4533 | | query_active_dims | 256.0 | | query_sparsity_ratio | 0.9375 | | corpus_active_dims | 256.0 | | corpus_sparsity_ratio | 0.9375 | #### Sparse Information Retrieval * Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) | Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 | |:----------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------| | dot_accuracy@1 | 0.28 | 0.8 | 0.84 | 0.48 | 0.84 | 0.44 | 0.42 | 0.58 | 0.9 | 0.42 | 0.38 | 0.62 | 0.4898 | | dot_accuracy@3 | 0.52 | 0.9 | 0.92 | 0.6 | 0.96 | 0.62 | 0.56 | 0.7 | 1.0 | 0.72 | 0.7 | 0.72 | 0.8367 | | dot_accuracy@5 | 0.7 | 0.9 | 0.96 | 0.64 | 0.96 | 0.74 | 0.64 | 0.8 | 1.0 | 0.82 | 0.8 | 0.76 | 0.9592 | | dot_accuracy@10 | 0.8 | 0.92 | 0.96 | 0.74 | 0.98 | 0.84 | 0.66 | 0.82 | 1.0 | 0.88 | 0.92 | 0.84 | 0.9796 | | dot_precision@1 | 0.28 | 0.8 | 0.84 | 0.48 | 0.84 | 0.44 | 0.42 | 0.58 | 0.9 | 0.42 | 0.38 | 0.62 | 0.4898 | | dot_precision@3 | 0.1867 | 0.6467 | 0.32 | 0.3067 | 0.5133 | 0.2067 | 0.38 | 0.24 | 0.4133 | 0.3533 | 0.2333 | 0.2667 | 0.517 | | dot_precision@5 | 0.168 | 0.56 | 0.2 | 0.224 | 0.328 | 0.148 | 0.348 | 0.168 | 0.272 | 0.3 | 0.16 | 0.172 | 0.5347 | | dot_precision@10 | 0.108 | 0.474 | 0.1 | 0.136 | 0.17 | 0.084 | 0.258 | 0.09 | 0.138 | 0.208 | 0.092 | 0.096 | 0.4347 | | dot_recall@1 | 0.1217 | 0.0913 | 0.7867 | 0.2592 | 0.42 | 0.44 | 0.0449 | 0.55 | 0.7773 | 0.0907 | 0.38 | 0.595 | 0.0342 | | dot_recall@3 | 0.2323 | 0.1741 | 0.8867 | 0.3973 | 0.77 | 0.62 | 0.0877 | 0.66 | 0.962 | 0.2217 | 0.7 | 0.705 | 0.109 | | dot_recall@5 | 0.348 | 0.2252 | 0.9267 | 0.4498 | 0.82 | 0.74 | 0.1084 | 0.75 | 0.9933 | 0.3097 | 0.8 | 0.755 | 0.1812 | | dot_recall@10 | 0.4263 | 0.3214 | 0.9267 | 0.5796 | 0.85 | 0.84 | 0.1355 | 0.79 | 0.9967 | 0.4257 | 0.92 | 0.84 | 0.2885 | | **dot_ndcg@10** | **0.3324** | **0.6002** | **0.8816** | **0.4881** | **0.8107** | **0.6329** | **0.3285** | **0.6773** | **0.951** | **0.4023** | **0.6551** | **0.7194** | **0.4768** | | dot_mrr@10 | 0.4364 | 0.8425 | 0.89 | 0.5517 | 0.8967 | 0.5678 | 0.501 | 0.6522 | 0.9467 | 0.5887 | 0.5706 | 0.6824 | 0.6785 | | dot_map@100 | 0.249 | 0.4526 | 0.859 | 0.4255 | 0.7566 | 0.5762 | 0.1617 | 0.6421 | 0.9297 | 0.3208 | 0.5761 | 0.6851 | 0.359 | | query_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | | query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | | corpus_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | | corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | #### Sparse Nano BEIR * Dataset: `NanoBEIR_mean` * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters: ```json { "dataset_names": [ "climatefever", "dbpedia", "fever", "fiqa2018", "hotpotqa", "msmarco", "nfcorpus", "nq", "quoraretrieval", "scidocs", "arguana", "scifact", "touche2020" ] } ``` | Metric | Value | |:----------------------|:----------| | dot_accuracy@1 | 0.5761 | | dot_accuracy@3 | 0.7505 | | dot_accuracy@5 | 0.8215 | | dot_accuracy@10 | 0.8723 | | dot_precision@1 | 0.5761 | | dot_precision@3 | 0.3526 | | dot_precision@5 | 0.2756 | | dot_precision@10 | 0.1837 | | dot_recall@1 | 0.3531 | | dot_recall@3 | 0.502 | | dot_recall@5 | 0.5698 | | dot_recall@10 | 0.6416 | | **dot_ndcg@10** | **0.612** | | dot_mrr@10 | 0.6773 | | dot_map@100 | 0.538 | | query_active_dims | 256.0 | | query_sparsity_ratio | 0.9375 | | corpus_active_dims | 256.0 | | corpus_sparsity_ratio | 0.9375 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### natural-questions * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 99,000 training samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> | * Samples: | query | answer | |:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> | | <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> | | <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> | * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters: ```json { "beta": 0.1, "gamma": 1.0, "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')" } ``` ### Evaluation Dataset #### natural-questions * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 1,000 evaluation samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | query | answer | |:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> | | <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> | | <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> | * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters: ```json { "beta": 0.1, "gamma": 1.0, "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 4e-05 - `num_train_epochs`: 1 - `bf16`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 4e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_128_dot_ndcg@10 | NanoNFCorpus_128_dot_ndcg@10 | NanoNQ_128_dot_ndcg@10 | NanoBEIR_mean_128_dot_ndcg@10 | NanoMSMARCO_256_dot_ndcg@10 | NanoNFCorpus_256_dot_ndcg@10 | NanoNQ_256_dot_ndcg@10 | NanoBEIR_mean_256_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | |:----------:|:--------:|:-------------:|:---------------:|:---------------------------:|:----------------------------:|:----------------------:|:-----------------------------:|:---------------------------:|:----------------------------:|:----------------------:|:-----------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:-----------------------:|:------------------------:|:------------------:|:------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:|:-------------------------:| | 0.0646 | 100 | 0.3565 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.1293 | 200 | 0.3568 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.1939 | 300 | 0.3545 | 0.3458 | 0.6322 | 0.2796 | 0.5893 | 0.5004 | 0.6232 | 0.3253 | 0.6548 | 0.5345 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.2586 | 400 | 0.3393 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.3232 | 500 | 0.3484 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.3878 | 600 | 0.3567 | 0.3452 | 0.6245 | 0.3038 | 0.5719 | 0.5000 | 0.6385 | 0.3375 | 0.6496 | 0.5419 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.4525 | 700 | 0.3471 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.5171 | 800 | 0.3582 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.5818 | 900 | 0.3758 | 0.3417 | 0.5849 | 0.3074 | 0.5866 | 0.4929 | 0.6147 | 0.3310 | 0.6729 | 0.5395 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.6464 | 1000 | 0.3515 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.7111 | 1100 | 0.3287 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | **0.7757** | **1200** | **0.3486** | **0.3314** | **0.5937** | **0.2998** | **0.6317** | **0.5084** | **0.6309** | **0.3303** | **0.6773** | **0.5462** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | | 0.8403 | 1300 | 0.3527 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.9050 | 1400 | 0.3161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.9696 | 1500 | 0.3279 | 0.3244 | 0.6075 | 0.3038 | 0.6338 | 0.5150 | 0.6405 | 0.3181 | 0.6642 | 0.5410 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | -1 | -1 | - | - | - | - | - | - | - | - | - | - | 0.3324 | 0.6002 | 0.8816 | 0.4881 | 0.8107 | 0.6329 | 0.3285 | 0.6773 | 0.9510 | 0.4023 | 0.6551 | 0.7194 | 0.4768 | 0.6120 | * The bold row denotes the saved checkpoint. ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.136 kWh - **Carbon Emitted**: 0.053 kg of CO2 - **Hours Used**: 0.41 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 4.2.0.dev0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.1 - Datasets: 2.21.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CSRLoss ```bibtex @misc{wen2025matryoshkarevisitingsparsecoding, title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation}, author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You}, year={2025}, eprint={2503.01776}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.01776}, } ``` #### SparseMultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
IFANSA5657/gasher453
IFANSA5657
2025-06-19T14:19:43Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5", "region:us" ]
text-to-image
2025-06-19T14:19:38Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/nick-iliasov-i0fCUofGjV8-unsplash.jpg base_model: stable-diffusion-v1-5/stable-diffusion-v1-5 instance_prompt: null --- # dsggs434657 <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/IFANSA5657/gasher453/tree/main) them in the Files & versions tab.
Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF
Rif010
2025-06-19T14:19:35Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:Rif010/sealion-burmese-fine-tuned-merged-v1", "base_model:quantized:Rif010/sealion-burmese-fine-tuned-merged-v1", "endpoints_compatible", "region:us" ]
null
2025-06-19T14:19:11Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: Rif010/sealion-burmese-fine-tuned-merged-v1 --- # Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF This model was converted to GGUF format from [`Rif010/sealion-burmese-fine-tuned-merged-v1`](https://huggingface.co/Rif010/sealion-burmese-fine-tuned-merged-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Rif010/sealion-burmese-fine-tuned-merged-v1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -c 2048 ```
johngreendr1/ce7a970e-7299-4e54-bf83-14b49ed32fd7
johngreendr1
2025-06-19T14:17:06Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Nous-Capybara-7B-V1.9", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9", "region:us" ]
null
2025-06-19T14:17:00Z
--- base_model: NousResearch/Nous-Capybara-7B-V1.9 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
cosmo3769/nanoVLM-test
cosmo3769
2025-06-19T14:14:26Z
0
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2025-06-19T14:13:36Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model. For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M. **Usage:** Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM. Follow the install instructions and run the following code: ```python from models.vision_language_model import VisionLanguageModel model = VisionLanguageModel.from_pretrained("cosmo3769/nanoVLM-test") ```
l0tr1k/photography-mistral-4bit-lora-adapter-new
l0tr1k
2025-06-19T14:11:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T14:10:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Alphatao/Affine-2501551
Alphatao
2025-06-19T14:09:26Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:2309.00071", "arxiv:2505.09388", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T14:03:22Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation base_model: - Qwen/Qwen3-8B-Base --- # Qwen3-8B <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Qwen3 Highlights Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Uniquely support of seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within single model**, ensuring optimal performance across various scenarios. - **Significantly enhancement in its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and unthinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-8B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 8.2B - Number of Paramaters (Non-Embedding): 6.95B - Number of Layers: 36 - Number of Attention Heads (GQA): 32 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code of Qwen3 has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following contains a code snippet illustrating how to use the model generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-8B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` or to create an OpenAI-compatible API endpoint: - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3 ``` - vLLM: ```shell vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1 ``` For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3. ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM. > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-8B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > [!NOTE] > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-8B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. # 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3technicalreport, title={Qwen3 Technical Report}, author={Qwen Team}, year={2025}, eprint={2505.09388}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.09388}, } ```
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-18-2025-06-19
morturr
2025-06-19T14:08:11Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T14:07:54Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-18-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-18-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
ik-ram28/MedMistral-CPT-7B
ik-ram28
2025-06-19T14:06:24Z
27
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "medical", "fr", "en", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-08T23:05:01Z
--- library_name: transformers tags: - medical license: apache-2.0 language: - fr - en base_model: - mistralai/Mistral-7B-v0.1 --- ### Model Description MedMistral-CPT-7B is a French medical language model based on Mistral-7B-v0.1, adapted for medical domain applications through Continual Pre-Training (CPT) on French medical texts. ### Model Details - **Model Type**: Causal Language Model - **Base Model**: Mistral-7B-v0.1 - **Language**: French - **Domain**: Medical/Healthcare - **Parameters**: 7 billion - **License**: Apache 2.0 ### Training Details **Continual Pre-Training (CPT)** - **Dataset**: NACHOS corpus (7.4 GB French medical texts) - **Training Duration**: 2.8 epochs - **Hardware**: 32 NVIDIA H100 80GB GPUs - **Training Time**: ~40 hours ### Computational Impact - **Carbon Emissions**: 9.86 kgCO2e - **Training Time**: 12 hours ### Ethical Considerations - **Medical Accuracy**: For research and educational purposes only - **Professional Oversight**: Requires verification by qualified medical professionals - **Bias Awareness**: May contain biases from training data - **Privacy**: Do not input private health information ### Citation ```bibtex ``` ### Contact For questions about these models, please contact: [email protected]
King-Cane/Virtuoso-Medium-v2-Q4_K_S-GGUF
King-Cane
2025-06-19T14:06:17Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:arcee-ai/Virtuoso-Medium-v2", "base_model:quantized:arcee-ai/Virtuoso-Medium-v2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-19T14:04:51Z
--- base_model: arcee-ai/Virtuoso-Medium-v2 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo license: apache-2.0 --- # King-Cane/Virtuoso-Medium-v2-Q4_K_S-GGUF This model was converted to GGUF format from [`arcee-ai/Virtuoso-Medium-v2`](https://huggingface.co/arcee-ai/Virtuoso-Medium-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/arcee-ai/Virtuoso-Medium-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo King-Cane/Virtuoso-Medium-v2-Q4_K_S-GGUF --hf-file virtuoso-medium-v2-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo King-Cane/Virtuoso-Medium-v2-Q4_K_S-GGUF --hf-file virtuoso-medium-v2-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo King-Cane/Virtuoso-Medium-v2-Q4_K_S-GGUF --hf-file virtuoso-medium-v2-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo King-Cane/Virtuoso-Medium-v2-Q4_K_S-GGUF --hf-file virtuoso-medium-v2-q4_k_s.gguf -c 2048 ```
freakyfractal/buser2
freakyfractal
2025-06-19T14:04:49Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-06-19T14:04:00Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/Coinye_2021.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: apache-2.0 --- # buser2 <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/freakyfractal/buser2/tree/main) them in the Files & versions tab.
phospho-app/praveen-merai-ACT_BBOX-so100_01-2vuis
phospho-app
2025-06-19T13:59:54Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-06-19T13:37:19Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successfull, try it out on your robot! ## Training parameters: - **Dataset**: [phospho-app/so100_01_bboxes](https://huggingface.co/datasets/phospho-app/so100_01_bboxes) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
wolfCuanhamaRWS/Llama-Primus-Reasoning_q2_k_gguf
wolfCuanhamaRWS
2025-06-19T13:57:56Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "thesis_quant", "q2_k_gguf", "text-classification", "en", "arxiv:2501.18492", "base_model:meta-llama/Llama-3.2-1B", "base_model:quantized:meta-llama/Llama-3.2-1B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-classification
2025-06-19T13:54:48Z
--- library_name: transformers license: other base_model: meta-llama/Llama-3.2-1B tags: - llama-factory - full - generated_from_trainer - thesis_quant - q2_k_gguf pipeline_tag: text-classification language: - en metrics: - f1 model-index: - name: GuardReasoner 1B results: [] --- # GuardReasoner 1B This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492). The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain). Code: https://github.com/yueliu1999/GuardReasoner/ # Usage ``` import re from vllm import LLM, SamplingParams INSTUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n" def post_process(text): text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE) text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE) text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE) return text def generate(vllm_model, prompt_list=[""], response_list=["None"]): input_list = [] for i in range(len(prompt_list)): input = INSTUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n" input_list.append(input) outputs = vllm_model.generate(input_list, sampling_params) return outputs vllm_model = LLM(model=f"yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256) sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048) prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."] response_list = ["""Dear LinkedIn friends, Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely. The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day. It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly. I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection. Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change. Sincerely, Mark """] output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text) print(output) ``` # Citation ``` @article{GuardReasoner, title={GuardReasoner: Towards Reasoning-based LLM Safeguards}, author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan}, journal={arXiv preprint arXiv:2501.18492}, year={2025} } ```
5eunsoo/my-bert-fine-tuned
5eunsoo
2025-06-19T13:57:03Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-19T13:56:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sawu-Low3/final-t5-base-lora-stage1
Sawu-Low3
2025-06-19T13:55:10Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T13:55:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phospho-app/praveen-merai-gr00t-so100_01-6eumz
phospho-app
2025-06-19T13:53:01Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-06-19T13:23:19Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successfull, try it out on your robot! ## Training parameters: - **Dataset**: [praveen-merai/so100_01](https://huggingface.co/datasets/praveen-merai/so100_01) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 107 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
ik-ram28/MedMistralInstruct-CPT-7B
ik-ram28
2025-06-19T13:45:23Z
17
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "medical", "conversational", "fr", "en", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-08T23:18:07Z
--- library_name: transformers tags: - medical license: apache-2.0 language: - fr - en base_model: - mistralai/Mistral-7B-Instruct-v0.1 --- ### Model Description MedMistralInstruct-CPT-7B is adapted from Mistral-7B-Instruct-v0.1 through Continual Pre-Training, maintaining instruction-following capabilities while gaining medical domain knowledge. ### Model Details - **Model Type**: Causal Language Model - **Base Model**: Mistral-7B-Instruct-v0.1 - **Language**: French - **Domain**: Medical/Healthcare - **Parameters**: 7 billion - **License**: Apache 2.0 ### Training Details **Continual Pre-Training (CPT)** - **Dataset**: NACHOS corpus (7.4 GB French medical texts) - **Training Duration**: 2.8 epochs - **Hardware**: 32 NVIDIA A100 80GB GPUs - **Training Time**: ~40 hours ### Computational Requirements - **Carbon Emissions**: 32.89 kgCO2e - **Training Time**: 40 hours ### Ethical Considerations - **Medical Accuracy**: For research and educational purposes only - **Professional Oversight**: Requires verification by qualified medical professionals - **Bias Awareness**: May contain biases from training data - **Privacy**: Do not input private health information ### Citation ```bibtex ``` ### Contact For questions about these models, please contact: [email protected]
TECCOD/adilet-llama-8b-250619
TECCOD
2025-06-19T13:45:20Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T13:22:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/nemo-chatbot-v3-GGUF
mradermacher
2025-06-19T13:44:24Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:chaerheeon/nemo-chatbot-v3", "base_model:quantized:chaerheeon/nemo-chatbot-v3", "endpoints_compatible", "region:us" ]
null
2025-06-19T13:19:54Z
--- base_model: chaerheeon/nemo-chatbot-v3 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/chaerheeon/nemo-chatbot-v3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q5_K_M.gguf) | Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q6_K.gguf) | Q6_K | 3.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/nemo-chatbot-v3-GGUF/resolve/main/nemo-chatbot-v3.f16.gguf) | f16 | 7.9 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Team-EVEN/Qwen3_14B_test_2
Team-EVEN
2025-06-19T13:43:01Z
0
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T13:38:22Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Team-EVEN - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TachyHealth/Gazal-R1-32B-sft-merged-preview
TachyHealth
2025-06-19T13:43:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "dora", "peft", "adapter", "finetuned", "Qwen3-32B", "medical", "clinical", "healthcare", "conversational", "en", "dataset:TachyHealth/structured_medical", "base_model:Qwen/Qwen3-32B", "base_model:finetune:Qwen/Qwen3-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T12:48:39Z
--- language: en license: apache-2.0 tags: - dora - peft - adapter - finetuned - Qwen3-32B - medical - clinical - healthcare base_model: - Qwen/Qwen3-32B datasets: - TachyHealth/structured_medical pipeline_tag: text-generation library_name: transformers --- # Gazal-R1-32B-sft-merged-preview This is a DoRA adapter fine-tuned on top of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) for specialized medical reasoning tasks. ## Model description This adapter was trained using PEFT/LoRA to enhance the base model's ability to perform step-by-step clinical reasoning and medical problem-solving. ### Training data The model was fine-tuned on a synthetic, structured reasoning dataset, which contains medical questions with step-by-step reasoning and final answers. ### Training procedure The model was trained using: - LoRA with rank 256 - DoRA (Weight-Decomposed Low-Rank Adaptation) - rsLoRA (Rank-stabilized LoRA) - BF16 precision training ### Use cases and limitations This model is intended for medical education and clinical reasoning training. It should NOT be used for actual medical diagnosis or treatment decisions. Always consult qualified healthcare professionals for medical advice. ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel # Load base model and tokenizer base_model_id = "Qwen/Qwen3-32B" adapter_id = "TachyHealth/Gazal-R1-32B-sft-merged" # Load the tokenizer and base model tokenizer = AutoTokenizer.from_pretrained(base_model_id) model = AutoModelForCausalLM.from_pretrained( base_model_id, torch_dtype="auto", device_map="auto", ) # Load the LoRA adapter model = PeftModel.from_pretrained(model, adapter_id) # Prepare a prompt following the format during training query = """[MEDICAL QUESTION]""" messages = [ {"role": "system", "content": "When solving complex medical problems, follow this specific format..."}, {"role": "user", "content": query} ] input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer(input_text, return_tensors="pt").to(model.device) # Generate response outputs = model.generate( input_ids=inputs.input_ids, max_new_tokens=2048, temperature=0.6, do_sample=True, ) response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True) print(response) ``` ## Performance Results Gazal-R1 achieves exceptional performance across standard medical benchmarks: | Model | Size | MMLU Pro (Medical) | MedMCQA | MedQA | PubMedQA | |-------|------|-------------------|---------|-------|----------| | [**Gazal-R1 (Final)**](https://huggingface.co/TachyHealth/Gazal-R1-32B-GRPO-preview) | **32B** | **81.6** | **71.9** | **87.1** | **79.6** | | Gazal-R1 (SFT-only) | 32B | 79.3 | 72.3 | 86.9 | 77.6 | | Llama 3.1 405B Instruct | 405B | 70.2 | 75.8 | 81.9 | 74.6 | | Qwen 2.5 72B Instruct | 72B | 72.1 | 66.2 | 72.7 | 71.7 | | Med42-Llama3.1-70B | 70B | 66.1 | 72.4 | 80.4 | 77.6 | | Llama 3.1 70B Instruct | 70B | 74.5 | 72.5 | 78.4 | 78.5 | | QwQ 32B | 32B | 70.1 | 65.6 | 72.3 | 73.7 | | Qwen 3 32B | 32B | 78.4 | 71.6 | 84.4 | 76.7 |
DalyanParker/gpt2-reuters-tokenizer
DalyanParker
2025-06-19T13:42:10Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T13:42:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jusjinuk/Qwen3-32B-4bit-SqueezeLLM
jusjinuk
2025-06-19T13:32:38Z
20
0
null
[ "pytorch", "qwen3", "arxiv:2505.07004", "base_model:Qwen/Qwen3-32B", "base_model:quantized:Qwen/Qwen3-32B", "license:bigscience-openrail-m", "region:us" ]
null
2025-05-31T04:42:29Z
--- base_model: - Qwen/Qwen3-32B base_model_relation: quantized license: bigscience-openrail-m --- # Model Card - Base model: `Qwen/Qwen3-32B` - Quantization method: SqueezeLLM - Target bit-width: 4 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction # How to run - Follow the instruction in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Qwen3-32B-4bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T13:31:56Z
29
0
null
[ "pytorch", "qwen3", "arxiv:2505.07004", "base_model:Qwen/Qwen3-32B", "base_model:quantized:Qwen/Qwen3-32B", "license:bigscience-openrail-m", "region:us" ]
null
2025-05-31T03:54:23Z
--- base_model: - Qwen/Qwen3-32B base_model_relation: quantized license: bigscience-openrail-m --- # Model Card - Base model: `Qwen/Qwen3-32B` - Quantization method: LNQ with GuidedQuant Hessian - Target bit-width: 4 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction - num_groups (for GuidedQuant Hessian): 1 # How to run - Follow the instruction in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
jusjinuk/Qwen3-32B-2bit-GuidedQuant-LNQ
jusjinuk
2025-06-19T13:31:24Z
74
0
null
[ "pytorch", "qwen3", "arxiv:2505.07004", "base_model:Qwen/Qwen3-32B", "base_model:quantized:Qwen/Qwen3-32B", "license:bigscience-openrail-m", "region:us" ]
null
2025-05-31T03:27:00Z
--- base_model: - Qwen/Qwen3-32B base_model_relation: quantized license: bigscience-openrail-m --- # Model Card - Base model: `Qwen/Qwen3-32B` - Quantization method: LNQ with GuidedQuant Hessian - Target bit-width: 2 - Backend kernel: Any-Precision-LLM kernel (`ap-gemv`) - Calibration data: RedPajama (1024 sentences / 4096 tokens) - Calibration objective: Next-token prediction - num_groups (for GuidedQuant Hessian): 1 # How to run - Follow the instruction in https://github.com/snu-mllab/GuidedQuant. # References - [Model Paper](https://arxiv.org/abs/2505.07004)
yellowtulip/yellowtulip
yellowtulip
2025-06-19T13:29:34Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T06:58:22Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Yellowtulip <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/yellowtulip/yellowtulip/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('yellowtulip/yellowtulip', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/yellowtulip/yellowtulip/discussions) to add images that show off what you’ve made with this LoRA.
wolfCuanhamaRWS/WhiteRabbitNeo-V3-7B_q5_k_m_gguf
wolfCuanhamaRWS
2025-06-19T13:29:24Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "thesis_quant", "q5_k_m_gguf", "text-classification", "en", "arxiv:2501.18492", "base_model:meta-llama/Llama-3.2-1B", "base_model:quantized:meta-llama/Llama-3.2-1B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-classification
2025-06-19T13:25:36Z
--- library_name: transformers license: other base_model: meta-llama/Llama-3.2-1B tags: - llama-factory - full - generated_from_trainer - thesis_quant - q5_k_m_gguf pipeline_tag: text-classification language: - en metrics: - f1 model-index: - name: GuardReasoner 1B results: [] --- # GuardReasoner 1B This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492). The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain). Code: https://github.com/yueliu1999/GuardReasoner/ # Usage ``` import re from vllm import LLM, SamplingParams INSTUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n" def post_process(text): text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE) text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE) text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE) return text def generate(vllm_model, prompt_list=[""], response_list=["None"]): input_list = [] for i in range(len(prompt_list)): input = INSTUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n" input_list.append(input) outputs = vllm_model.generate(input_list, sampling_params) return outputs vllm_model = LLM(model=f"yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256) sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048) prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."] response_list = ["""Dear LinkedIn friends, Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely. The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day. It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly. I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection. Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change. Sincerely, Mark """] output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text) print(output) ``` # Citation ``` @article{GuardReasoner, title={GuardReasoner: Towards Reasoning-based LLM Safeguards}, author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan}, journal={arXiv preprint arXiv:2501.18492}, year={2025} } ```
yudy74/image_classification
yudy74
2025-06-19T13:27:25Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-06-19T11:51:59Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: image_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6051 - Accuracy: 0.908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7065 | 1.0 | 63 | 2.5518 | 0.834 | | 1.8482 | 2.0 | 126 | 1.7910 | 0.876 | | 1.6084 | 3.0 | 189 | 1.6061 | 0.9 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cpu - Datasets 3.6.0 - Tokenizers 0.21.1
Nurmukhammad1993/whisper-small-dv
Nurmukhammad1993
2025-06-19T13:26:18Z
0
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "uz", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "region:us" ]
null
2025-06-19T12:32:38Z
--- language: - uz license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Small Uz - Nurmuhammad Abdul-Qodir results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: uz split: test args: uz metrics: - name: Wer type: wer value: 45.20139778938951 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Uz - Nurmuhammad Abdul-Qodir This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.6019 - Wer Ortho: 54.3531 - Wer: 45.2014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 0.6795 | 0.1326 | 500 | 0.6019 | 54.3531 | 45.2014 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.5.1+cu121 - Datasets 3.6.0 - Tokenizers 0.19.1
nnilayy/seed-multi-classification-Kfold-5
nnilayy
2025-06-19T13:24:21Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-06-19T13:24:20Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
nwfal/praktikum_6_emotion_classification
nwfal
2025-06-19T13:19:52Z
1
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "en", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-16T12:22:07Z
--- license: mit language: - en base_model: - microsoft/deberta-v3-base pipeline_tag: text-classification library_name: transformers tags: - text-classification - transformers ---
wolfCuanhamaRWS/WhiteRabbitNeo-V3-7B_q4_k_m_gguf
wolfCuanhamaRWS
2025-06-19T13:14:57Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "thesis_quant", "q4_k_m_gguf", "text-classification", "en", "arxiv:2501.18492", "base_model:meta-llama/Llama-3.2-1B", "base_model:quantized:meta-llama/Llama-3.2-1B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-classification
2025-06-19T13:11:23Z
--- library_name: transformers license: other base_model: meta-llama/Llama-3.2-1B tags: - llama-factory - full - generated_from_trainer - thesis_quant - q4_k_m_gguf pipeline_tag: text-classification language: - en metrics: - f1 model-index: - name: GuardReasoner 1B results: [] --- # GuardReasoner 1B This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492). The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain). Code: https://github.com/yueliu1999/GuardReasoner/ # Usage ``` import re from vllm import LLM, SamplingParams INSTUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n" def post_process(text): text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE) text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE) text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE) return text def generate(vllm_model, prompt_list=[""], response_list=["None"]): input_list = [] for i in range(len(prompt_list)): input = INSTUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n" input_list.append(input) outputs = vllm_model.generate(input_list, sampling_params) return outputs vllm_model = LLM(model=f"yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256) sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048) prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."] response_list = ["""Dear LinkedIn friends, Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely. The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day. It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly. I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection. Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change. Sincerely, Mark """] output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text) print(output) ``` # Citation ``` @article{GuardReasoner, title={GuardReasoner: Towards Reasoning-based LLM Safeguards}, author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan}, journal={arXiv preprint arXiv:2501.18492}, year={2025} } ```
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19
morturr
2025-06-19T13:11:21Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T13:11:05Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_ppo
rosieyzh
2025-06-19T13:09:10Z
0
0
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T22:55:51Z
--- library_name: transformers tags: [] --- ## Model Details This is the final checkpoint of the OLMo 1B model pretrained on Algebraic Stack, FineMath3+, TinyGSM, OpenMathInstruct1, and OpenMathInstruct2 after performing PPO with GSM8K train. Checkpoints are saved at the following timesteps: * `rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_base`: Initial model after pretraining. * `rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_episode{1-9}`: Saved after each epoch over GSM8K train. * `rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_global_step{9, 13, 18, 25, 36, 51, 73, 103, 146, 206, 291, 411, 581, 821}`: Saved on a log scale across global steps (computed from `[int(n) for n in np.logspace(-2.1, 0, 15) * 1160] `). **Note that the current model, `rosieyzh/OLMo-1B-as_fm3_tg_omi1_omi2_ppo`, is the final model after RLVR and equivalent to `_episode10` and `_globalstep1160`.**
rosieyzh/OLMo-1B-as_fm3_tg_omi2_ppo
rosieyzh
2025-06-19T13:08:38Z
0
0
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T23:02:25Z
--- library_name: transformers tags: [] --- ## Model Details This is the final checkpoint of the OLMo 1B model pretrained on Algebraic Stack, FineMath3+, TinyGSM, and OpenMathInstruct2 after performing PPO with GSM8K train. Checkpoints are saved at the following timesteps: * `rosieyzh/OLMo-1B-as_fm3_tg_omi2_base`: Initial model after pretraining. * `rosieyzh/OLMo-1B-as_fm3_tg_omi2_episode{1-9}`: Saved after each epoch over GSM8K train. * `rosieyzh/OLMo-1B-as_fm3_tg_omi2_global_step{9, 13, 18, 25, 36, 51, 73, 103, 146, 206, 291, 411, 581, 821}`: Saved on a log scale across global steps (computed from `[int(n) for n in np.logspace(-2.1, 0, 15) * 1160] `). **Note that the current model, `rosieyzh/OLMo-1B-as_fm3_tg_omi2_ppo`, is the final model after RLVR and equivalent to `_episode10` and `_globalstep1160`.**
mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF
mradermacher
2025-06-19T13:07:19Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:buttercoconut/Qwen2.5-ko-alpaca-0.5B", "base_model:quantized:buttercoconut/Qwen2.5-ko-alpaca-0.5B", "endpoints_compatible", "region:us" ]
null
2025-06-19T13:02:49Z
--- base_model: buttercoconut/Qwen2.5-ko-alpaca-0.5B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/buttercoconut/Qwen2.5-ko-alpaca-0.5B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ko-alpaca-0.5B-GGUF/resolve/main/Qwen2.5-ko-alpaca-0.5B.f16.gguf) | f16 | 1.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
wolfCuanhamaRWS/WhiteRabbitNeo-V3-7B_q3_k_m_gguf
wolfCuanhamaRWS
2025-06-19T13:04:10Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "thesis_quant", "q3_k_m_gguf", "text-classification", "en", "arxiv:2501.18492", "base_model:meta-llama/Llama-3.2-1B", "base_model:quantized:meta-llama/Llama-3.2-1B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-classification
2025-06-19T13:01:13Z
--- library_name: transformers license: other base_model: meta-llama/Llama-3.2-1B tags: - llama-factory - full - generated_from_trainer - thesis_quant - q3_k_m_gguf pipeline_tag: text-classification language: - en metrics: - f1 model-index: - name: GuardReasoner 1B results: [] --- # GuardReasoner 1B This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492). The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain). Code: https://github.com/yueliu1999/GuardReasoner/ # Usage ``` import re from vllm import LLM, SamplingParams INSTUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n" def post_process(text): text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE) text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE) text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE) return text def generate(vllm_model, prompt_list=[""], response_list=["None"]): input_list = [] for i in range(len(prompt_list)): input = INSTUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n" input_list.append(input) outputs = vllm_model.generate(input_list, sampling_params) return outputs vllm_model = LLM(model=f"yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256) sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048) prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."] response_list = ["""Dear LinkedIn friends, Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely. The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day. It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly. I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection. Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change. Sincerely, Mark """] output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text) print(output) ``` # Citation ``` @article{GuardReasoner, title={GuardReasoner: Towards Reasoning-based LLM Safeguards}, author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan}, journal={arXiv preprint arXiv:2501.18492}, year={2025} } ```
mchettih/financial_QA_unsloth_Llama-3.2-3B-Instruct_finetuned_teacher
mchettih
2025-06-19T12:59:59Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-17T15:59:34Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** mchettih - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
prashantsaini/testing19062025v1-merged
prashantsaini
2025-06-19T12:57:54Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T12:53:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
okuparinen/LIA_300m_simple
okuparinen
2025-06-19T12:56:12Z
30
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "dialect", "transcription", "no", "dataset:okuparinen/skn", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-27T07:18:51Z
--- library_name: transformers tags: - dialect - transcription license: apache-2.0 datasets: - okuparinen/skn language: - 'no' base_model: - facebook/wav2vec2-large-xlsr-53 --- # Simple automatic dialectal transcription of Norwegian This is a fine-tuned model for automatic dialectal transcription of Norwegian dialect recordings. The model is based on the [XLS-R large model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). The model has been finetuned on [old Norwegian dialect recordings](https://huggingface.co/datasets/okuparinen/lia) and their corresponding transcriptions. This model outputs simple transcription. The audio recordings are sampled at 16kHz. ## Uses You can use this model for automatic dialectal transcription of Norwegian dialects. Note that this model does not produce standard bokmål or nynorsk text. ## How to Get Started with the Model ``` from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC, Wav2Vec2CTCTokenizer from datasets import Dataset, Audio import torch import pandas as pd ds = pd.read_csv('CSV_DATA.csv') ds = ds.dropna(how='any', axis=0) test = Dataset.from_pandas(skn_test) test = test.cast_column("AUDIO_PATH_COLUMN", Audio(sampling_rate=16000)) tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("okuparinen/LIA_300m_simple", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|") model = Wav2Vec2ForCTC.from_pretrained("okuparinen/LIA_300m_simple").to("cuda") processor = Wav2Vec2Processor.from_pretrained("okuparinen/LIA_300m_simple", tokenizer=tokenizer) def prepare_dataset(batch): audio = batch["AUDIO_PATH"] batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["input_length"] = len(batch["input_values"]) return batch test_ready = test.map(prepare_dataset, remove_columns=test.column_names) length = len(test) predictions = [] for i in range(0, length, 1): input_dict = processor(test_ready[i]["input_values"], return_tensors="pt", padding=True) logits = model(input_dict.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1)[0] prediction = processor.decode(pred_ids) predictions.append(prediction) with open("OUTFILE.txt", "w") as f_pred: for line in predictions: f_pred.write(line + '\n') ``` ### Training Data The training data is an utterance-level version of the [LIA Norwegian corpus](https://tekstlab.uio.no/LIA/norsk/index_english.html). The utterance-level version is available at [okuparinen/skn](https://huggingface.co/datasets/okuparinen/lia). ## Evaluation Results TBA ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed]
John6666/cyber-rillusm-il-v10-sdxl
John6666
2025-06-19T12:55:27Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T12:48:32Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- Original model is [here](https://civitai.com/models/1696237/cyberrillusmil?modelVersionId=1919724). This model created by [m8rr](https://civitai.com/user/m8rr).
maldine/clinica
maldine
2025-06-19T12:52:59Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-19T12:52:59Z
--- license: apache-2.0 ---
elledilara/llama3.18B-Instruction-Scores-2
elledilara
2025-06-19T12:52:11Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B", "base_model:adapter:meta-llama/Llama-3.1-8B", "license:llama3.1", "region:us" ]
null
2025-06-19T12:52:04Z
--- library_name: peft license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B tags: - trl - sft - generated_from_trainer model-index: - name: llama3.18B-Instruction-Scores-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3.18B-Instruction-Scores-2 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
ujjawal077/llama-cyber-multilingual2
ujjawal077
2025-06-19T12:50:46Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T12:46:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JunSotohigashi/swept-oath-607
JunSotohigashi
2025-06-19T12:48:47Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "lora", "sft", "dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji", "base_model:llm-jp/llm-jp-3-13b", "base_model:adapter:llm-jp/llm-jp-3-13b", "endpoints_compatible", "region:us" ]
null
2025-06-19T07:18:21Z
--- base_model: llm-jp/llm-jp-3-13b datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji library_name: transformers model_name: JunSotohigashi/swept-oath-607 tags: - generated_from_trainer - lora - sft licence: license --- # Model Card for JunSotohigashi/swept-oath-607 This model is a fine-tuned version of [llm-jp/llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JunSotohigashi/swept-oath-607", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/evjlqjch) This model was trained with SFT. ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
JunSotohigashi/lunar-sponge-594
JunSotohigashi
2025-06-19T12:47:31Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "lora", "sft", "dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji", "base_model:llm-jp/llm-jp-3-13b-instruct", "base_model:adapter:llm-jp/llm-jp-3-13b-instruct", "endpoints_compatible", "region:us" ]
null
2025-06-19T07:14:23Z
--- base_model: llm-jp/llm-jp-3-13b-instruct datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji library_name: transformers model_name: JunSotohigashi/lunar-sponge-594 tags: - generated_from_trainer - lora - sft licence: license --- # Model Card for JunSotohigashi/lunar-sponge-594 This model is a fine-tuned version of [llm-jp/llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JunSotohigashi/lunar-sponge-594", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/p6eac0h7) This model was trained with SFT. ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
toasteduk/musicgen-medium-lora-speed-garage-v4
toasteduk
2025-06-19T12:47:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-18T23:01:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hailay/xlmr-tigrinya-mlm
Hailay
2025-06-19T12:44:28Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "fill-mask", "tigrinya", "masked-language-modeling", "xlmr", "low-resource", "multilingual", "ti", "dataset:NLLB", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-19T12:15:32Z
--- language: ti datasets: - NLLB library_name: transformers tags: - tigrinya - masked-language-modeling - xlmr - low-resource - multilingual model_name: XLM-Roberta fine-tuned on Tigrinya (MLM) license: apache-2.0 --- # XLM-Roberta Fine-Tuned on Tigrinya (MLM) This model is a fine-tuned version of [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) for the **Tigrinya language** (ትግርኛ), trained with the **Masked Language Modeling (MLM)** objective. It uses a custom BPE tokenizer adapted to Tigrinya using FastText-informed embedding initialization. ## 🔧 Details - **Base model**: `xlm-roberta-base` - **Language**: Tigrinya - **Tokenizer**: Custom BPE tokenizer (non-morpheme-aware) - **Adaptation**: Embedding initialization using weighted averages of pretrained XLM-R embeddings, guided by Tigrinya FastText word vectors - **Training dataset**: Tigrinya side of the [NLLB (No Language Left Behind)](https://github.com/facebookresearch/flores) parallel corpus - **Objective**: Masked Language Modeling (MLM) ## 🧪 Usage ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("Hailay/xlmr-tigriyna-mlm") model = AutoModelForMaskedLM.from_pretrained("Hailay/xlmr-tigriyna-mlm") text = "ትግራይ ብምትሕብባ ንህዝቢ ግብሪ ቀጺሉ።" inputs = tokenizer(text, return_tensors="pt") outputs = model(**inputs) 📌 Intended Use Pretraining for Tigrinya NLP tasks Fine-tuning on classification, NER, QA, and other downstream tasks in Tigrinya Research in low-resource Semitic and morphologically rich languages 📖 Citation @misc{hailay2025tigrinya, title={Tigrinya MLM with XLM-R and FastText-Informed Embedding Initialization}, author={Hailay Kidu}, year={2025}, url={https://huggingface.co/Hailay/xlmr-tigriyna-mlm} } 🏷️ License Apache License 2.0
nisrinatiqah/emotion-classifier
nisrinatiqah
2025-06-19T12:44:01Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-19T12:43:48Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: emotion-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion-classifier This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2042 - F1 Macro: 0.2544 - Roc Auc Macro: 0.8410 - Accuracy Subset: 0.2105 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
rgraceffa/Llama-3.1-8B-bnb-4bit-eraigra
rgraceffa
2025-06-19T12:43:28Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T12:31:33Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** rgraceffa - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
imrahulwarkade/phi2-toneop-finetuned
imrahulwarkade
2025-06-19T12:41:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T12:30:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ansuashish/kshitij_model_lora
ansuashish
2025-06-19T12:40:13Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T12:40:02Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ansuashish - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nurik0210/qwen3-8b-uzbek-sft
nurik0210
2025-06-19T12:39:10Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-17T07:42:37Z
--- base_model: Qwen/Qwen3-8B library_name: transformers model_name: qwen3-8b-uzbek-sft tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen3-8b-uzbek-sft This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nurik0210/qwen3-8b-uzbek-sft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ai-summer/huggingface/runs/km4fdnu1) This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.6.0 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jetfan-xin/q-Taxi-v3
jetfan-xin
2025-06-19T12:35:16Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-19T12:35:14Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.36 +/- 2.89 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="jetfan-xin/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
kaxaroov/biollama3-med-lora
kaxaroov
2025-06-19T12:35:11Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-17T10:38:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yezg/qwen2.5-sql-tlls-gguf
yezg
2025-06-19T12:34:24Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Qwen2.5-Coder-7B", "base_model:quantized:unsloth/Qwen2.5-Coder-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T10:48:32Z
--- base_model: unsloth/Qwen2.5-Coder-7B tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** yezg - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-7B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Rand0mname1234/LO-model
Rand0mname1234
2025-06-19T12:31:23Z
2
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-15T10:27:57Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ZAK --- # Lo Model <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ZAK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ZAK", "lora_weights": "https://huggingface.co/Rand0mname1234/LO-model/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Rand0mname1234/LO-model', weight_name='lora.safetensors') image = pipeline('ZAK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 20 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Rand0mname1234/LO-model/discussions) to add images that show off what you’ve made with this LoRA.
jetfan-xin/q-FrozenLake-v1-4x4-noSlippery
jetfan-xin
2025-06-19T12:31:01Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-19T12:30:58Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing1 **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="jetfan-xin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Varinder2110/rachitcouple1
Varinder2110
2025-06-19T12:30:51Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T08:12:18Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Rachitcouple1 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Varinder2110/rachitcouple1/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Varinder2110/rachitcouple1', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 6000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Varinder2110/rachitcouple1/discussions) to add images that show off what you’ve made with this LoRA.
Nj64/noungjubV5
Nj64
2025-06-19T12:29:59Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T12:29:43Z
--- base_model: unsloth/llama-3-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Nj64 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
AlIshaq/IdT5-faq-pesantren
AlIshaq
2025-06-19T12:29:04Z
0
0
null
[ "safetensors", "t5", "indot5", "faq", "chatbot", "pondok-pesantren", "indonesian", "generator", "id", "region:us" ]
null
2025-06-19T12:18:07Z
--- tags: - indot5 - faq - chatbot - pondok-pesantren - indonesian - generator language: - id ---
abhinavpuranik/layoutlmv3-financial-document-classification
abhinavpuranik
2025-06-19T12:26:11Z
0
0
transformers
[ "transformers", "safetensors", "layoutlmv3", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-19T12:25:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
asheela/praktikum-modul6-ai
asheela
2025-06-19T12:23:47Z
2
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-14T17:39:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
humendra/chronos-t5-large-fine-tuned-run-45
humendra
2025-06-19T12:22:51Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T12:21:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nnilayy/dreamer-arousal-multi-classification-Kfold-5
nnilayy
2025-06-19T12:21:26Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-06-19T12:21:25Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
LandCruiser/sn29C1_1906_9
LandCruiser
2025-06-19T12:19:53Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:48:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fabikru/model_15M_chembl_1M_ds_masking_0.3_predicted_hparams
fabikru
2025-06-19T12:19:31Z
0
0
transformers
[ "transformers", "safetensors", "modernbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-06-19T12:19:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/Llama3-8B-Medicine-4bit
mlx-community
2025-06-19T12:16:11Z
0
0
mlx
[ "mlx", "safetensors", "llama", "biology", "medical", "text-generation", "en", "dataset:instruction-pretrain/medicine-instruction-augmented-corpora", "dataset:Open-Orca/OpenOrca", "dataset:EleutherAI/pile", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "base_model:instruction-pretrain/medicine-Llama3-8B", "base_model:quantized:instruction-pretrain/medicine-Llama3-8B", "license:llama3", "4-bit", "region:us" ]
text-generation
2025-06-19T12:15:10Z
--- datasets: - instruction-pretrain/medicine-instruction-augmented-corpora - Open-Orca/OpenOrca - EleutherAI/pile - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k language: - en license: llama3 tags: - biology - medical - mlx library_name: mlx pipeline_tag: text-generation base_model: instruction-pretrain/medicine-Llama3-8B --- # mlx-community/Llama3-8B-Medicine-4bit This model [mlx-community/Llama3-8B-Medicine-4bit](https://huggingface.co/mlx-community/Llama3-8B-Medicine-4bit) was converted to MLX format from [instruction-pretrain/medicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B) using mlx-lm version **0.25.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Llama3-8B-Medicine-4bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Omokemi/real-vs-ai-model
Omokemi
2025-06-19T12:13:03Z
0
0
fastai
[ "fastai", "en", "license:apache-2.0", "region:us" ]
null
2025-06-18T18:38:15Z
--- license: apache-2.0 language: - en metrics: - accuracy - precision - recall - f1 library_name: fastai Architecture: resnet34 Example output: Real/ Fake ---
morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb3-seed18-2025-06-19
morturr
2025-06-19T12:04:20Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T12:04:12Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb3-seed18-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb3-seed18-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
morturr/Mistral-7B-v0.1-headlines-seed-42-2025-06-19
morturr
2025-06-19T12:03:47Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-06-19T12:03:37Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - trl - sft - generated_from_trainer model-index: - name: Mistral-7B-v0.1-headlines-seed-42-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-headlines-seed-42-2025-06-19 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
humendra/chronos-t5-large-fine-tuned-run-41
humendra
2025-06-19T12:00:20Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T11:59:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
brett2/AIGP-exam-questions
brett2
2025-06-19T12:00:02Z
0
0
null
[ "region:us" ]
null
2025-06-19T11:56:26Z
Unlock Certification Success with PASS4EXAMS' Comprehensive Practice Questions, Exam Dumps, and Exam Questions Get More INFO: https://www.pass4exams.com/iapp/aigp-questions.html PASS4EXAMS is your one-stop destination for comprehensive IT certification preparation resources. Our platform is dedicated to empowering aspiring professionals like yourself to excel in their chosen certification exams through a robust collection of practice questions, exam dumps, and exam questions. Whether you're preparing for the CompTIA Security+, the AWS Certified Solutions Architect, or any other industry-leading certification, PASS4EXAMS has you covered. Our team of expert content curators meticulously crafts and updates our practice question banks and exam dumps to ensure they accurately reflect the latest exam formats and content. By leveraging PASS4EXAMS' practice questions, you'll have the opportunity to assess your knowledge, identify knowledge gaps, and fine-tune your exam-taking strategies in a risk-free environment. Our exam questions, on the other hand, provide you with an authentic testing experience, helping you build the confidence and skills needed to succeed on exam day. In addition to our comprehensive practice materials, PASS4EXAMS also offers detailed explanations, study guides, and expert insights to help you truly master the subject matter. Our goal is to equip you with the necessary tools and resources to not only pass your certification exam but also develop a deep understanding of the core concepts. Don't leave your career advancement to chance. Choose PASS4EXAMS as your trusted partner and unlock a future of endless possibilities with our unparalleled practice questions, exam dumps, and exam questions.
winglian/tulu3-dpo-8b-qat-v1-fixed
winglian
2025-06-19T11:58:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "arxiv:2305.18290", "base_model:allenai/Llama-3.1-Tulu-3-8B-SFT", "base_model:finetune:allenai/Llama-3.1-Tulu-3-8B-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T11:55:06Z
--- base_model: allenai/Llama-3.1-Tulu-3-8B-SFT library_name: transformers model_name: outputs/tulu3-dpo tags: - generated_from_trainer licence: license --- # Model Card for outputs/tulu3-dpo This model is a fine-tuned version of [allenai/Llama-3.1-Tulu-3-8B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="None", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/axolotl-ai/qat-dpo-tulu3-8b/runs/z92cdxvr) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu126 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Rukh29/my_local_model2
Rukh29
2025-06-19T11:58:46Z
18
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:audiofolder", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2025-04-21T12:41:27Z
--- library_name: transformers base_model: facebook/wav2vec2-base tags: - generated_from_trainer datasets: - audiofolder metrics: - accuracy model-index: - name: my_local_model2 results: - task: name: Audio Classification type: audio-classification dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_local_model2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0013 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.6813 | 0.9524 | 10 | 0.6738 | 0.8571 | | 0.6601 | 2.0 | 21 | 0.6189 | 0.8571 | | 0.6191 | 2.9524 | 31 | 0.5349 | 0.9524 | | 0.5819 | 4.0 | 42 | 0.3517 | 1.0 | | 0.3825 | 4.9524 | 52 | 0.3842 | 0.8095 | | 0.1922 | 6.0 | 63 | 0.6796 | 0.7143 | | 0.1184 | 6.9524 | 73 | 0.5517 | 0.8571 | | 0.0404 | 8.0 | 84 | 0.0222 | 1.0 | | 0.0239 | 8.9524 | 94 | 0.0143 | 1.0 | | 0.0136 | 10.0 | 105 | 0.0102 | 1.0 | | 0.0135 | 10.9524 | 115 | 0.0080 | 1.0 | | 0.1257 | 12.0 | 126 | 0.0065 | 1.0 | | 0.1233 | 12.9524 | 136 | 0.0058 | 1.0 | | 0.0071 | 14.0 | 147 | 0.0055 | 1.0 | | 0.0064 | 14.9524 | 157 | 0.0048 | 1.0 | | 0.0056 | 16.0 | 168 | 0.0041 | 1.0 | | 0.005 | 16.9524 | 178 | 0.0036 | 1.0 | | 0.0044 | 18.0 | 189 | 0.0032 | 1.0 | | 0.0039 | 18.9524 | 199 | 0.0029 | 1.0 | | 0.0034 | 20.0 | 210 | 0.0026 | 1.0 | | 0.0033 | 20.9524 | 220 | 0.0025 | 1.0 | | 0.0031 | 22.0 | 231 | 0.0023 | 1.0 | | 0.0029 | 22.9524 | 241 | 0.0022 | 1.0 | | 0.0027 | 24.0 | 252 | 0.0020 | 1.0 | | 0.0025 | 24.9524 | 262 | 0.0019 | 1.0 | | 0.0024 | 26.0 | 273 | 0.0018 | 1.0 | | 0.0023 | 26.9524 | 283 | 0.0017 | 1.0 | | 0.0022 | 28.0 | 294 | 0.0016 | 1.0 | | 0.0021 | 28.9524 | 304 | 0.0016 | 1.0 | | 0.002 | 30.0 | 315 | 0.0015 | 1.0 | | 0.002 | 30.9524 | 325 | 0.0015 | 1.0 | | 0.0019 | 32.0 | 336 | 0.0014 | 1.0 | | 0.0019 | 32.9524 | 346 | 0.0014 | 1.0 | | 0.0018 | 34.0 | 357 | 0.0014 | 1.0 | | 0.0018 | 34.9524 | 367 | 0.0014 | 1.0 | | 0.0018 | 36.0 | 378 | 0.0014 | 1.0 | | 0.0018 | 36.9524 | 388 | 0.0013 | 1.0 | | 0.0018 | 38.0 | 399 | 0.0013 | 1.0 | | 0.0018 | 38.0952 | 400 | 0.0013 | 1.0 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.1+cu121 - Datasets 2.20.0 - Tokenizers 0.20.3
sgonzalezygil/sd-finetuning-dreambooth-v18-1200
sgonzalezygil
2025-06-19T11:58:00Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-19T11:56:36Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sgonzalezygil/sd-finetuning-dreambooth-v18
sgonzalezygil
2025-06-19T11:54:12Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-19T11:53:06Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MeuruReflex/mistral-lora-finetuned
MeuruReflex
2025-06-19T11:52:37Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-18T09:11:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tomaarsen/csr-mxbai-embed-large-v1-nq-no-reconstruction
tomaarsen
2025-06-19T11:49:51Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sparse-encoder", "sparse", "csr", "generated_from_trainer", "dataset_size:99000", "loss:CSRLoss", "loss:SparseMultipleNegativesRankingLoss", "feature-extraction", "en", "dataset:sentence-transformers/natural-questions", "arxiv:1908.10084", "arxiv:2503.01776", "arxiv:1705.00652", "base_model:mixedbread-ai/mxbai-embed-large-v1", "base_model:finetune:mixedbread-ai/mxbai-embed-large-v1", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-06-19T11:49:43Z
--- language: - en license: apache-2.0 tags: - sentence-transformers - sparse-encoder - sparse - csr - generated_from_trainer - dataset_size:99000 - loss:CSRLoss - loss:SparseMultipleNegativesRankingLoss base_model: mixedbread-ai/mxbai-embed-large-v1 widget: - text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia continue to take somewhat differing stances on regional conflicts such the Yemeni Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement, which has fought against Saudi-backed forces, and the Syrian Civil War, where the UAE has disagreed with Saudi support for Islamist movements.[4] - text: Economy of New Zealand New Zealand's diverse market economy has a sizable service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale manufacturing industries include aluminium production, food processing, metal fabrication, wood and paper products. Mining, manufacturing, electricity, gas, water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary sector continues to dominate New Zealand's exports, despite accounting for 6.5% of GDP in 2013.[17] - text: who was the first president of indian science congress meeting held in kolkata in 1914 - text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as a single after a fourteen-year breakup. It was also the first song written by bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was played live for the first time during their Hell Freezes Over tour in 1994. It returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream Rock Tracks chart. The song was not played live by the Eagles after the "Hell Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S. - text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.' datasets: - sentence-transformers/natural-questions pipeline_tag: feature-extraction library_name: sentence-transformers metrics: - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 - query_active_dims - query_sparsity_ratio - corpus_active_dims - corpus_sparsity_ratio co2_eq_emissions: emissions: 53.740159900184786 energy_consumed: 0.13825542420719417 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K ram_total_size: 31.777088165283203 hours_used: 0.409 hardware_used: 1 x NVIDIA GeForce RTX 3090 model-index: - name: Sparse CSR model trained on Natural Questions results: - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoMSMARCO 128 type: NanoMSMARCO_128 metrics: - type: dot_accuracy@1 value: 0.38 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.62 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.72 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.84 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.38 name: Dot Precision@1 - type: dot_precision@3 value: 0.20666666666666667 name: Dot Precision@3 - type: dot_precision@5 value: 0.14400000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.08399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.38 name: Dot Recall@1 - type: dot_recall@3 value: 0.62 name: Dot Recall@3 - type: dot_recall@5 value: 0.72 name: Dot Recall@5 - type: dot_recall@10 value: 0.84 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.603846580732656 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.529079365079365 name: Dot Mrr@10 - type: dot_map@100 value: 0.535577429489216 name: Dot Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNFCorpus 128 type: NanoNFCorpus_128 metrics: - type: dot_accuracy@1 value: 0.4 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.52 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.62 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.68 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.4 name: Dot Precision@1 - type: dot_precision@3 value: 0.34 name: Dot Precision@3 - type: dot_precision@5 value: 0.336 name: Dot Precision@5 - type: dot_precision@10 value: 0.28600000000000003 name: Dot Precision@10 - type: dot_recall@1 value: 0.02662938222230507 name: Dot Recall@1 - type: dot_recall@3 value: 0.08583886950771044 name: Dot Recall@3 - type: dot_recall@5 value: 0.10539572959638349 name: Dot Recall@5 - type: dot_recall@10 value: 0.1390606096616216 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.33155673498755867 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.4815555555555555 name: Dot Mrr@10 - type: dot_map@100 value: 0.14591039936040862 name: Dot Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNQ 128 type: NanoNQ_128 metrics: - type: dot_accuracy@1 value: 0.44 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.64 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.78 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.8 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.44 name: Dot Precision@1 - type: dot_precision@3 value: 0.21333333333333335 name: Dot Precision@3 - type: dot_precision@5 value: 0.16 name: Dot Precision@5 - type: dot_precision@10 value: 0.08399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.43 name: Dot Recall@1 - type: dot_recall@3 value: 0.6 name: Dot Recall@3 - type: dot_recall@5 value: 0.73 name: Dot Recall@5 - type: dot_recall@10 value: 0.76 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6020077639360719 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5624999999999999 name: Dot Mrr@10 - type: dot_map@100 value: 0.5519887965031844 name: Dot Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-nano-beir name: Sparse Nano BEIR dataset: name: NanoBEIR mean 128 type: NanoBEIR_mean_128 metrics: - type: dot_accuracy@1 value: 0.4066666666666667 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.5933333333333334 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.7066666666666667 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.7733333333333334 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.4066666666666667 name: Dot Precision@1 - type: dot_precision@3 value: 0.25333333333333335 name: Dot Precision@3 - type: dot_precision@5 value: 0.21333333333333335 name: Dot Precision@5 - type: dot_precision@10 value: 0.15133333333333332 name: Dot Precision@10 - type: dot_recall@1 value: 0.27887646074076833 name: Dot Recall@1 - type: dot_recall@3 value: 0.4352796231692368 name: Dot Recall@3 - type: dot_recall@5 value: 0.5184652431987945 name: Dot Recall@5 - type: dot_recall@10 value: 0.5796868698872072 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.5124703598854289 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5243783068783068 name: Dot Mrr@10 - type: dot_map@100 value: 0.411158875117603 name: Dot Map@100 - type: query_active_dims value: 128.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.96875 name: Query Sparsity Ratio - type: corpus_active_dims value: 128.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.96875 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoMSMARCO 256 type: NanoMSMARCO_256 metrics: - type: dot_accuracy@1 value: 0.44 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.66 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.78 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.84 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.44 name: Dot Precision@1 - type: dot_precision@3 value: 0.22 name: Dot Precision@3 - type: dot_precision@5 value: 0.156 name: Dot Precision@5 - type: dot_precision@10 value: 0.08399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.44 name: Dot Recall@1 - type: dot_recall@3 value: 0.66 name: Dot Recall@3 - type: dot_recall@5 value: 0.78 name: Dot Recall@5 - type: dot_recall@10 value: 0.84 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6402220356297674 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.576079365079365 name: Dot Mrr@10 - type: dot_map@100 value: 0.5819739218018417 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNFCorpus 256 type: NanoNFCorpus_256 metrics: - type: dot_accuracy@1 value: 0.42 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.54 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.58 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.7 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.42 name: Dot Precision@1 - type: dot_precision@3 value: 0.35999999999999993 name: Dot Precision@3 - type: dot_precision@5 value: 0.344 name: Dot Precision@5 - type: dot_precision@10 value: 0.29200000000000004 name: Dot Precision@10 - type: dot_recall@1 value: 0.018848269093365854 name: Dot Recall@1 - type: dot_recall@3 value: 0.07354907247001424 name: Dot Recall@3 - type: dot_recall@5 value: 0.09781289475269293 name: Dot Recall@5 - type: dot_recall@10 value: 0.1418672876485781 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.33652365839683074 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.4957698412698413 name: Dot Mrr@10 - type: dot_map@100 value: 0.14165509490208594 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNQ 256 type: NanoNQ_256 metrics: - type: dot_accuracy@1 value: 0.56 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.78 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.86 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.56 name: Dot Precision@1 - type: dot_precision@3 value: 0.23333333333333336 name: Dot Precision@3 - type: dot_precision@5 value: 0.16 name: Dot Precision@5 - type: dot_precision@10 value: 0.09399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.54 name: Dot Recall@1 - type: dot_recall@3 value: 0.65 name: Dot Recall@3 - type: dot_recall@5 value: 0.73 name: Dot Recall@5 - type: dot_recall@10 value: 0.83 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6813657040884066 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.647301587301587 name: Dot Mrr@10 - type: dot_map@100 value: 0.6310147772294485 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-nano-beir name: Sparse Nano BEIR dataset: name: NanoBEIR mean 256 type: NanoBEIR_mean_256 metrics: - type: dot_accuracy@1 value: 0.47333333333333333 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.6333333333333334 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.7133333333333333 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.7999999999999999 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.47333333333333333 name: Dot Precision@1 - type: dot_precision@3 value: 0.27111111111111114 name: Dot Precision@3 - type: dot_precision@5 value: 0.22 name: Dot Precision@5 - type: dot_precision@10 value: 0.15666666666666665 name: Dot Precision@10 - type: dot_recall@1 value: 0.33294942303112196 name: Dot Recall@1 - type: dot_recall@3 value: 0.46118302415667145 name: Dot Recall@3 - type: dot_recall@5 value: 0.5359376315842309 name: Dot Recall@5 - type: dot_recall@10 value: 0.6039557625495261 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.5527037993716682 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5730502645502644 name: Dot Mrr@10 - type: dot_map@100 value: 0.4515479313111254 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoClimateFEVER type: NanoClimateFEVER metrics: - type: dot_accuracy@1 value: 0.2 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.52 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.56 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.68 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.2 name: Dot Precision@1 - type: dot_precision@3 value: 0.19333333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.132 name: Dot Precision@5 - type: dot_precision@10 value: 0.088 name: Dot Precision@10 - type: dot_recall@1 value: 0.07833333333333332 name: Dot Recall@1 - type: dot_recall@3 value: 0.24499999999999997 name: Dot Recall@3 - type: dot_recall@5 value: 0.28333333333333327 name: Dot Recall@5 - type: dot_recall@10 value: 0.3473333333333333 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.27333419680435084 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.3666031746031747 name: Dot Mrr@10 - type: dot_map@100 value: 0.21266834216817831 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoDBPedia type: NanoDBPedia metrics: - type: dot_accuracy@1 value: 0.74 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.86 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.92 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.94 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.74 name: Dot Precision@1 - type: dot_precision@3 value: 0.5866666666666667 name: Dot Precision@3 - type: dot_precision@5 value: 0.556 name: Dot Precision@5 - type: dot_precision@10 value: 0.484 name: Dot Precision@10 - type: dot_recall@1 value: 0.08366724054361292 name: Dot Recall@1 - type: dot_recall@3 value: 0.16227352802558825 name: Dot Recall@3 - type: dot_recall@5 value: 0.2213882427797012 name: Dot Recall@5 - type: dot_recall@10 value: 0.3353731792736538 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.5972307350486245 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.8152222222222223 name: Dot Mrr@10 - type: dot_map@100 value: 0.45303559906331897 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoFEVER type: NanoFEVER metrics: - type: dot_accuracy@1 value: 0.86 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.98 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.98 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.98 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.86 name: Dot Precision@1 - type: dot_precision@3 value: 0.34666666666666657 name: Dot Precision@3 - type: dot_precision@5 value: 0.20799999999999996 name: Dot Precision@5 - type: dot_precision@10 value: 0.10399999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.8066666666666668 name: Dot Recall@1 - type: dot_recall@3 value: 0.9433333333333332 name: Dot Recall@3 - type: dot_recall@5 value: 0.9433333333333332 name: Dot Recall@5 - type: dot_recall@10 value: 0.9433333333333332 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.9054259418093692 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.9133333333333333 name: Dot Mrr@10 - type: dot_map@100 value: 0.8844551282051283 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoFiQA2018 type: NanoFiQA2018 metrics: - type: dot_accuracy@1 value: 0.5 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.62 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.64 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.68 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.5 name: Dot Precision@1 - type: dot_precision@3 value: 0.3133333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.22399999999999998 name: Dot Precision@5 - type: dot_precision@10 value: 0.13799999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.2725793650793651 name: Dot Recall@1 - type: dot_recall@3 value: 0.4129047619047619 name: Dot Recall@3 - type: dot_recall@5 value: 0.4605714285714286 name: Dot Recall@5 - type: dot_recall@10 value: 0.5500873015873016 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.49585690755175454 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5641666666666666 name: Dot Mrr@10 - type: dot_map@100 value: 0.4425504355719097 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoHotpotQA type: NanoHotpotQA metrics: - type: dot_accuracy@1 value: 0.84 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.92 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.96 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.96 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.84 name: Dot Precision@1 - type: dot_precision@3 value: 0.4733333333333333 name: Dot Precision@3 - type: dot_precision@5 value: 0.316 name: Dot Precision@5 - type: dot_precision@10 value: 0.17399999999999996 name: Dot Precision@10 - type: dot_recall@1 value: 0.42 name: Dot Recall@1 - type: dot_recall@3 value: 0.71 name: Dot Recall@3 - type: dot_recall@5 value: 0.79 name: Dot Recall@5 - type: dot_recall@10 value: 0.87 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.802663278529999 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.8856666666666666 name: Dot Mrr@10 - type: dot_map@100 value: 0.7334779802028212 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoMSMARCO type: NanoMSMARCO metrics: - type: dot_accuracy@1 value: 0.42 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.66 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.78 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.84 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.42 name: Dot Precision@1 - type: dot_precision@3 value: 0.22 name: Dot Precision@3 - type: dot_precision@5 value: 0.156 name: Dot Precision@5 - type: dot_precision@10 value: 0.08399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.42 name: Dot Recall@1 - type: dot_recall@3 value: 0.66 name: Dot Recall@3 - type: dot_recall@5 value: 0.78 name: Dot Recall@5 - type: dot_recall@10 value: 0.84 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6354592257726257 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5694126984126984 name: Dot Mrr@10 - type: dot_map@100 value: 0.5752130160409359 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNFCorpus type: NanoNFCorpus metrics: - type: dot_accuracy@1 value: 0.42 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.54 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.58 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.7 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.42 name: Dot Precision@1 - type: dot_precision@3 value: 0.35999999999999993 name: Dot Precision@3 - type: dot_precision@5 value: 0.34 name: Dot Precision@5 - type: dot_precision@10 value: 0.29 name: Dot Precision@10 - type: dot_recall@1 value: 0.018848269093365854 name: Dot Recall@1 - type: dot_recall@3 value: 0.07354907247001424 name: Dot Recall@3 - type: dot_recall@5 value: 0.0962744332142314 name: Dot Recall@5 - type: dot_recall@10 value: 0.14178823626517886 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.3352519406973144 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.49602380952380964 name: Dot Mrr@10 - type: dot_map@100 value: 0.14142955254174144 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoNQ type: NanoNQ metrics: - type: dot_accuracy@1 value: 0.56 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.78 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.86 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.56 name: Dot Precision@1 - type: dot_precision@3 value: 0.23333333333333336 name: Dot Precision@3 - type: dot_precision@5 value: 0.16 name: Dot Precision@5 - type: dot_precision@10 value: 0.09399999999999999 name: Dot Precision@10 - type: dot_recall@1 value: 0.54 name: Dot Recall@1 - type: dot_recall@3 value: 0.65 name: Dot Recall@3 - type: dot_recall@5 value: 0.73 name: Dot Recall@5 - type: dot_recall@10 value: 0.83 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6813657040884066 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.647301587301587 name: Dot Mrr@10 - type: dot_map@100 value: 0.6311451301239768 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoQuoraRetrieval type: NanoQuoraRetrieval metrics: - type: dot_accuracy@1 value: 0.86 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.98 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.98 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 1.0 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.86 name: Dot Precision@1 - type: dot_precision@3 value: 0.4 name: Dot Precision@3 - type: dot_precision@5 value: 0.26799999999999996 name: Dot Precision@5 - type: dot_precision@10 value: 0.13799999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.7373333333333332 name: Dot Recall@1 - type: dot_recall@3 value: 0.9353333333333333 name: Dot Recall@3 - type: dot_recall@5 value: 0.9733333333333334 name: Dot Recall@5 - type: dot_recall@10 value: 0.9966666666666666 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.9283913808760963 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.9166666666666665 name: Dot Mrr@10 - type: dot_map@100 value: 0.8996944444444444 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoSCIDOCS type: NanoSCIDOCS metrics: - type: dot_accuracy@1 value: 0.54 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.76 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.82 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.86 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.54 name: Dot Precision@1 - type: dot_precision@3 value: 0.37999999999999995 name: Dot Precision@3 - type: dot_precision@5 value: 0.30400000000000005 name: Dot Precision@5 - type: dot_precision@10 value: 0.204 name: Dot Precision@10 - type: dot_recall@1 value: 0.11466666666666667 name: Dot Recall@1 - type: dot_recall@3 value: 0.23766666666666666 name: Dot Recall@3 - type: dot_recall@5 value: 0.31466666666666665 name: Dot Recall@5 - type: dot_recall@10 value: 0.4196666666666665 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.42030245497944485 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6498333333333332 name: Dot Mrr@10 - type: dot_map@100 value: 0.3374015286377059 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoArguAna type: NanoArguAna metrics: - type: dot_accuracy@1 value: 0.28 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.76 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.9 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.96 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.28 name: Dot Precision@1 - type: dot_precision@3 value: 0.25333333333333335 name: Dot Precision@3 - type: dot_precision@5 value: 0.17999999999999997 name: Dot Precision@5 - type: dot_precision@10 value: 0.09599999999999997 name: Dot Precision@10 - type: dot_recall@1 value: 0.28 name: Dot Recall@1 - type: dot_recall@3 value: 0.76 name: Dot Recall@3 - type: dot_recall@5 value: 0.9 name: Dot Recall@5 - type: dot_recall@10 value: 0.96 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.651941051318052 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.5498571428571428 name: Dot Mrr@10 - type: dot_map@100 value: 0.5515326278659611 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoSciFact type: NanoSciFact metrics: - type: dot_accuracy@1 value: 0.6 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.76 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.76 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.88 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.6 name: Dot Precision@1 - type: dot_precision@3 value: 0.2733333333333334 name: Dot Precision@3 - type: dot_precision@5 value: 0.17599999999999993 name: Dot Precision@5 - type: dot_precision@10 value: 0.1 name: Dot Precision@10 - type: dot_recall@1 value: 0.565 name: Dot Recall@1 - type: dot_recall@3 value: 0.74 name: Dot Recall@3 - type: dot_recall@5 value: 0.76 name: Dot Recall@5 - type: dot_recall@10 value: 0.88 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.7313116540920006 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6887698412698413 name: Dot Mrr@10 - type: dot_map@100 value: 0.6840924219150025 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-information-retrieval name: Sparse Information Retrieval dataset: name: NanoTouche2020 type: NanoTouche2020 metrics: - type: dot_accuracy@1 value: 0.6326530612244898 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.8571428571428571 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.8775510204081632 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.9795918367346939 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.6326530612244898 name: Dot Precision@1 - type: dot_precision@3 value: 0.5986394557823129 name: Dot Precision@3 - type: dot_precision@5 value: 0.5265306122448979 name: Dot Precision@5 - type: dot_precision@10 value: 0.4326530612244897 name: Dot Precision@10 - type: dot_recall@1 value: 0.0443108966783425 name: Dot Recall@1 - type: dot_recall@3 value: 0.12651297913694023 name: Dot Recall@3 - type: dot_recall@5 value: 0.1807810185085916 name: Dot Recall@5 - type: dot_recall@10 value: 0.2908183366162545 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.4946170299181126 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.7585276967930031 name: Dot Mrr@10 - type: dot_map@100 value: 0.3733282842478698 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio - task: type: sparse-nano-beir name: Sparse Nano BEIR dataset: name: NanoBEIR mean type: NanoBEIR_mean metrics: - type: dot_accuracy@1 value: 0.5732810047095762 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7628571428571429 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.8105808477237049 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.8707378335949765 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.5732810047095762 name: Dot Precision@1 - type: dot_precision@3 value: 0.356305599162742 name: Dot Precision@3 - type: dot_precision@5 value: 0.27281004709576134 name: Dot Precision@5 - type: dot_precision@10 value: 0.1866656200941915 name: Dot Precision@10 - type: dot_recall@1 value: 0.3370312131842067 name: Dot Recall@1 - type: dot_recall@3 value: 0.512044128836203 name: Dot Recall@3 - type: dot_recall@5 value: 0.5718216761338938 name: Dot Recall@5 - type: dot_recall@10 value: 0.6465436195186451 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.6117808847297039 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6785680645884727 name: Dot Mrr@10 - type: dot_map@100 value: 0.5323095762329995 name: Dot Map@100 - type: query_active_dims value: 256.0 name: Query Active Dims - type: query_sparsity_ratio value: 0.9375 name: Query Sparsity Ratio - type: corpus_active_dims value: 256.0 name: Corpus Active Dims - type: corpus_sparsity_ratio value: 0.9375 name: Corpus Sparsity Ratio --- # Sparse CSR model trained on Natural Questions This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval. ## Model Details ### Model Description - **Model Type:** CSR Sparse Encoder - **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions) - **Similarity Function:** Dot Product - **Training Dataset:** - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder) ### Full Model Architecture ``` SparseEncoder( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SparseEncoder # Download from the 🤗 Hub model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-no-reconstruction") # Run inference queries = [ "who is cornelius in the book of acts", ] documents = [ 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.', "Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]", 'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 4096] [3, 4096] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[55.6462, 14.4637, 16.8866]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Sparse Information Retrieval * Datasets: `NanoMSMARCO_128`, `NanoNFCorpus_128` and `NanoNQ_128` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 128 } ``` | Metric | NanoMSMARCO_128 | NanoNFCorpus_128 | NanoNQ_128 | |:----------------------|:----------------|:-----------------|:-----------| | dot_accuracy@1 | 0.38 | 0.4 | 0.44 | | dot_accuracy@3 | 0.62 | 0.52 | 0.64 | | dot_accuracy@5 | 0.72 | 0.62 | 0.78 | | dot_accuracy@10 | 0.84 | 0.68 | 0.8 | | dot_precision@1 | 0.38 | 0.4 | 0.44 | | dot_precision@3 | 0.2067 | 0.34 | 0.2133 | | dot_precision@5 | 0.144 | 0.336 | 0.16 | | dot_precision@10 | 0.084 | 0.286 | 0.084 | | dot_recall@1 | 0.38 | 0.0266 | 0.43 | | dot_recall@3 | 0.62 | 0.0858 | 0.6 | | dot_recall@5 | 0.72 | 0.1054 | 0.73 | | dot_recall@10 | 0.84 | 0.1391 | 0.76 | | **dot_ndcg@10** | **0.6038** | **0.3316** | **0.602** | | dot_mrr@10 | 0.5291 | 0.4816 | 0.5625 | | dot_map@100 | 0.5356 | 0.1459 | 0.552 | | query_active_dims | 128.0 | 128.0 | 128.0 | | query_sparsity_ratio | 0.9688 | 0.9688 | 0.9688 | | corpus_active_dims | 128.0 | 128.0 | 128.0 | | corpus_sparsity_ratio | 0.9688 | 0.9688 | 0.9688 | #### Sparse Nano BEIR * Dataset: `NanoBEIR_mean_128` * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters: ```json { "dataset_names": [ "msmarco", "nfcorpus", "nq" ], "max_active_dims": 128 } ``` | Metric | Value | |:----------------------|:-----------| | dot_accuracy@1 | 0.4067 | | dot_accuracy@3 | 0.5933 | | dot_accuracy@5 | 0.7067 | | dot_accuracy@10 | 0.7733 | | dot_precision@1 | 0.4067 | | dot_precision@3 | 0.2533 | | dot_precision@5 | 0.2133 | | dot_precision@10 | 0.1513 | | dot_recall@1 | 0.2789 | | dot_recall@3 | 0.4353 | | dot_recall@5 | 0.5185 | | dot_recall@10 | 0.5797 | | **dot_ndcg@10** | **0.5125** | | dot_mrr@10 | 0.5244 | | dot_map@100 | 0.4112 | | query_active_dims | 128.0 | | query_sparsity_ratio | 0.9688 | | corpus_active_dims | 128.0 | | corpus_sparsity_ratio | 0.9688 | #### Sparse Information Retrieval * Datasets: `NanoMSMARCO_256`, `NanoNFCorpus_256` and `NanoNQ_256` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters: ```json { "max_active_dims": 256 } ``` | Metric | NanoMSMARCO_256 | NanoNFCorpus_256 | NanoNQ_256 | |:----------------------|:----------------|:-----------------|:-----------| | dot_accuracy@1 | 0.44 | 0.42 | 0.56 | | dot_accuracy@3 | 0.66 | 0.54 | 0.7 | | dot_accuracy@5 | 0.78 | 0.58 | 0.78 | | dot_accuracy@10 | 0.84 | 0.7 | 0.86 | | dot_precision@1 | 0.44 | 0.42 | 0.56 | | dot_precision@3 | 0.22 | 0.36 | 0.2333 | | dot_precision@5 | 0.156 | 0.344 | 0.16 | | dot_precision@10 | 0.084 | 0.292 | 0.094 | | dot_recall@1 | 0.44 | 0.0188 | 0.54 | | dot_recall@3 | 0.66 | 0.0735 | 0.65 | | dot_recall@5 | 0.78 | 0.0978 | 0.73 | | dot_recall@10 | 0.84 | 0.1419 | 0.83 | | **dot_ndcg@10** | **0.6402** | **0.3365** | **0.6814** | | dot_mrr@10 | 0.5761 | 0.4958 | 0.6473 | | dot_map@100 | 0.582 | 0.1417 | 0.631 | | query_active_dims | 256.0 | 256.0 | 256.0 | | query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | | corpus_active_dims | 256.0 | 256.0 | 256.0 | | corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | #### Sparse Nano BEIR * Dataset: `NanoBEIR_mean_256` * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters: ```json { "dataset_names": [ "msmarco", "nfcorpus", "nq" ], "max_active_dims": 256 } ``` | Metric | Value | |:----------------------|:-----------| | dot_accuracy@1 | 0.4733 | | dot_accuracy@3 | 0.6333 | | dot_accuracy@5 | 0.7133 | | dot_accuracy@10 | 0.8 | | dot_precision@1 | 0.4733 | | dot_precision@3 | 0.2711 | | dot_precision@5 | 0.22 | | dot_precision@10 | 0.1567 | | dot_recall@1 | 0.3329 | | dot_recall@3 | 0.4612 | | dot_recall@5 | 0.5359 | | dot_recall@10 | 0.604 | | **dot_ndcg@10** | **0.5527** | | dot_mrr@10 | 0.5731 | | dot_map@100 | 0.4515 | | query_active_dims | 256.0 | | query_sparsity_ratio | 0.9375 | | corpus_active_dims | 256.0 | | corpus_sparsity_ratio | 0.9375 | #### Sparse Information Retrieval * Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020` * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) | Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 | |:----------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------| | dot_accuracy@1 | 0.2 | 0.74 | 0.86 | 0.5 | 0.84 | 0.42 | 0.42 | 0.56 | 0.86 | 0.54 | 0.28 | 0.6 | 0.6327 | | dot_accuracy@3 | 0.52 | 0.86 | 0.98 | 0.62 | 0.92 | 0.66 | 0.54 | 0.7 | 0.98 | 0.76 | 0.76 | 0.76 | 0.8571 | | dot_accuracy@5 | 0.56 | 0.92 | 0.98 | 0.64 | 0.96 | 0.78 | 0.58 | 0.78 | 0.98 | 0.82 | 0.9 | 0.76 | 0.8776 | | dot_accuracy@10 | 0.68 | 0.94 | 0.98 | 0.68 | 0.96 | 0.84 | 0.7 | 0.86 | 1.0 | 0.86 | 0.96 | 0.88 | 0.9796 | | dot_precision@1 | 0.2 | 0.74 | 0.86 | 0.5 | 0.84 | 0.42 | 0.42 | 0.56 | 0.86 | 0.54 | 0.28 | 0.6 | 0.6327 | | dot_precision@3 | 0.1933 | 0.5867 | 0.3467 | 0.3133 | 0.4733 | 0.22 | 0.36 | 0.2333 | 0.4 | 0.38 | 0.2533 | 0.2733 | 0.5986 | | dot_precision@5 | 0.132 | 0.556 | 0.208 | 0.224 | 0.316 | 0.156 | 0.34 | 0.16 | 0.268 | 0.304 | 0.18 | 0.176 | 0.5265 | | dot_precision@10 | 0.088 | 0.484 | 0.104 | 0.138 | 0.174 | 0.084 | 0.29 | 0.094 | 0.138 | 0.204 | 0.096 | 0.1 | 0.4327 | | dot_recall@1 | 0.0783 | 0.0837 | 0.8067 | 0.2726 | 0.42 | 0.42 | 0.0188 | 0.54 | 0.7373 | 0.1147 | 0.28 | 0.565 | 0.0443 | | dot_recall@3 | 0.245 | 0.1623 | 0.9433 | 0.4129 | 0.71 | 0.66 | 0.0735 | 0.65 | 0.9353 | 0.2377 | 0.76 | 0.74 | 0.1265 | | dot_recall@5 | 0.2833 | 0.2214 | 0.9433 | 0.4606 | 0.79 | 0.78 | 0.0963 | 0.73 | 0.9733 | 0.3147 | 0.9 | 0.76 | 0.1808 | | dot_recall@10 | 0.3473 | 0.3354 | 0.9433 | 0.5501 | 0.87 | 0.84 | 0.1418 | 0.83 | 0.9967 | 0.4197 | 0.96 | 0.88 | 0.2908 | | **dot_ndcg@10** | **0.2733** | **0.5972** | **0.9054** | **0.4959** | **0.8027** | **0.6355** | **0.3353** | **0.6814** | **0.9284** | **0.4203** | **0.6519** | **0.7313** | **0.4946** | | dot_mrr@10 | 0.3666 | 0.8152 | 0.9133 | 0.5642 | 0.8857 | 0.5694 | 0.496 | 0.6473 | 0.9167 | 0.6498 | 0.5499 | 0.6888 | 0.7585 | | dot_map@100 | 0.2127 | 0.453 | 0.8845 | 0.4426 | 0.7335 | 0.5752 | 0.1414 | 0.6311 | 0.8997 | 0.3374 | 0.5515 | 0.6841 | 0.3733 | | query_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | | query_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | | corpus_active_dims | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | 256.0 | | corpus_sparsity_ratio | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | 0.9375 | #### Sparse Nano BEIR * Dataset: `NanoBEIR_mean` * Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters: ```json { "dataset_names": [ "climatefever", "dbpedia", "fever", "fiqa2018", "hotpotqa", "msmarco", "nfcorpus", "nq", "quoraretrieval", "scidocs", "arguana", "scifact", "touche2020" ] } ``` | Metric | Value | |:----------------------|:-----------| | dot_accuracy@1 | 0.5733 | | dot_accuracy@3 | 0.7629 | | dot_accuracy@5 | 0.8106 | | dot_accuracy@10 | 0.8707 | | dot_precision@1 | 0.5733 | | dot_precision@3 | 0.3563 | | dot_precision@5 | 0.2728 | | dot_precision@10 | 0.1867 | | dot_recall@1 | 0.337 | | dot_recall@3 | 0.512 | | dot_recall@5 | 0.5718 | | dot_recall@10 | 0.6465 | | **dot_ndcg@10** | **0.6118** | | dot_mrr@10 | 0.6786 | | dot_map@100 | 0.5323 | | query_active_dims | 256.0 | | query_sparsity_ratio | 0.9375 | | corpus_active_dims | 256.0 | | corpus_sparsity_ratio | 0.9375 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### natural-questions * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 99,000 training samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> | * Samples: | query | answer | |:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> | | <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> | | <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> | * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters: ```json { "beta": 0.1, "gamma": 1.0, "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')" } ``` ### Evaluation Dataset #### natural-questions * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 1,000 evaluation samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | query | answer | |:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> | | <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> | | <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> | * Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters: ```json { "beta": 0.1, "gamma": 1.0, "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 4e-05 - `num_train_epochs`: 1 - `bf16`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 4e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_128_dot_ndcg@10 | NanoNFCorpus_128_dot_ndcg@10 | NanoNQ_128_dot_ndcg@10 | NanoBEIR_mean_128_dot_ndcg@10 | NanoMSMARCO_256_dot_ndcg@10 | NanoNFCorpus_256_dot_ndcg@10 | NanoNQ_256_dot_ndcg@10 | NanoBEIR_mean_256_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | |:----------:|:--------:|:-------------:|:---------------:|:---------------------------:|:----------------------------:|:----------------------:|:-----------------------------:|:---------------------------:|:----------------------------:|:----------------------:|:-----------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:-----------------------:|:------------------------:|:------------------:|:------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:|:-------------------------:| | -1 | -1 | - | - | 0.6253 | 0.3224 | 0.5893 | 0.5123 | 0.6112 | 0.3278 | 0.6352 | 0.5248 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.0646 | 100 | 0.0542 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.1293 | 200 | 0.0566 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.1939 | 300 | 0.0455 | 0.0390 | 0.5697 | 0.3083 | 0.6074 | 0.4952 | 0.5709 | 0.3402 | 0.6637 | 0.5249 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.2586 | 400 | 0.0445 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.3232 | 500 | 0.0463 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.3878 | 600 | 0.056 | 0.0454 | 0.5981 | 0.3334 | 0.6076 | 0.5130 | 0.6217 | 0.3417 | 0.6337 | 0.5324 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.4525 | 700 | 0.0505 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.5171 | 800 | 0.0549 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.5818 | 900 | 0.0614 | 0.0350 | 0.6058 | 0.3401 | 0.6084 | 0.5181 | 0.6293 | 0.3178 | 0.6585 | 0.5352 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.6464 | 1000 | 0.0519 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.7111 | 1100 | 0.039 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.7757 | 1200 | 0.045 | 0.0384 | 0.6045 | 0.3348 | 0.6124 | 0.5172 | 0.6227 | 0.3333 | 0.6829 | 0.5463 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.8403 | 1300 | 0.0536 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.9050 | 1400 | 0.0389 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | **0.9696** | **1500** | **0.0413** | **0.0401** | **0.6038** | **0.3316** | **0.602** | **0.5125** | **0.6402** | **0.3365** | **0.6814** | **0.5527** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | | -1 | -1 | - | - | - | - | - | - | - | - | - | - | 0.2733 | 0.5972 | 0.9054 | 0.4959 | 0.8027 | 0.6355 | 0.3353 | 0.6814 | 0.9284 | 0.4203 | 0.6519 | 0.7313 | 0.4946 | 0.6118 | * The bold row denotes the saved checkpoint. ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.138 kWh - **Carbon Emitted**: 0.054 kg of CO2 - **Hours Used**: 0.409 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 4.2.0.dev0 - Transformers: 4.52.4 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.1 - Datasets: 2.21.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CSRLoss ```bibtex @misc{wen2025matryoshkarevisitingsparsecoding, title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation}, author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You}, year={2025}, eprint={2503.01776}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.01776}, } ``` #### SparseMultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
johngreendr1/f2b6534e-d255-41d1-9d75-0b076b27d85e
johngreendr1
2025-06-19T11:44:45Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-Coder-7B", "base_model:adapter:unsloth/Qwen2.5-Coder-7B", "region:us" ]
null
2025-06-19T11:13:10Z
--- base_model: unsloth/Qwen2.5-Coder-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
New-clip-mezzo-fun-18-viral-videos/New.tutorial.mezzo.fun.Viral.Video.Leaks.Official
New-clip-mezzo-fun-18-viral-videos
2025-06-19T11:42:45Z
0
0
null
[ "region:us" ]
null
2025-06-19T11:42:28Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Mayank03Rana/DivYield
Mayank03Rana
2025-06-19T11:40:48Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-19T11:40:43Z
--- license: apache-2.0 ---
LumiOpen/Llama-Poro-2-8B-SFT
LumiOpen
2025-06-19T11:39:59Z
0
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "fi", "en", "dataset:LumiOpen/poro2-instruction-collection", "license:llama3.3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-13T12:26:55Z
--- datasets: - LumiOpen/poro2-instruction-collection language: - fi - en license: llama3.3 library_name: transformers pipeline_tag: text-generation --- # Poro 2 8B SFT Model Card > **Note for most users**: This is an intermediate checkpoint from our post-training pipeline. Most users should use [Poro 2 8B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) instead, which includes an additional round of Direct Preference Optimization (DPO) for improved response quality and alignment. This SFT-only model is primarily intended for researchers interested in studying the effects of different post-training techniques. Poro 2 8B SFT is a supervised fine-tuned model created from the Poro 2 8B Base model. This model has been trained for instruction following and conversational AI applications in both Finnish and English, but has not undergone preference tuning. It represents the intermediate step before Direct Preference Optimization (DPO) in our post-training pipeline. Poro 2 was created in a collaboration between [AMD Silo AI](https://www.amd.com/en/solutions/ai/silo-ai.html), the [TurkuNLP group](https://turkunlp.org/) of the University of Turku, and [High Performance Language Technologies](https://hplt-project.org/) (HPLT). Training was conducted on the [LUMI supercomputer](https://www.lumi-supercomputer.eu/), using compute resources generously provided by [CSC](https://csc.fi/) - IT Center for Science, Finland. For more details on our training and data generation pipeline, check out our [Continued Pretraining Playbook](https://rocm.blogs.amd.com/artificial-intelligence/multilingual-continued-pretraining/README.html). ## Poro 2 Model Family The Poro 2 model family includes both 8B and 70B models, and there are three different versions released of the Poro 2 models: a base model, a post-training SFT-only checkpoint, and the final instruct model which is the SFT model plus a round of DPO. | Model | Based on | Base Model | SFT | Instruct | | :---: | :------: | :--------: | :-: | :------- | | Poro 2 8B | Llama 3.1 8B | [Poro 2 8B Base](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-base) | [Poro 2 8B SFT](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-SFT) | [Poro 2 8B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) | | Poro 2 70B | Llama 3.1 70B | [Poro 2 70B Base](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-base) | [Poro 2 70B SFT](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-SFT) | [Poro 2 70B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-Instruct) | _What does Poro mean?_ Poro is the Finnish word for Reindeer! 🦌 These animals are native to Finland and hold a significant and historical role in Finnish culture. ## Model Overview Poro 2 8B SFT is based on the Llama 3.1 8B architecture and has been supervised fine-tuned for instruction following. The model supports both English and Finnish conversations but has not undergone preference tuning for response quality optimization. | Hyperparameter | Value | | :------------- | :----: | | n_parameters | 8.03B | | n_layers | 32 | | n_heads | 32 | | n_kv_heads | 8 | | d_model | 4096 | | vocab_size | 128256 | | max_sequence_length | 8192 | | base_model | Llama-3.1-8B | ## Training Process ### Continued Pretraining The base Poro 2 8B model was created through continued pretraining on 165B tokens of Finnish, English, code, and math data. ### Supervised Fine-Tuning (SFT) This model represents the SFT phase of post-training, using 1.4M instruction-following examples in English and Finnish, including: - English and Finnish Tulu 3 prompts with Llama-3.3-70B-Instruct responses (1.35M samples) - Multi-turn conversations generated using the Magpie method (14K samples) - Top-rated conversations from OASST2 and Avoin Avustaja datasets (5K samples) - Translation samples from EuroParl (1K samples) We release the [Poro 2 instruction collection](https://huggingface.co/datasets/LumiOpen/poro2-instruction-collection). ## SFT Hyperparameters | Hyperparameter | Value | | :------------: | :---: | | Epochs | 2 | | Global batch size | 64 | | Learning rate | 5e-6 | | LR scheduler | linear | | Warmup ratio | 0.03 | | Max sequence length | 4,096 | ## Evaluation Results Poro 2 8B SFT shows substantial improvements in Finnish instruction-following capabilities compared to Llama 3.1 8B Instruct, while maintaining strong English performance. Note that the final Instruct model (with DPO) performs significantly better. ### Finnish Instruction Following | | Poro 2 8B SFT | Llama 3.1 8B Instruct | Poro 2 8B Instruct | |----------------|------------------|------------------------|--------------------| | IFEval Finnish | 64.69 | 47.31 | **66.54** | | MTBench Finnish | 5.92 | 4.10 | **6.75** | | AlpacaEval 2 Finnish | 16.80 | 2.05 | **28.89** | ### English Instruction Following | | Poro 2 8B SFT | Llama 3.1 8B Instruct | Poro 2 8B Instruct | |----------------|--------|------------------------|--------------------| | IFEval | **79.66** | 79.48 | 79.29 | | MTBench | 7.07 | **7.70** | 7.33 | | AlpacaEval 2 | 29.67 | 32.70 | **35.30** | **Overall**: ~16% average improvement in Finnish instruction-following benchmarks compared to Llama 3.1 8B Instruct, with maintained English performance. The additional DPO step in the Instruct model provides further improvements. ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_name = "LumiOpen/Poro-2-8B-SFT" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map="auto" ) # Finnish conversation example messages = [ {"role": "user", "content": "Kerro minulle Suomen historiasta."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ) outputs = model.generate( inputs, max_new_tokens=500, temperature=0.7, do_sample=True, pad_token_id=tokenizer.eos_token_id ) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ## Research Applications This SFT-only model is particularly useful for researchers studying: - The effects of supervised fine-tuning vs. preference tuning - Comparative analysis of different post-training techniques - Ablation studies on instruction-following capabilities - Cross-lingual transfer in instruction-following tasks - The impact of DPO on model behavior and alignment ## Intended Use Poro 2 8B SFT is primarily intended for: - **Research purposes**: Studying post-training techniques and their effects - **Comparative analysis**: Understanding the contribution of different training phases - **Educational applications**: Learning about instruction-following model development - **Development**: As a starting point for further preference tuning experiments **For production use cases**, we recommend using [Poro 2 8B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) instead. ## Ethical Considerations and Limitations Poro 2 8B SFT is a research checkpoint optimized for English and Finnish instruction following. As this model has not undergone preference tuning, it may be more prone to generating responses that are misaligned with user expectations compared to the final Instruct model. Key limitations: - **No preference tuning**: May generate responses that are less aligned or of lower quality than the Instruct version - Limited proficiency in languages other than English and Finnish - May occasionally generate biased, inappropriate, or factually incorrect content - Performance may vary significantly for specialized or technical domains - Context window limited to 8,192 tokens - May struggle with very recent events (knowledge cutoff limitations) **Safety Considerations:** - This model should primarily be used for research purposes - Users should verify important factual claims independently - The model should not be used for medical, legal, or financial advice without human oversight - Responses should be reviewed for appropriateness in sensitive contexts - Consider using the Instruct version for better alignment and response quality ## License Built with Llama Poro 2 8B SFT is released under the Llama 3.3 Community License. Please review the license terms before use. ## Citation ```bibtex @misc{poro2_2025, title={Poro 2: Continued Pretraining for Language Acquisition}, author={Elaine Zosa and Jouni Louma and Kai Hakala and Antti Virtanen and Mika Koistinen and Risto Luukkonen and Akseli Reunamo and Sampo Pyysalo and Jonathan Burdge}, year={2025}, howpublished={LumiOpen} } ``` ## Acknowledgments We thank CSC - IT Center for Science, Finland for providing access to the LUMI supercomputer. This work was supported by the High Performance Language Technologies (HPLT) project and conducted in collaboration with TurkuNLP from the University of Turku. This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350.
LumiOpen/Llama-Poro-2-70B-base
LumiOpen
2025-06-19T11:39:02Z
0
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "fi", "en", "dataset:HuggingFaceFW/fineweb-2", "dataset:HuggingFaceFW/fineweb-edu", "dataset:bigcode/starcoderdata", "dataset:HuggingFaceTB/finemath", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-27T11:58:11Z
--- datasets: - HuggingFaceFW/fineweb-2 - HuggingFaceFW/fineweb-edu - bigcode/starcoderdata - HuggingFaceTB/finemath language: - fi - en license: llama3.1 library_name: transformers pipeline_tag: text-generation --- # Poro 2 70B Base Model Card Poro 2 70B Base is a 70B parameter decoder-only transformer created through continued pretraining of Llama 3.1 70B to add Finnish language capabilities. It was trained on 165B tokens using a carefully balanced mix of Finnish, English, code, and math data. Poro 2 is a fully open source model and is made available under the Llama 3.1 Community License. Poro 2 was created in a collaboration between [AMD Silo AI](https://www.amd.com/en/solutions/ai/silo-ai.html), the [TurkuNLP group](https://turkunlp.org/) of the University of Turku, and [High Performance Language Technologies](https://hplt-project.org/) (HPLT). Training was conducted on the [LUMI supercomputer](https://www.lumi-supercomputer.eu/), using compute resources generously provided by [CSC](https://csc.fi/) - IT Center for Science, Finland. This model demonstrates how continued pretraining can efficiently add new language capabilities to existing models while maintaining performance in the original domains. Through the combination of English and Finnish training data, we achieve a model that substantially outperforms the base Llama 3.1 70B model in Finnish while maintaining excellent English proficiency. For more details on our training and data curation process, check out our [Continued Pretraining Playbook](https://rocm.blogs.amd.com/artificial-intelligence/multilingual-continued-pretraining/README.html). ## Poro 2 Model Family The Poro 2 model family includes both 8B and 70B models, and there are three different versions released of the Poro 2 models: a base model, a post-training SFT-only checkpoint, and the final instruct model which is the SFT model plus a round of DPO. | Model | Based on | Base Model | SFT | Instruct | | :---: | :------: | :--------: | :-: | :------- | | Poro 2 8B | Llama 3.1 8B | [Poro 2 8B Base](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-base) | [Poro 2 8B SFT](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-SFT) | [Poro 2 8B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) | | Poro 2 70B | Llama 3.1 70B | [Poro 2 70B Base](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-base) | [Poro 2 70B SFT](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-SFT) | [Poro 2 70B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-Instruct) | _What does Poro mean?_ Poro is the Finnish word for Reindeer! 🦌 These animals are native to Finland and hold a significant and historical role in Finnish culture. ## Model Overview **NOTE:** This is a base model which needs further fine tuning for most use cases. Poro 2 70B is based on the Llama 3.1 70B architecture and uses continued pretraining to add Finnish language capabilities. | Hyperparameter | Value | | :------------- | :----: | | n_parameters | 70.55B | | n_layers | 80 | | n_heads | 64 | | n_kv_heads | 8 | | d_model | 8192 | | vocab_size | 128256 | | max_sequence_length | 8192 | | base_model | Llama-3.1-70B | ## Training Poro 2 70B was created through continued pretraining on the LUMI supercomputer, using AMD MI250X GPUs. Training used a 3D parallelism strategy with TP=8, PP=8. Training was conducted using a custom version of the Megatron-LM framework. Our code is available at [https://github.com/LumiOpen/Megatron-LM-lumi](https://github.com/LumiOpen/Megatron-LM-lumi). ## Training Hyperparameters | Hyperparameter | Value | Comment | | :------------: | :---: | :------:| | Precision | bfloat16 | | | Optimizer | AdamW | | | Learning rate | 1.5e-4 | | | LR scheduler | cosine | Warmup ratio 0.05, min LR 1e-8 | | Weight decay | 1e-1 | | | Global batch size | 512 | | | Micro batch size | 1 | | | Max sequence length | 8192 | | | Total tokens | 165B | 1 epoch | ## Dataset Poro 2 70B was trained on a balanced 165B token dataset designed to maintain English, code, and math capabilities while adding Finnish proficiency. | Dataset | Source | Percentage | Tokens | | :-----: | :----: | :--------: | :----: | | Finnish | FineWeb2 | 30% | 50B | | English | FineWeb-Edu | 30% | 50B | | Code | StarCoder | 30% | 50B | | Math | FineMath | 10% | 16B | | **Total** | | **100%** | **165B** | ## Evaluation Results Poro 2 70B shows substantial improvements in Finnish capabilities over Llama 3.1 70B, while maintaining and in some cases improving English performance. ### Finnish Performance | | Poro 2 70B | Llama 3.1 70B | |-----------------|------------------|----------------| | ARC Challenge | **61.01** | 54.52 | | HellaSwag | **58.07** | 52.10 | | MMLU | **73.76** | 71.29 | | TruthfulQA | **55.53** | 53.64 | | GSM8K | **72.78** | 69.90 | ### English Performance | | Poro 2 70B | Llama 3.1 70B | |-----------------|------------------|----------------| | ARC Challenge | **69.97** | 69.45 | | HellaSwag | **87.85** | 87.81 | | MMLU | 78.20 | **78.59** | | TruthfulQA | **51.43** | 49.78 | | GSM8K | **81.35** | 81.05 | ### Translation Performance | | Poro 2 70B | Llama 3.1 70B | |----------------------------|--------|----------------| | EN→FI BLEU | **40.03** | 35.02 | | FI→EN BLEU | **43.04** | 41.67 | | EN→FI chrF | **62.50** | 59.16 | | FI→EN chrF | **64.16** | 63.03 | ### Code Performance | | Poro 2 70B | Llama 3.1 70B | |----------------------------|--------|----------------| | HumanEval pass@10 | **71.34** | 64.63 | **Overall**: ~4 percentage point average improvement in Finnish benchmarks while maintaining excellent English performance (slight average improvement of ~0.4 percentage points). ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_name = "LumiOpen/Poro-2-70B-base" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map="auto" ) # Example usage prompt = "Kerro minulle Suomesta." # "Tell me about Finland" in Finnish inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=200, temperature=0.7) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ## Ethical Considerations and Limitations Poro 2 70B is an advanced language model optimized for English and Finnish, with additional capabilities in code and mathematics. As with most AI-driven systems, Poro 2 is a product of the vast data it has been trained on, which may reflect the imperfections, biases, and idiosyncrasies of the wider web. The model may, at times, produce outputs that can be considered inaccurate, prejudiced, or controversial. Key limitations: - Limited proficiency in languages other than English and Finnish - Potential for generating biased or inappropriate content - May produce factually incorrect information ## License Built with Llama Poro 2 70B is released under the Llama 3.1 Community License. Please review the license terms before use. ## Citation ```bibtex @misc{poro2_2025, title={Poro 2: Continued Pretraining for Language Acquisition}, author={Elaine Zosa and Jouni Louma and Kai Hakala and Antti Virtanen and Mika Koistinen and Risto Luukkonen and Akseli Reunamo and Sampo Pyysalo and Jonathan Burdge}, year={2025}, howpublished={LumiOpen} } ``` ## Acknowledgments We thank CSC - IT Center for Science, Finland for providing access to the LUMI supercomputer. This work was supported by the High Performance Language Technologies (HPLT) project and conducted in collaboration with TurkuNLP from the University of Turku. This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350.
neural-interactive-proofs/finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_12-35-00_Qwen_Qwen2.5-0.5B-I
neural-interactive-proofs
2025-06-19T11:35:42Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-06-19T11:35:39Z
--- base_model: Qwen/Qwen2.5-0.5B-Instruct library_name: transformers model_name: finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_12-35-00_Qwen_Qwen2.5-0.5B-I tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_12-35-00_Qwen_Qwen2.5-0.5B-I This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_12-35-00_Qwen_Qwen2.5-0.5B-I", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/Qwen_Qwen2.5-0.5B-Instruct_dpo_2025-06-19_12-35-00_cv_test_lm_server_47_0_iter_0_provers_group) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 2.21.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Guinimos/14B-DeepSeek-Psychology-Tuned-1000-Params-30Epochs
Guinimos
2025-06-19T11:35:32Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T10:56:26Z
--- base_model: unsloth/deepseek-r1-distill-qwen-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Guinimos - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-14b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
asimov-law/gemma-guard-4b-0619
asimov-law
2025-06-19T11:34:00Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-19T11:33:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yuanzhoulvpi/chinese_bloom_7b_chat_v2
yuanzhoulvpi
2025-06-19T11:26:48Z
25
5
transformers
[ "transformers", "pytorch", "safetensors", "bloom", "text-generation", "zh", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-03T14:19:48Z
--- license: bigscience-bloom-rail-1.0 language: - zh --- # 体验链接 1. 🔗[http://101.68.79.42:7861/](http://101.68.79.42:7861/) ## 🚀更新 | 模型链接 | 训练的数据量 | 模型版本 | 备注 | |------------------------------------------------------------------------------------------------------------------------------|-----------|------|------------------------| | [https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat](https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat) | 15w中文指令数据 | v1 | | | [https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat_v2](https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat_v2) | 150w条中文指令数据 | v2 | 目前已经测试过效果,相较于v1,效果有所提升 | | [https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat_v3](https://huggingface.co/yuanzhoulvpi/chinese_bloom_7b_chat_v3) | 420w条中文指令数据 | v3 | 目前效果还没测试,欢迎大家测试 | ## 介绍 1. ✅ 对`bloom-7b`模型做了sft,本次版本为V2版本(使用了150w条有监督数据做sft),相较于V1版本,效果更好!!! 2. 🚀 训练代码和推理代码全部分享,可以查看链接[https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom](https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom) ## 如何使用 ```python from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "yuanzhoulvpi/chinese_bloom_7b_chat_v2"#"bigscience/bloomz-3b" #"bigscience/bloom-7b1"# "output_dir/checkpoint-8260"# tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint).half().cuda() PROMPT_DICT = { "prompt_input": ( "Below is an instruction that describes a task, paired with an input that provides further context. " "Write a response that appropriately completes the request.\n\n" "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" ), "prompt_no_input": ( "Below is an instruction that describes a task. " "Write a response that appropriately completes the request.\n\n" "### Instruction:\n{instruction}\n\n### Response:" ), } from typing import Optional def generate_input(instruction:Optional[str]= None, input_str:Optional[str] = None) -> str: if input_str is None: return PROMPT_DICT['prompt_no_input'].format_map({'instruction':instruction}) else: return PROMPT_DICT['prompt_input'].format_map({'instruction':instruction, 'input':input_str}) for i in range(5): print("*"*80) inputs = tokenizer.encode(generate_input(instruction="你是谁"), return_tensors="pt") outputs = model.generate(inputs,num_beams=3, max_new_tokens=512, do_sample=False, top_k=10, penalty_alpha=0.6, temperature=0.8, repetition_penalty=1.2) print(tokenizer.decode(outputs[0])) ```
Velkey-J/bert-finetuned-ner-domain-spec-new
Velkey-J
2025-06-19T11:25:23Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-06-19T11:22:55Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-finetuned-ner-domain-spec-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-domain-spec-new This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
ccgtay/zephyr-7b-prompt-classifier-adapter
ccgtay
2025-06-19T11:25:08Z
0
0
null
[ "safetensors", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-18T21:55:38Z
--- license: apache-2.0 ---