modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-29 06:27:56
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 534
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-29 06:27:11
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Official-mezzo-fun-18-Viral-videos-Links/18.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-18-Viral-videos-Links
| 2025-06-20T19:58:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:51:43Z |
[๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )](https://videohere.top/?mezzo-fun)
[โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐คโค๏ธโค๏ธโฌ๏ธโฌ๏ธโ](https://videohere.top/?mezzo-fun)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?mezzo-fun)
|
Official-mezzo-fun-18-Viral-videos-Links/VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-18-Viral-videos-Links
| 2025-06-20T19:58:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:52:41Z |
[๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )](https://videohere.top/?mezzo-fun)
[โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐คโค๏ธโค๏ธโฌ๏ธโฌ๏ธโ](https://videohere.top/?mezzo-fun)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?mezzo-fun)
|
jobz-hunting-sajal-malik-19/wATCH.jobz.hunting.sajal.malik.viral.video.original
|
jobz-hunting-sajal-malik-19
| 2025-06-20T19:57:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:54:44Z |
[๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )](https://videohere.top/?jobz-hunting-sajal-malik)
[โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐คโค๏ธโค๏ธโฌ๏ธโฌ๏ธโ](https://videohere.top/?jobz-hunting-sajal-malik)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting-sajal-malik)
|
BootesVoid/cmc51n42202anbfif6aiqv711_cmc51sgkj02azbfiftzrbvy8q
|
BootesVoid
| 2025-06-20T19:56:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T19:56:47Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmc51N42202Anbfif6Aiqv711_Cmc51Sgkj02Azbfiftzrbvy8Q
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cmc51n42202anbfif6aiqv711_cmc51sgkj02azbfiftzrbvy8q/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc51n42202anbfif6aiqv711_cmc51sgkj02azbfiftzrbvy8q', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc51n42202anbfif6aiqv711_cmc51sgkj02azbfiftzrbvy8q/discussions) to add images that show off what youโve made with this LoRA.
|
a2z-jankari-sapna-shah-viral-video-18/FULL.VIDEO.a2z.jankari.Viral.Video.Tutorial.Official
|
a2z-jankari-sapna-shah-viral-video-18
| 2025-06-20T19:56:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:53:45Z |
[๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=a2z-jankari-sapna-shah-viral-video)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=a2z-jankari-sapna-shah-viral-video)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=a2z-jankari-sapna-shah-viral-video)
|
bruhzair/prototype-0.4x149
|
bruhzair
| 2025-06-20T19:48:15Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T02:05:42Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x149
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
* /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
* /workspace/cache/models--nbeerbower--Llama-3.1-Nemotron-lorablated-70B/snapshots/713defaa340007a0163832318b7b70d1880770f1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
- model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
- model: /workspace/cache/models--nbeerbower--Llama-3.1-Nemotron-lorablated-70B/snapshots/713defaa340007a0163832318b7b70d1880770f1
base_model: /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
merge_method: model_stock
tokenizer:
source: base
int8_mask: true
dtype: bfloat16
```
|
pakcricketinfo-sapna-shah/wATCH.pakcricketinfo.sapna.shah.viral.video.original
|
pakcricketinfo-sapna-shah
| 2025-06-20T19:45:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:44:14Z |
[๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=pakcricketinfo-sapna-shah)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=pakcricketinfo-sapna-shah)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=pakcricketinfo-sapna-shah)
|
pakcricketinfo-sapna-shah/Live.Vido.Full.18.pakcricketinfo.sapna.shah
|
pakcricketinfo-sapna-shah
| 2025-06-20T19:45:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:41:52Z |
[๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=pakcricketinfo-sapna-shah)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=pakcricketinfo-sapna-shah)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=pakcricketinfo-sapna-shah)
|
Fayaz/grpo_legal_extractor_qwen3_4b
|
Fayaz
| 2025-06-20T19:44:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"grpo",
"arxiv:2402.03300",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T19:44:48Z |
---
base_model: unsloth/Qwen3-4B-Base
library_name: transformers
model_name: grpo_legal_extractor_qwen3_4b
tags:
- generated_from_trainer
- unsloth
- trl
- sft
- grpo
licence: license
---
# Model Card for grpo_legal_extractor_qwen3_4b
This model is a fine-tuned version of [unsloth/Qwen3-4B-Base](https://huggingface.co/unsloth/Qwen3-4B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Fayaz/grpo_legal_extractor_qwen3_4b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
dltest1234567/testmodel
|
dltest1234567
| 2025-06-20T19:43:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-13T20:46:07Z |
---
license: apache-2.0
---
|
gabriellarson/Mistral-Small-3.2-24B-Instruct-2506-GGUF
|
gabriellarson
| 2025-06-20T19:43:48Z | 0 | 3 |
vllm
|
[
"vllm",
"gguf",
"image-text-to-text",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"base_model:quantized:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"license:apache-2.0",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-06-20T18:30:14Z |
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Mistral-Small-3.2-24B-Instruct-2506
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: image-text-to-text
---
GGUF created using chat_template.json, preprocessor_config.json, processor_config.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json from mistralai/Mistral-Small-3.1-24B-Instruct-2503
mmproj from unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF
# Mistral-Small-3.2-24B-Instruct-2506
Mistral-Small-3.2-24B-Instruct-2506 is a minor update of [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).
Small-3.2 improves in the following categories:
- **Instruction following**: Small-3.2 is better at following precise instructions
- **Repetition errors**: Small-3.2 produces less infinite generations or repetitive answers
- **Function calling**: Small-3.2's function calling template is more robust (see [here](https://github.com/mistralai/mistral-common/blob/535b4d0a0fc94674ea17db6cf8dc2079b81cbcfa/src/mistral_common/tokens/tokenizers/instruct.py#L778) and [examples](#function-calling))
In all other categories Small-3.2 should match or slightly improve compared to [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).
## Key Features
- same as [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503#key-features)
## Benchmark Results
We compare Mistral-Small-3.2-24B to [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503).
For more comparison against other models of similar size, please check [Mistral-Small-3.1's Benchmarks'](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503#benchmark-results)
### Text
#### Instruction Following / Chat / Tone
| Model | Wildbench v2 | Arena Hard v2 | IF (Internal; accuracy) |
|-------|---------------|---------------|------------------------|
| Small 3.1 24B Instruct | 55.6% | 19.56% | 82.75% |
| **Small 3.2 24B Instruct** | **65.33%** | **43.1%** | **84.78%** |
#### Infinite Generations
Small 3.2 reduces infitine generations by 2x on challenging, long and repetitive prompts.
| Model | Infinite Generations (Internal; Lower is better) |
|-------|-------|
| Small 3.1 24B Instruct | 2.11% |
| **Small 3.2 24B Instruct** | **1.29%** |
#### STEM
| Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT )| MBPP Plus - Pass@5 | HumanEval Plus - Pass@5 | SimpleQA (TotalAcc)|
|--------------------------------|-----------|-----------------------|------------------------|------------------------|---------------------------|--------------------|-------------------------|--------------------|
| Small 3.1 24B Instruct | 80.62% | 66.76% | 69.30% | 44.42% | 45.96% | 74.63% | 88.99% | 10.43% |
| **Small 3.2 24B Instruct** | 80.50% | **69.06%** | 69.42% | 44.22% | 46.13% | **78.33%** | **92.90%** | **12.10%** |
### Vision
| Model | MMMU | Mathvista | ChartQA | DocVQA | AI2D |
|--------------------------------|------------|-----------|-----------|-----------|-----------|
| Small 3.1 24B Instruct | **64.00%** | **68.91%**| 86.24% | 94.08% | 93.72% |
| **Small 3.2 24B Instruct** | 62.50% | 67.09% | **87.4%** | 94.86% | 92.91% |
## Usage
The model can be used with the following frameworks;
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to the model to best tailer it for your needs. If you want to use the model as a general assistant, we recommend to use the one provided in the [SYSTEM_PROMPT.txt](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506/blob/main/SYSTEM_PROMPT.txt) file.
### vLLM (recommended)
We recommend using this model with [vLLM](https://github.com/vllm-project/vllm).
#### Installation
Make sure to install [`vLLM >= 0.9.1`](https://github.com/vllm-project/vllm/releases/tag/v0.9.1):
```
pip install vllm --upgrade
```
Doing so should automatically install [`mistral_common >= 1.6.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.6.2).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Serve
We recommand that you use Mistral-Small-3.2-24B-Instruct-2506 in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Mistral-Small-3.2-24B-Instruct-2506 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2
```
**Note:** Running Mistral-Small-3.2-24B-Instruct-2506 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.
2. To ping the client you can use a simple Python snippet. See the following examples.
#### Vision reasoning
Take leverage of the vision capabilities of Mistral-Small-3.2-24B-Instruct-2506 to take the best choice given a scenario, go catch them all !
<details>
<summary>Python snippet</summary>
```py
from datetime import datetime, timedelta
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.15
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
)
print(response.choices[0].message.content)
# In this situation, you are playing a Pokรฉmon game where your Pikachu (Level 42) is facing a wild Pidgey (Level 17). Here are the possible actions you can take and an analysis of each:
# 1. **FIGHT**:
# - **Pros**: Pikachu is significantly higher level than the wild Pidgey, which suggests that it should be able to defeat Pidgey easily. This could be a good opportunity to gain experience points and possibly items or money.
# - **Cons**: There is always a small risk of Pikachu fainting, especially if Pidgey has a powerful move or a status effect that could hinder Pikachu. However, given the large level difference, this risk is minimal.
# 2. **BAG**:
# - **Pros**: You might have items in your bag that could help in this battle, such as Potions, Pokรฉ Balls, or Berries. Using an item could help you capture the Pidgey or heal your Pikachu if needed.
# - **Cons**: Using items might not be necessary given the level difference. It could be more efficient to just fight and defeat the Pidgey quickly.
# 3. **POKรMON**:
# - **Pros**: You might have another Pokรฉmon in your party that is better suited for this battle or that you want to gain experience. Switching Pokรฉmon could also be a strategic move if you want to train a lower-level Pokรฉmon.
# - **Cons**: Switching Pokรฉmon might not be necessary since Pikachu is at a significant advantage. It could also waste time and potentially give Pidgey a turn to attack.
# 4. **RUN**:
# - **Pros**: Running away could save time and conserve your Pokรฉmon's health and resources. If you are in a hurry or do not need the experience or items, running away is a safe option.
# - **Cons**: Running away means you miss out on the experience points and potential items or money that you could gain from defeating the Pidgey. It also means you do not get the chance to capture the Pidgey if you wanted to.
# ### Recommendation:
# Given the significant level advantage, the best action is likely to **FIGHT**. This will allow you to quickly defeat the Pidgey, gain experience points, and potentially earn items or money. If you are concerned about Pikachu's health, you could use an item from your **BAG** to heal it before or during the battle. Running away or switching Pokรฉmon does not seem necessary in this situation.
```
</details>
#### Function calling
Mistral-Small-3.2-24B-Instruct-2506 is excellent at function / tool calling tasks via vLLM. *E.g.:*
<details>
<summary>Python snippet - easy</summary>
```py
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.15
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png"
tools = [
{
"type": "function",
"function": {
"name": "get_current_population",
"description": "Get the up-to-date population of a given country.",
"parameters": {
"type": "object",
"properties": {
"country": {
"type": "string",
"description": "The country to find the population of.",
},
"unit": {
"type": "string",
"description": "The unit for the population.",
"enum": ["millions", "thousands"],
},
},
"required": ["country", "unit"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "bbc5b7ede",
"type": "function",
"function": {
"name": "rewrite",
"arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
},
}
],
},
{
"role": "tool",
"content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
"tool_call_id": "bbc5b7ede",
"name": "rewrite",
},
{
"role": "assistant",
"content": "---\n\nOpenAI is a FOR-profit company.",
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Can you tell me what is the biggest country depicted on the map?",
},
{
"type": "image_url",
"image_url": {
"url": image_url,
},
},
],
}
]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
tools=tools,
tool_choice="auto",
)
assistant_message = response.choices[0].message.content
print(assistant_message)
# The biggest country depicted on the map is Russia.
messages.extend([
{"role": "assistant", "content": assistant_message},
{"role": "user", "content": "What is the population of that country in millions?"},
])
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
tools=tools,
tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
# [ChatCompletionMessageToolCall(id='3e92V6Vfo', function=Function(arguments='{"country": "Russia", "unit": "millions"}', name='get_current_population'), type='function')]
```
</details>
<details>
<summary>Python snippet - complex</summary>
```python
import json
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.15
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
image_url = "https://math-coaching.com/img/fiche/46/expressions-mathematiques.jpg"
def my_calculator(expression: str) -> str:
return str(eval(expression))
tools = [
{
"type": "function",
"function": {
"name": "my_calculator",
"description": "A calculator that can evaluate a mathematical expression.",
"parameters": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "The mathematical expression to evaluate.",
},
},
"required": ["expression"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Can you calculate the results for all the equations displayed in the image? Only compute the ones that involve numbers.",
},
{
"type": "image_url",
"image_url": {
"url": image_url,
},
},
],
},
]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
tools=tools,
tool_choice="auto",
)
tool_calls = response.choices[0].message.tool_calls
print(tool_calls)
# [ChatCompletionMessageToolCall(id='CyQBSAtGh', function=Function(arguments='{"expression": "6 + 2 * 3"}', name='my_calculator'), type='function'), ChatCompletionMessageToolCall(id='KQqRCqvzc', function=Function(arguments='{"expression": "19 - (8 + 2) + 1"}', name='my_calculator'), type='function')]
results = []
for tool_call in tool_calls:
function_name = tool_call.function.name
function_args = tool_call.function.arguments
if function_name == "my_calculator":
result = my_calculator(**json.loads(function_args))
results.append(result)
messages.append({"role": "assistant", "tool_calls": tool_calls})
for tool_call, result in zip(tool_calls, results):
messages.append(
{
"role": "tool",
"tool_call_id": tool_call.id,
"name": tool_call.function.name,
"content": result,
}
)
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
)
print(response.choices[0].message.content)
# Here are the results for the equations that involve numbers:
# 1. \( 6 + 2 \times 3 = 12 \)
# 3. \( 19 - (8 + 2) + 1 = 10 \)
# For the other equations, you need to substitute the variables with specific values to compute the results.
```
</details>
#### Instruction following
Mistral-Small-3.2-24B-Instruct-2506 will follow your instructions down to the last letter !
<details>
<summary>Python snippet</summary>
```python
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.15
MAX_TOK = 131072
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Write me a sentence where every word starts with the next letter in the alphabet - start with 'a' and end with 'z'.",
},
]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=TEMP,
max_tokens=MAX_TOK,
)
assistant_message = response.choices[0].message.content
print(assistant_message)
# Here's a sentence where each word starts with the next letter of the alphabet, starting from 'a' and ending with 'z':
# "Always brave cats dance elegantly, fluffy giraffes happily ignore jungle kites, lovingly munching nuts, observing playful quails racing swiftly, tiny unicorns vaulting while xylophones yodel zealously."
# This sentence follows the sequence from A to Z without skipping any letters.
```
</details>
### Transformers
You can also use Mistral-Small-3.2-24B-Instruct-2506 with `Transformers` !
To make the best use of our model with `Transformers` make sure to have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.6.2` to use our tokenizer.
```bash
pip install mistral-common --upgrade
```
Then load our tokenizer along with the model and generate:
<details>
<summary>Python snippet</summary>
```python
from datetime import datetime, timedelta
import torch
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from huggingface_hub import hf_hub_download
from transformers import Mistral3ForConditionalGeneration
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
model_id = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")
tokenizer = MistralTokenizer.from_hf_hub(model_id)
model = Mistral3ForConditionalGeneration.from_pretrained(
model_id, torch_dtype=torch.bfloat16
)
image_url = "https://static.wikia.nocookie.net/essentialsdocs/images/7/70/Battle.png/revision/latest?cb=20220523172438"
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": [
{
"type": "text",
"text": "What action do you think I should take in this situation? List all the possible actions and explain why you think they are good or bad.",
},
{"type": "image_url", "image_url": {"url": image_url}},
],
},
]
tokenized = tokenizer.encode_chat_completion(ChatCompletionRequest(messages=messages))
input_ids = torch.tensor([tokenized.tokens])
attention_mask = torch.ones_like(input_ids)
pixel_values = torch.tensor(tokenized.images[0], dtype=torch.bfloat16).unsqueeze(0)
image_sizes = torch.tensor([pixel_values.shape[-2:]])
output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
pixel_values=pixel_values,
image_sizes=image_sizes,
max_new_tokens=1000,
)[0]
decoded_output = tokenizer.decode(output[len(tokenized.tokens) :])
print(decoded_output)
# In this situation, you are playing a Pokรฉmon game where your Pikachu (Level 42) is facing a wild Pidgey (Level 17). Here are the possible actions you can take and an analysis of each:
# 1. **FIGHT**:
# - **Pros**: Pikachu is significantly higher level than the wild Pidgey, which suggests that it should be able to defeat Pidgey easily. This could be a good opportunity to gain experience points and possibly items or money.
# - **Cons**: There is always a small risk of Pikachu fainting, especially if Pidgey has a powerful move or a status effect that could hinder Pikachu. However, given the large level difference, this risk is minimal.
# 2. **BAG**:
# - **Pros**: You might have items in your bag that could help in this battle, such as Potions, Pokรฉ Balls, or Berries. Using an item could help you capture Pidgey or heal Pikachu if needed.
# - **Cons**: Using items might not be necessary given the level difference. It could be more efficient to just fight and defeat Pidgey quickly.
# 3. **POKรMON**:
# - **Pros**: You might have another Pokรฉmon in your party that is better suited for this battle or that you want to gain experience. Switching Pokรฉmon could also be strategic if you want to train a lower-level Pokรฉmon.
# - **Cons**: Switching Pokรฉmon might not be necessary since Pikachu is at a significant advantage. It could also waste time and potentially give Pidgey a turn to attack.
# 4. **RUN**:
# - **Pros**: Running away could be a quick way to avoid the battle altogether. This might be useful if you are trying to conserve resources or if you are in a hurry to get to another location.
# - **Cons**: Running away means you miss out on the experience points, items, or money that you could gain from defeating Pidgey. It also might not be the most efficient use of your time if you are trying to train your Pokรฉmon.
# ### Recommendation:
# Given the significant level advantage, the best action to take is likely **FIGHT**. This will allow you to quickly defeat Pidgey and gain experience points for Pikachu. If you are concerned about Pikachu's health, you could use the **BAG** to heal Pikachu before or during the battle. Running away or switching Pokรฉmon does not seem necessary in this situation.
```
</details>
|
stablediffusionapi/realcartoonanime-v2
|
stablediffusionapi
| 2025-06-20T19:41:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T19:38:05Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/e3af7aee-37af-4665-af60-dd0800d2c3f8/width=1024/1323129.jpeg
---
# RealCartoon-Anime - V2 API Inference
<Gallery />
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "realcartoonanime-v2"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/realcartoonanime-v2)
Model link: [View model](https://modelslab.com/models/realcartoonanime-v2)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "realcartoonanime-v2",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
88RedPanda88/nsp-bert-final-v2
|
88RedPanda88
| 2025-06-20T19:38:53Z | 262 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"next-sentence-prediction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T16:35:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
88RedPanda88/nsp-bert-final
|
88RedPanda88
| 2025-06-20T19:36:29Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"next-sentence-prediction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T16:03:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JonLoRA/deynairaLoRAv3
|
JonLoRA
| 2025-06-20T19:35:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T10:34:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: photo of a girl
---
# Deynairalorav3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `photo of a girl` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "photo of a girl",
"lora_weights": "https://huggingface.co/JonLoRA/deynairaLoRAv3/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('JonLoRA/deynairaLoRAv3', weight_name='lora.safetensors')
image = pipeline('photo of a girl').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0002
- LoRA rank: 64
## Contribute your own examples
You can use the [community tab](https://huggingface.co/JonLoRA/deynairaLoRAv3/discussions) to add images that show off what youโve made with this LoRA.
|
stablediffusionapi/disneypixarcartoontypeb-v10
|
stablediffusionapi
| 2025-06-20T19:28:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T19:25:37Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/af658ebe-1096-4f5f-9d78-55996d287ab4/width=1024/902862.jpeg
---
# Disney Pixar Cartoon type B - v1.0 API Inference
<Gallery />
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "disneypixarcartoontypeb-v10"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/disneypixarcartoontypeb-v10)
Model link: [View model](https://modelslab.com/models/disneypixarcartoontypeb-v10)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "disneypixarcartoontypeb-v10",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
gabriellarson/NatureLM-8x7B-Inst-GGUF
|
gabriellarson
| 2025-06-20T19:24:11Z | 17 | 0 | null |
[
"gguf",
"biology",
"chemistry",
"en",
"arxiv:2502.07527",
"base_model:microsoft/NatureLM-8x7B-Inst",
"base_model:quantized:microsoft/NatureLM-8x7B-Inst",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T05:03:07Z |
---
license: mit
language:
- en
tags:
- biology
- chemistry
base_model:
- microsoft/NatureLM-8x7B-Inst
---
**these quants are currently not working**
# Model details
## Model description
Nature Language Model (NatureLM) is a sequence-based science foundation model designed for scientific discovery. Pre-trained with data from multiple scientific domains, NatureLM offers a unified, versatile model that enables various applications including generating and optimizing small molecules, proteins, RNA, and materials using text instructions; cross-domain generation/design such as protein-to-molecule and protein-to-RNA generation; and top performance across different domains.
- Developed by: SFM team โ Microsoft Research AI for Science
- Model type: Sequence-based science foundation model
- Language(s): English
- License: MIT License
- Finetuned from model: one version of the model is finetuned from Mixtral-8x7B-v0.1
# Model sources
## Repository:
We provide two repositories for 8x7B models, including both base versions and instruction-finetuned versions.
- https://huggingface.co/microsoft/NatureLM-8x7B
- https://huggingface.co/microsoft/NatureLM-8x7B-Inst
## Paper:
[[2502.07527] Nature Language Model: Deciphering the Language of Nature for Scientific Discovery](https://arxiv.org/abs/2502.07527)
# Uses
## Direct intended uses
NatureLM is designed to facilitate scientific discovery across multiple domains, including the generation and optimization of small molecules, proteins, and RNA. It offers two unique features: (1) Text-driven capability โ users can prompt NatureLM using natural language instructions; and (2) Cross-domain functionality โ NatureLM can perform complex cross-domain tasks, such as generating compounds for specific targets or designing protein binders for small molecules.
Downstream uses:
Science researchers can finetune NatureLM for their own tasks, especially cross-domain generation tasks.
## Out-of-scope uses
### Use in Real-World Applications Beyond Proof of Concept
NatureLM currently not ready to use in clinical applications, without rigorous external validation and additional specialized development. It is being released for research purposes only.
### Use outside of the science domain
NatureLM is not a general-purpose language model and is not designed or optimized to perform general tasks like text summarization or Q&A.
### Use by Non-Experts
NatureLM outputs scientific entities (e.g., molecules, proteins, materials) and requires expert interpretation, validation, and analysis. It is not intended for use by non-experts or individuals without the necessary domain knowledge to evaluate and verify its outputs. Outputs, such as small molecule inhibitors for target proteins, require rigorous validation to ensure safety and efficacy. Misuse by non-experts may lead to the design of inactive or suboptimal compounds, resulting in wasted resources and potentially delaying critical research or development efforts.
### CBRN Applications (Chemical, Biological, Radiological, and Nuclear)
NatureLM is not intended for the design, development, or optimization of agents or materials for harmful purposes, including but not limited to weapons of mass destruction, bioterrorism, or other malicious uses.
### Unethical or Harmful Applications
The use of NatureLM must align with ethical research practices. It is not intended for tasks that could cause harm to individuals, communities, or the environment.
## Risks and limitations
NatureLM may not always generate compounds or proteins precisely aligned with user instructions. Users are advised to apply their own adaptive filters before proceeding. Users are responsible for verification of model outputs and decision-making.
NatureLM was designed and tested using the English language. Performance in other languages may vary and should be assessed by someone who is both an expert in the expected outputs and a native speaker of that language.
NatureLM inherits any biases, errors, or omissions characteristic of its training data, which may be amplified by any AI-generated interpretations. For example, inorganic data in our training corpus is relatively limited, comprising only 0.02 billion tokens out of a total of 143 billion tokens. As a result, the model's performance on inorganic-related tasks is constrained. In contrast, protein-related data dominates the corpus, with 65.3 billion tokens, accounting for the majority of the training data.
There has not been a systematic effort to ensure that systems using NatureLM are protected from security vulnerabilities such as indirect prompt injection attacks. Any systems using it should take proactive measures to harden their systems as appropriate.
# Training details
## Training data
The pre-training data includes text, small molecules (SMILES notations), proteins (FASTA format), materials (chemical composition and space group number), DNA (FASTA format), and RNA (FASTA format). The dataset contains single-domain sequences and cross-domain sequences.
## Training procedure
Preprocessing
The training procedure involves two stages: Stage 1 focuses on training newly introduced tokens while freezing existing model parameters. Stage 2 involves joint optimization of both new and existing parameters to enhance overall performance.
## Training hyperparameters
- Learning Rate: 2ร10<sup>โ4</sup>
- Batch Size (Sentences): 8x7B model: 1536
- Context Length (Tokens): 8192
- GPU Number (H100): 8x7B model: 256
## Speeds, sizes, times
Model sized listed above;
# Evaluation
## Testing data, factors, and metrics
Testing data
The testing data includes 22 types of scientific tasks such as molecular generation, protein generation, material generation, RNA generation, and prediction tasks across small molecules, proteins, DNA.
## Factors
1. Cross-Domain Adaptability: The ability of NatureLM to perform tasks that span multiple scientific domains (e.g., protein-to-compound generation, RNA design for CRISPR targets, or material design with specific properties).
2. Accuracy of Outputs: For tasks like retrosynthesis, assess the correctness of the outputs compared to ground truth or experimentally validated data.
3. Diversity and Novelty of Outputs: Evaluate whether the generated outputs are novel (e.g., new molecules or materials not present in databases or training data).
4. Scalability Across Model Sizes: Assess the performance improvements as the model size increases (1B, 8B, and 46.7B parameters).
## Metrics
Accuracy, AUROC, and independently trained AI-based predictors are utilized for various tasks.
Evaluation results
1. We successfully demonstrated that NatureLM is capable of performing tasks such as target-to-compound, target-to-RNA, and DNA-to-RNA generation.
2. NatureLM achieves state-of-the-art results on retrosynthesis benchmarks and the MatBench benchmark for materials.
3. NatureLM can generate novel proteins, small molecules, and materials.
# Summary
Nature Language Model (NatureLM) is a groundbreaking sequence-based science foundation model designed to unify multiple scientific domains, including small molecules, materials, proteins, DNA and RNA. This innovative model leverages the "language of nature" to enable scientific discovery through text-based instructions. NatureLM represents a significant advancement in the field of artificial intelligence, providing researchers with a powerful tool to drive innovation and accelerate scientific breakthroughs. By integrating knowledge across multiple scientific domains, NatureLM paves the way for new discoveries and advancements in various fields of science. We hope to release it to benefit more users and
contribute to the development of AI for Science research.
# Model card contact
This work was conducted in Microsoft Research AI for Science. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at:
- Yingce Xia, [email protected]
- Chen Hu, [email protected]
- Yawen Yang, [email protected]
If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
|
Mezzo-fun-Viral-video-Link/wATCH.Mezzo.fun.viral.video.Leaks.Official
|
Mezzo-fun-Viral-video-Link
| 2025-06-20T19:22:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:21:47Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter2
|
AmberYifan
| 2025-06-20T19:21:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1",
"base_model:finetune:AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T18:50:46Z |
---
base_model: AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1
library_name: transformers
model_name: Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter2
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1](https://huggingface.co/AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/yrmpk3mo)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kamal-kaur-mms-viral-video-link/New.clip.18.kamal.kaur.mms.viral.video
|
kamal-kaur-mms-viral-video-link
| 2025-06-20T19:19:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:19:31Z |
<animated-image data-catalyst=""><a href="https://wtach.club/leakvideo/?h" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Zeinab321/Mistral
|
Zeinab321
| 2025-06-20T19:17:13Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Mistral-Nemo-Base-2407-bnb-4bit",
"base_model:quantized:unsloth/Mistral-Nemo-Base-2407-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-16T21:16:33Z |
---
base_model:
- unsloth/Mistral-Nemo-Base-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zeinab321
- **License:** apache-2.0
- **Finetuned from model :** Zeinab321/Mistral
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BootesVoid/cmc54zdg602mjbfif6u3mk3jn_cmc55ek9i02ntbfif6f4rdj91
|
BootesVoid
| 2025-06-20T19:16:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T19:16:15Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LW
---
# Cmc54Zdg602Mjbfif6U3Mk3Jn_Cmc55Ek9I02Ntbfif6F4Rdj91
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LW` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LW",
"lora_weights": "https://huggingface.co/BootesVoid/cmc54zdg602mjbfif6u3mk3jn_cmc55ek9i02ntbfif6f4rdj91/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc54zdg602mjbfif6u3mk3jn_cmc55ek9i02ntbfif6f4rdj91', weight_name='lora.safetensors')
image = pipeline('LW').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc54zdg602mjbfif6u3mk3jn_cmc55ek9i02ntbfif6f4rdj91/discussions) to add images that show off what youโve made with this LoRA.
|
andrewsamce/ppo-LunarLander-v2
|
andrewsamce
| 2025-06-20T19:14:32Z | 21 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-06T19:01:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.76 +/- 15.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
|
moxin-org/Moxin-7B-VLM
|
moxin-org
| 2025-06-20T19:13:34Z | 59 | 1 | null |
[
"arxiv:2412.06845",
"license:mit",
"region:us"
] | null | 2025-06-09T23:40:00Z |
---
license: mit
---
<h1 align="center"> Moxin 7B VLM </h1>
<p align="center"> <a href="https://github.com/moxin-org/Moxin-VLM">Home Page</a>    |    <a href="https://arxiv.org/abs/2412.06845">Technical Report</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-LLM">Base Model</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-Chat">Chat Model</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-Instruct">Instruct Model</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-Reasoning">Reasoning Model</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-VLM">VLM Model</a> </p>
---
## Installation
```bash
git clone https://github.com/moxin-org/Moxin-VLM.git
cd Moxin-VLM
conda create -n moxin-vlm python=3.10 -y
conda activate moxin-vlm
pip install torch==2.4.1 torchvision==0.19.1
pip install transformers==4.46.0 peft==0.15.2
pip install -e .
# Install Flash Attention 2
# =>> If you run into difficulty, try `pip cache remove flash_attn` first
pip install flash-attn==2.6.3 --no-build-isolation
```
## Pretrained Models
Please find our Pretrained Models on our huggingface page: [moxin-org/Moxin-7B-VLM](https://huggingface.co/moxin-org/Moxin-7B-VLM).
We've also provided a hf_convert version [Moxin-7B-VLM-hf](https://huggingface.co/bobchenyx/Moxin-7B-VLM-hf) based on [openvla](https://github.com/openvla/openvla).
Please refer to the attached scripts for downloading and running our model locally.
```bash
python scripts/snapshot_download.py
```
## Usage
For a complete terminal-based CLI for interacting with our VLMs.
```bash
python scripts/generate.py --model_path moxin-org/Moxin-7B-VLM
```
For a faster loading, inference and demo.
```bash
python scripts/fast_inference.py
```
---
## Acknowledgments
This project is based on [Prismatic VLMs](https://github.com/TRI-ML/prismatic-vlms) by [TRI-ML](https://github.com/TRI-ML).
Special thanks to the original contributors for their excellent work.
## Citation
If you find our code or models useful in your work, please cite [our paper](https://arxiv.org/abs/2412.06845v5):
```bibtex
@article{zhao2024fully,
title={Fully Open Source Moxin-7B Technical Report},
author={Zhao, Pu and Shen, Xuan and Kong, Zhenglun and Shen, Yixin and Chang, Sung-En and Rupprecht, Timothy and Lu, Lei and Nan, Enfu and Yang, Changdi and He, Yumei and others},
journal={arXiv preprint arXiv:2412.06845},
year={2024}
}
|
wandb/WeaveFluencyScorerV1
|
wandb
| 2025-06-20T19:12:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T19:12:27Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: fluency-scorer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fluency-scorer
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3830
- F1: 0.8183
- Accuracy: 0.8212
- Precision: 0.8171
- Recall: 0.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|:---------:|:------:|
| No log | 0 | 0 | 0.7214 | 0.5368 | 0.5168 | 0.6201 | 0.5168 |
| 0.5801 | 1.0 | 6158 | 0.4019 | 0.8069 | 0.8092 | 0.8056 | 0.8092 |
| 0.4354 | 2.0 | 12316 | 0.3835 | 0.8176 | 0.8212 | 0.8165 | 0.8212 |
| 0.4089 | 3.0 | 18474 | 0.3830 | 0.8183 | 0.8212 | 0.8171 | 0.8212 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.21.0
|
wandb/WeaveContextRelevanceScorerV1
|
wandb
| 2025-06-20T19:11:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:tasksource/deberta-base-long-nli",
"base_model:finetune:tasksource/deberta-base-long-nli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-20T19:11:36Z |
---
library_name: transformers
license: apache-2.0
base_model: tasksource/deberta-base-long-nli
tags:
- generated_from_trainer
model-index:
- name: deberta-base-long-nli-relevance-token-clf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-long-nli-relevance-token-clf
This model is a fine-tuned version of [tasksource/deberta-base-long-nli](https://huggingface.co/tasksource/deberta-base-long-nli) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 2024
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:---:|:--------:|:---------:|:------:|
| No log | 0 | 0 | 0.2041 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
zahraase1im/distilbert-rotten-tomatoes
|
zahraase1im
| 2025-06-20T19:09:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T19:04:05Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
buidgduy/model_qwen2_1.5B_full_1
|
buidgduy
| 2025-06-20T19:08:09Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T19:07:39Z |
---
base_model: unsloth/Qwen2-1.5b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** buidgduy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-1.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gutimazue/beto-prostata-bs16
|
gutimazue
| 2025-06-20T19:06:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-20T19:06:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Anuj5504/youtube-sentiment-v2
|
Anuj5504
| 2025-06-20T19:06:11Z | 0 | 0 | null |
[
"safetensors",
"distilbert",
"emotion",
"youtube",
"text-classification",
"region:us"
] |
text-classification
| 2025-06-20T19:00:26Z |
---
pipeline_tag: text-classification
tags:
- distilbert
- emotion
- youtube
- safetensors
---
# YouTube Sentiment Classifier
This is a fine-tuned DistilBERT model for emotion classification of YouTube comments...
|
New-videos-Jaipur-Couple-18-Viral-video/FULL.VIDEO.Jaipur.Couple.Viral.Video.Tutorial.Official
|
New-videos-Jaipur-Couple-18-Viral-video
| 2025-06-20T19:06:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:05:15Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Ankz123/my-lora-model
|
Ankz123
| 2025-06-20T19:03:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T19:03:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AGofficial/AgGPT-8.9
|
AGofficial
| 2025-06-20T19:03:25Z | 0 | 1 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2025-06-20T19:00:00Z |
---
license: mit
language:
- en
---
# AgGPT-8.9
Utilizing the TinyBrain-2 model, we have developed JavaScript and Python implementations of a highly efficient language model that closely mirrors the capabilities of AgGPT-9, while maintaining a significantly reduced size.
|
ArunP3799/qwen3b_baseline_math_step_8
|
ArunP3799
| 2025-06-20T19:01:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T18:59:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
magnifi/parser_user_v45a_epoch_6_lr_0.0018
|
magnifi
| 2025-06-20T19:00:43Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T18:54:27Z |
---
license: apache-2.0
---
|
mradermacher/Deep-Think-32B-GGUF
|
mradermacher
| 2025-06-20T18:55:28Z | 2 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"zh",
"base_model:cloudyu/Deep-Think-32B",
"base_model:quantized:cloudyu/Deep-Think-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T07:56:11Z |
---
base_model: cloudyu/Deep-Think-32B
language:
- en
- zh
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cloudyu/Deep-Think-32B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Deep-Think-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Think-32B-GGUF/resolve/main/Deep-Think-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ejja87/Model1
|
Ejja87
| 2025-06-20T18:54:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T18:54:13Z |
---
license: apache-2.0
---
|
pj-mathematician/JobSkillBGE-large-en-v1.5-v2
|
pj-mathematician
| 2025-06-20T18:51:24Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:114699",
"loss:CachedGISTEmbedLoss",
"arxiv:1908.10084",
"base_model:BAAI/bge-large-en-v1.5",
"base_model:finetune:BAAI/bge-large-en-v1.5",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T18:48:54Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:114699
- loss:CachedGISTEmbedLoss
base_model: BAAI/bge-large-en-v1.5
widget:
- source_sentence: 'Bus drivers, including those operating in various sectors like
public transit, intercity, private, or school services, need strong driving skills,
knowledge of traffic laws, and the ability to operate safely in diverse conditions.
Additionally, effective communication skills and the ability to handle passenger
inquiries and emergencies are crucial.
[''bus driver'', ''intercity bus driver'', ''private bus operator'', ''transit
bus driver'', ''public service vehicle operator'', ''passenger driver'', ''international
bus driver'', ''public bus operator'', ''touristic bus driver'', ''coach driver'',
''private coach driver'', ''public bus driver'', ''bus operator'', ''driver of
bus'', ''bus driving operator'', ''schoolbus driver'']'
sentences:
- 'The skill of determining shreds sizes percentage in cigarettes is primarily required
by tobacco processing technicians and quality control specialists in the cigarette
manufacturing industry, who ensure that the tobacco shreds meet specific size
and quality standards for consistent product performance.
[''determine shreds sizes percentage in cigarettes'', ''determine shreds sizes
percentage in cigarettes'', ''determine the shreds sizes percentage of cigarettes'',
''determine shreds size percentages in cigarettes'', ''agree shreds sizes percentage
in cigarettes'', ''determine the shreds sizes percentage in cigarettes'', ''confirm
shreds sizes percentage in cigarettes'', ''sort shreds sizes percentage in cigarettes'']'
- 'Job roles such as curriculum developers, educational consultants, and instructional
designers require skills like analyzing, evaluating, and scrutinizing curriculums
to improve educational outcomes. For legislative programmes, roles including policy
analysts, legislative aides, and compliance officers use skills to test, evaluate,
and scrutinize legislative processes to ensure effective and efficient policy
implementation.
[''analyse curriculum'', ''test legislative programmes'', ''evaluate legislative
programmes'', ''evaluate curriculum'', ''test curriculum'', ''investigate curriculum'',
''scrutinise curriculum'', ''analyze curriculum'', ''scrutinise legislative processes'',
''investigate legislative programmes'']'
- 'Job roles such as customer service representatives, flight attendants, and hotel
concierges require a strong focus on passengers or customers, ensuring their needs
and comfort are prioritized to provide excellent service and support.
[''focus on passengers'', ''prioritise passengers'', ''ensure passenger prioritisation'',
''make passengers a priority'', ''maintain a focus on passengers'', ''ensure passengers
are the priority focus'', ''ensure passengers are prioritised'', ''attend to passengers'',
''ensure a focus on passengers'']'
- source_sentence: 'A medical laboratory assistant, or any of its synonyms such as
a biomedical laboratory assistant, requires strong attention to detail, proficiency
in using laboratory equipment, and a foundational understanding of medical science.
Additionally, skills in sample handling, data recording, and basic research methodologies
are crucial for roles like a clinical research assistant or an assistant in medical
laboratory.
[''medical laboratory assistant'', ''medical laboratory research assistant'',
''biomedical laboratory assistant'', ''clinical research assistant'', ''assistant
in medical laboratory'', ''biomedical laboratory research assistant'', ''assistant
clinical researcher'', ''medical lab assistant'', ''assistant in biomedical laboratory'']'
sentences:
- 'Job roles such as automotive mechanics, fleet managers, and vehicle technicians
require skills to ensure vehicle operability and regular maintenance, which involves
diagnosing and repairing issues to keep vehicles roadworthy and operational.
[''ensure vehicle operability'', ''keep vehicle roadworthy'', ''keep vehicle operational'',
''ensure operability of the vehicle'', ''ensure vehicle remains operational'',
''ensure maintenance of vehicle'', ''ensure regular vehicle maintenance'', ''ensure
operation of the vehicle'', ''ensure operability'']'
- 'The skill of classroom management is primarily required by teachers and educators
at all levels, from kindergarten to higher education, to ensure a productive,
safe, and organized learning environment. It involves maintaining discipline,
organizing space and materials, and facilitating effective instruction, roles
that are crucial for teaching assistants and substitute teachers as well.
[''perform classroom management'', ''performing classroom management'', ''conduct
classroom management'', ''practice classroom management'', ''carry out classroom
management'', ''implement classroom management'', ''performs classroom management'']'
- 'Job roles requiring expertise in stem cells, including embryonic and adult stem
cells, typically include stem cell researchers, regenerative medicine scientists,
and biomedical engineers who focus on the development and application of stem
cell technologies for therapeutic purposes. Additionally, clinical researchers
and medical practitioners in specialized fields such as oncology and hematology
may utilize knowledge of stem cells for treatment and research purposes.
[''stem cells'', ''undifferentiated biological cells'', ''embryonic stem cells'',
''development of stem cells'', ''stem cell'', ''adult stem cells'', ''stem cells'']'
- source_sentence: 'For roles such as ''physiotherapist'', ''neuromusculoskeletal
physiotherapist'', ''osteopath'', and ''chiropractor'', the skills needed include
a deep understanding of human anatomy and physiology, strong diagnostic skills,
and the ability to apply manual therapy techniques to treat musculoskeletal issues.
Additionally, effective communication skills are crucial for explaining treatments
and exercises to patients, while adaptability and problem-solving skills are essential
for tailoring treatments to individual patient needs.
[''physiotherapist'', ''neuromusculoskeletal physiotherapist'', ''osteopath'',
''eurythmy therapist'', ''respiratory therapist'', ''remedial physiotherapist'',
''physiotherapist manager'', ''occupational therapist'', ''neurological physiotherapist'',
''occupational physiotherapist'', ''bobath physiotherapist'', ''neuromuscular
physiotherapist'', ''manipulative physiotherapist'', ''hydrotherapist'', ''rehabilitation
therapist'', ''masseuse'', ''health promotion worker'', ''cardiovascular physiotherapist'',
''respiratory physiotherapist'', ''chiropractor'', ''sports physiotherapist'',
''chiropractic therapist'', ''neurodevelopmental physiotherapist'', ''physical
therapist'', ''health and well-being therapist'', ''business physiotherapist'']'
sentences:
- 'Job roles that require skills in dealing with emergency care situations include
emergency medical technicians (EMTs), paramedics, and emergency room nurses or
doctors, all of whom must quickly and effectively manage critical health situations
to save lives.
[''deal with emergency care situations'', ''deal with emergency care situation'',
''handle emergency care situation'', ''apply knowledge in emergency care situations'',
''handle emergency care situations'']'
- 'Job roles such as fashion designers, stylist coordinators, and jewelry designers
require the skill to distinguish and evaluate accessories, their differences,
and applications, to ensure the right aesthetic and functional fit for their designs
or clients. This skill is crucial for creating cohesive looks and enhancing the
overall visual appeal in fashion and design industries.
[''distinguish accessories'', ''evaluate accessories and their differences'',
''evaluate accessories and their application'', ''differentiate accessories'',
''distinguish accessories and their application'', ''distinguish differences in
accessories'']'
- 'Job roles that require expertise in curriculum objectives include educational
consultants, curriculum developers, and instructional designers, who are tasked
with creating and refining educational content and learning goals to meet specific
educational standards and student needs. Teachers and headteachers also utilize
these skills to align their teaching methods and materials with the set educational
targets and aims.
[''curriculum objectives'', ''curriculum objective'', ''curriculum goals'', ''curriculum
targets'', ''curriculum aims'', ''curricula objectives'']'
- source_sentence: 'A mine surveyor, also known as a mining surveyor or mine planning
surveyor, requires expertise in geomatics and mining engineering to accurately
map and plan mine operations, ensuring safety and efficiency. They must also possess
strong analytical skills and the ability to use specialized software for creating
detailed mine plans and maintaining accurate records.
[''mine surveyor'', ''mining surveyor'', ''mine operations surveyor'', ''mine
plan maker'', ''mine records keeper'', ''mine surveyors'', ''planner of mining
operations'', ''mine planning surveyor'']'
sentences:
- 'Job roles such as data analysts, business analysts, and financial analysts require
the skill to present reports or prepare statistical reports, as they often need
to communicate complex data insights clearly and effectively to stakeholders.
[''present reports'', ''present a report'', ''submit presentation'', ''prepare
statistical reports'']'
- 'Job roles such as Food Safety Manager, Quality Assurance Specialist, and Public
Health Inspector require the skill of developing food safety programs to ensure
compliance with regulations and maintain high standards of food safety in various
settings including manufacturing, retail, and public health sectors.
[''develop food safety programmes'', ''creating food safety programmes'', ''develop
programmes for food safety'', ''food safety programmes creating'', ''food safety
programmes developing'', ''develop food safety programs'', ''food safety programme
developing'', ''food safety programme creating'', ''create food safety programmes'',
''create programmes for food safety'', ''developing food safety programmes'']'
- 'The skill of using a sander, whether it be a handheld, manual, automatic, or
drywall sander, is primarily required by construction workers, carpenters, and
drywall installers for tasks such as roughening and smoothing wall surfaces to
prepare them for painting or finishing.
[''use sander'', ''use handheld sander'', ''roughening of wall surfaces'', ''use
drywall sander'', ''sanding of wall surfaces'', ''using sander'', ''sander usage'',
''use manual sander'', ''drywall sanding'', ''use automatic sander'']'
- source_sentence: 'An insulation supervisor, regardless of the specific type of insulation
material or installation area, requires strong project management skills, knowledge
of building codes and safety regulations, and expertise in insulation techniques
to oversee the installation process effectively and ensure quality standards are
met.
[''insulation supervisor'', ''supervisor of installation of insulating materials'',
''supervisor of insulation materials installation'', ''supervisor of installation
of insulation'', ''solid wall insulation installation supervisor'', ''insulation
installers supervisor'', ''cavity wall insulation installation supervisor'', ''loft
insulation installation supervisor'']'
sentences:
- 'Job roles such as Food Safety Inspector, Public Health Officer, and Environmental
Health Specialist require the skill of taking action on food safety violations
to ensure compliance with health regulations and maintain public safety standards.
[''take action on food safety violations'', ''invoke action on food safety violations'',
''agree action on food safety violations'', ''pursue action on food safety violations'',
''determine action on food safety violations'']'
- 'Job roles that require skills in operating and supervising textile printing machines
include Textile Printer Operators, Printing Machine Technicians, and Textile Production
Specialists. These roles involve setting up, running, and maintaining printing
machinery to ensure high-quality textile printing.
[''tend textile printing machines'', ''activate and supervise printing machines
for textile material'', ''activate and supervise textile printing machines'',
''tend printing machines for textile'', ''tend printing machines for textile material'',
''care for textile printing machines'', ''operate printing machines for textile
material'', ''operate textile printing machines'']'
- 'The skill of installing insulation material is primarily required by job roles
such as insulation workers, HVAC technicians, and construction specialists, who
are responsible for improving energy efficiency and thermal comfort in buildings
by correctly fitting and fixing insulation materials in various structures.
[''install insulation material'', ''insulate structure'', ''fix insulation'',
''insulation material installation'', ''installation of insulation material'',
''fitting insulation'', ''insulating structure'', ''installing insulation material'',
''fixing insulation'', ''fit insulation'']'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@20
- cosine_accuracy@50
- cosine_accuracy@100
- cosine_accuracy@150
- cosine_accuracy@200
- cosine_precision@1
- cosine_precision@20
- cosine_precision@50
- cosine_precision@100
- cosine_precision@150
- cosine_precision@200
- cosine_recall@1
- cosine_recall@20
- cosine_recall@50
- cosine_recall@100
- cosine_recall@150
- cosine_recall@200
- cosine_ndcg@1
- cosine_ndcg@20
- cosine_ndcg@50
- cosine_ndcg@100
- cosine_ndcg@150
- cosine_ndcg@200
- cosine_mrr@1
- cosine_mrr@20
- cosine_mrr@50
- cosine_mrr@100
- cosine_mrr@150
- cosine_mrr@200
- cosine_map@1
- cosine_map@20
- cosine_map@50
- cosine_map@100
- cosine_map@150
- cosine_map@200
- cosine_map@500
model-index:
- name: SentenceTransformer based on BAAI/bge-large-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full en
type: full_en
metrics:
- type: cosine_accuracy@1
value: 0.7203947368421053
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 1.0
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 1.0
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.7203947368421053
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.4916118421052631
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.3869736842105263
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.3046052631578947
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.2575
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.2250328947368421
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.010176836251443517
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.13155034702974863
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.2509109642167453
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.38660292276765434
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.48222911118709905
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.555593320773802
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.7203947368421053
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.5307474055663989
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.44447520594610196
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.434845358559754
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.47821684318798985
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.5217544441967603
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.7203947368421053
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8361781096010973
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8361781096010973
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8361781096010973
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8361781096010973
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8361781096010973
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.7203947368421053
name: Cosine Map@1
- type: cosine_map@20
value: 0.3349546102812917
name: Cosine Map@20
- type: cosine_map@50
value: 0.2333868330865278
name: Cosine Map@50
- type: cosine_map@100
value: 0.20565538628439212
name: Cosine Map@100
- type: cosine_map@150
value: 0.2210663674628728
name: Cosine Map@150
- type: cosine_map@200
value: 0.2387037675984284
name: Cosine Map@200
- type: cosine_map@500
value: 0.28801647424274696
name: Cosine Map@500
---
# Job-Skill matching fintuned BAAI/bge-large-en-v1.5 (v2)
Top performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task B. Use it for job title <-> skill set matching
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) <!-- at revision d4aa6901d3a41ba39fb536a557fa166f842b0e09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("pj-mathematician/JobSkillBGE-large-en-v1.5-v2")
# Run inference
sentences = [
"An insulation supervisor, regardless of the specific type of insulation material or installation area, requires strong project management skills, knowledge of building codes and safety regulations, and expertise in insulation techniques to oversee the installation process effectively and ensure quality standards are met.\n['insulation supervisor', 'supervisor of installation of insulating materials', 'supervisor of insulation materials installation', 'supervisor of installation of insulation', 'solid wall insulation installation supervisor', 'insulation installers supervisor', 'cavity wall insulation installation supervisor', 'loft insulation installation supervisor']",
"The skill of installing insulation material is primarily required by job roles such as insulation workers, HVAC technicians, and construction specialists, who are responsible for improving energy efficiency and thermal comfort in buildings by correctly fitting and fixing insulation materials in various structures.\n['install insulation material', 'insulate structure', 'fix insulation', 'insulation material installation', 'installation of insulation material', 'fitting insulation', 'insulating structure', 'installing insulation material', 'fixing insulation', 'fit insulation']",
"Job roles such as Food Safety Inspector, Public Health Officer, and Environmental Health Specialist require the skill of taking action on food safety violations to ensure compliance with health regulations and maintain public safety standards.\n['take action on food safety violations', 'invoke action on food safety violations', 'agree action on food safety violations', 'pursue action on food safety violations', 'determine action on food safety violations']",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `full_en`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:---------------------|:-----------|
| cosine_accuracy@1 | 0.7204 |
| cosine_accuracy@20 | 1.0 |
| cosine_accuracy@50 | 1.0 |
| cosine_accuracy@100 | 1.0 |
| cosine_accuracy@150 | 1.0 |
| cosine_accuracy@200 | 1.0 |
| cosine_precision@1 | 0.7204 |
| cosine_precision@20 | 0.4916 |
| cosine_precision@50 | 0.387 |
| cosine_precision@100 | 0.3046 |
| cosine_precision@150 | 0.2575 |
| cosine_precision@200 | 0.225 |
| cosine_recall@1 | 0.0102 |
| cosine_recall@20 | 0.1316 |
| cosine_recall@50 | 0.2509 |
| cosine_recall@100 | 0.3866 |
| cosine_recall@150 | 0.4822 |
| cosine_recall@200 | 0.5556 |
| cosine_ndcg@1 | 0.7204 |
| cosine_ndcg@20 | 0.5307 |
| cosine_ndcg@50 | 0.4445 |
| cosine_ndcg@100 | 0.4348 |
| cosine_ndcg@150 | 0.4782 |
| **cosine_ndcg@200** | **0.5218** |
| cosine_mrr@1 | 0.7204 |
| cosine_mrr@20 | 0.8362 |
| cosine_mrr@50 | 0.8362 |
| cosine_mrr@100 | 0.8362 |
| cosine_mrr@150 | 0.8362 |
| cosine_mrr@200 | 0.8362 |
| cosine_map@1 | 0.7204 |
| cosine_map@20 | 0.335 |
| cosine_map@50 | 0.2334 |
| cosine_map@100 | 0.2057 |
| cosine_map@150 | 0.2211 |
| cosine_map@200 | 0.2387 |
| cosine_map@500 | 0.288 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 114,699 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 78 tokens</li><li>mean: 144.94 tokens</li><li>max: 354 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 114.13 tokens</li><li>max: 274 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.<br>['technical director', 'technical and operations director', 'head of technical', 'director of technical arts', 'head of technical department', 'technical supervisor', 'technical manager']</code> | <code>Job roles that require promoting health and safety include occupational health and safety specialists, safety managers, and public health educators, all of whom work to ensure safe and healthy environments in workplaces and communities.<br>['promote health and safety', 'promote importance of health and safety', 'promoting health and safety', 'advertise health and safety']</code> |
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.<br>['technical director', 'technical and operations director', 'head of technical', 'director of technical arts', 'head of technical department', 'technical supervisor', 'technical manager']</code> | <code>Job roles that require organizing rehearsals include directors, choreographers, and conductors in theater, dance, and music ensembles, who must efficiently plan and schedule practice sessions to prepare performers for a successful final performance.<br>['organise rehearsals', 'organise rehearsal', 'organize rehearsals', 'plan rehearsals', 'arrange rehearsals', 'organising rehearsals', 'schedule rehearsals']</code> |
| <code>A technical director or any of its synonyms requires a strong blend of technical expertise and leadership skills, including the ability to oversee technical operations, manage teams, and ensure the successful execution of technical projects while maintaining operational efficiency and innovation.<br>['technical director', 'technical and operations director', 'head of technical', 'director of technical arts', 'head of technical department', 'technical supervisor', 'technical manager']</code> | <code>Job roles such as Health and Safety Managers, Environmental Health Officers, and Risk Management Specialists often require the skill of negotiating health and safety issues with third parties to ensure compliance and protection standards are met across different organizations and sites.<br>['negotiate health and safety issues with third parties', 'agree with third parties on health and safety', 'negotiate issues on health and safety with third parties', 'negotiate with third parties on health and safety issues', 'negotiate health and safety matters with third parties']</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'mini_batch_size': 32, 'margin_strategy': 'absolute', 'margin': 0.0}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `ddp_find_unused_parameters`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | full_en_cosine_ndcg@200 |
|:------:|:----:|:-------------:|:-----------------------:|
| -1 | -1 | - | 0.4795 |
| 0.0022 | 1 | 10.6462 | - |
| 0.2232 | 100 | 4.5115 | - |
| 0.4464 | 200 | 2.9237 | 0.5255 |
| 0.6696 | 300 | 2.5327 | - |
| 0.8929 | 400 | 2.3451 | 0.5305 |
| 1.1161 | 500 | 1.9882 | - |
| 1.3393 | 600 | 1.7738 | 0.5240 |
| 1.5625 | 700 | 1.7365 | - |
| 1.7857 | 800 | 1.6932 | 0.5251 |
| 2.0089 | 900 | 1.6184 | - |
| 2.2321 | 1000 | 1.285 | 0.5254 |
| 2.4554 | 1100 | 1.2651 | - |
| 2.6786 | 1200 | 1.2739 | 0.5238 |
| 2.9018 | 1300 | 1.2625 | - |
| 3.125 | 1400 | 1.0726 | 0.5251 |
| 3.3482 | 1500 | 0.9606 | - |
| 3.5714 | 1600 | 0.9594 | 0.5214 |
| 3.7946 | 1700 | 0.954 | - |
| 4.0179 | 1800 | 0.9264 | 0.5239 |
| 4.2411 | 1900 | 0.7486 | - |
| 4.4643 | 2000 | 0.7424 | 0.5218 |
| 4.6875 | 2100 | 0.7127 | - |
| 4.9107 | 2200 | 0.7129 | 0.5218 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
LazarM05/Llama_Philosopher-Merged
|
LazarM05
| 2025-06-20T18:50:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T18:46:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmbq0a5fr00smh4x50oaoaxxi_cmc53r5tu02iibfif28r3c9ib
|
BootesVoid
| 2025-06-20T18:44:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T18:44:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MIASTARR
---
# Cmbq0A5Fr00Smh4X50Oaoaxxi_Cmc53R5Tu02Iibfif28R3C9Ib
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MIASTARR` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MIASTARR",
"lora_weights": "https://huggingface.co/BootesVoid/cmbq0a5fr00smh4x50oaoaxxi_cmc53r5tu02iibfif28r3c9ib/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbq0a5fr00smh4x50oaoaxxi_cmc53r5tu02iibfif28r3c9ib', weight_name='lora.safetensors')
image = pipeline('MIASTARR').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbq0a5fr00smh4x50oaoaxxi_cmc53r5tu02iibfif28r3c9ib/discussions) to add images that show off what youโve made with this LoRA.
|
Anuj5504/youtube-sentiment
|
Anuj5504
| 2025-06-20T18:41:59Z | 12 | 1 | null |
[
"safetensors",
"distilbert",
"region:us"
] | null | 2025-06-18T12:31:34Z |
# My Fine-Tuned Model
This model is fine-tuned for emotion classification with labels: joy, fear, anger, etc.
## Usage
```python
from transformers import pipeline
clf = pipeline("text-classification", model="your-username/your-model-name")
clf("I'm so happy today!")
|
MacrossRamen/expression
|
MacrossRamen
| 2025-06-20T18:38:51Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-06-20T18:33:21Z |
---
license: mit
---
Not mine. Stored here to ease of use at Replicate.
Owner: https://civitai.com/models/914282/expression-helper-20?modelVersionId=1023284
|
science-of-finetuning/gemma3_1B-kansas_abortion-L19-k100-lr1e-03-x32-local-shuffling-Crosscoder
|
science-of-finetuning
| 2025-06-20T18:38:42Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T18:38:24Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
pj-mathematician/JobGTE-multilingual-base-v2
|
pj-mathematician
| 2025-06-20T18:29:27Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:124788",
"loss:GISTEmbedLoss",
"arxiv:1908.10084",
"arxiv:2402.16829",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T18:26:40Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:124788
- loss:GISTEmbedLoss
base_model: Alibaba-NLP/gte-multilingual-base
widget:
- source_sentence: ๅ
ถไปๆบๆขฐใ่ฎพๅคๅๆๅฝข่ดง็ฉ็ง่ตๆๅกไปฃ่กจ
sentences:
- ๅ
ถไปๆบๆขฐๅ่ฎพๅค็ง่ตๆๅกๅทฅไฝไบบๅ
- ็ตๅญๅ็ตไฟก่ฎพๅคๅ้ถ้จไปถ็ฉๆต็ป็
- ๅทฅไธไธปๅจ
- source_sentence: ๅ
ฌไบค่ฝฆๅธๆบ
sentences:
- ่กจๆผ็ฏๅ
่ฎพ่ฎกๅธ
- ไน็ฏๅบๅฐๆฟๅฎ่ฃ
ๅทฅ
- ๅฝ้
ๅทดๅฃซๅธๆบ
- source_sentence: online communication manager
sentences:
- trades union official
- social media manager
- budget manager
- source_sentence: Projektmanagerin
sentences:
- Projektmanager/Projektmanagerin
- Category-Manager
- Infanterist
- source_sentence: Volksvertreter
sentences:
- Parlamentarier
- Oberbรผrgermeister
- Konsul
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@20
- cosine_accuracy@50
- cosine_accuracy@100
- cosine_accuracy@150
- cosine_accuracy@200
- cosine_precision@1
- cosine_precision@20
- cosine_precision@50
- cosine_precision@100
- cosine_precision@150
- cosine_precision@200
- cosine_recall@1
- cosine_recall@20
- cosine_recall@50
- cosine_recall@100
- cosine_recall@150
- cosine_recall@200
- cosine_ndcg@1
- cosine_ndcg@20
- cosine_ndcg@50
- cosine_ndcg@100
- cosine_ndcg@150
- cosine_ndcg@200
- cosine_mrr@1
- cosine_mrr@20
- cosine_mrr@50
- cosine_mrr@100
- cosine_mrr@150
- cosine_mrr@200
- cosine_map@1
- cosine_map@20
- cosine_map@50
- cosine_map@100
- cosine_map@150
- cosine_map@200
- cosine_map@500
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full en
type: full_en
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9904761904761905
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9904761904761905
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9904761904761905
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9904761904761905
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9904761904761905
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5147619047619048
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.31999999999999995
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.19047619047619047
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.1361904761904762
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.10542857142857143
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06854687410617222
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.5491240579458434
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.7553654907661455
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.8503209224897438
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8994749092946579
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9207884118691805
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6666666666666666
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6952098522285352
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7229572913271685
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7732532874348539
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7947334799125039
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.8038564389556094
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6666666666666666
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8182539682539683
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8182539682539683
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8182539682539683
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8182539682539683
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8182539682539683
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6666666666666666
name: Cosine Map@1
- type: cosine_map@20
value: 0.5566401101002375
name: Cosine Map@20
- type: cosine_map@50
value: 0.55344017265156
name: Cosine Map@50
- type: cosine_map@100
value: 0.5852249415484134
name: Cosine Map@100
- type: cosine_map@150
value: 0.5943042662925763
name: Cosine Map@150
- type: cosine_map@200
value: 0.5975837437975446
name: Cosine Map@200
- type: cosine_map@500
value: 0.6015742986218369
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full es
type: full_es
metrics:
- type: cosine_accuracy@1
value: 0.12432432432432433
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 1.0
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 1.0
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.12432432432432433
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.575945945945946
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.3923243243243244
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.2565945945945946
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.19282882882882882
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.1527837837837838
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.0036138931714884822
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.3852888120551914
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.5659574514538841
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.6898678629281393
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.7540209165372845
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.7858170054407897
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.12432432432432433
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6168674053047035
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.5913690595071309
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.62350509928888
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.6556716735369459
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.6716557949894583
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.12432432432432433
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5581081081081081
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.5581081081081081
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5581081081081081
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5581081081081081
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5581081081081081
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.12432432432432433
name: Cosine Map@1
- type: cosine_map@20
value: 0.48407152706202555
name: Cosine Map@20
- type: cosine_map@50
value: 0.43043374125481026
name: Cosine Map@50
- type: cosine_map@100
value: 0.43735327570764515
name: Cosine Map@100
- type: cosine_map@150
value: 0.45269435912524697
name: Cosine Map@150
- type: cosine_map@200
value: 0.45930097680668164
name: Cosine Map@200
- type: cosine_map@500
value: 0.47204219228541466
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full de
type: full_de
metrics:
- type: cosine_accuracy@1
value: 0.2955665024630542
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9753694581280788
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9852216748768473
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9901477832512315
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9901477832512315
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9901477832512315
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.2955665024630542
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5103448275862069
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.36935960591133016
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.23965517241379314
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.1807881773399015
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.1461576354679803
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.01108543831680986
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.3207974783481294
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.5042046446720455
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.6172666777909689
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.6848138831682932
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.7253195006357535
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.2955665024630542
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.537849085734973
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.5288037060639387
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.5551941695921919
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.5887611959940118
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.6092219717029682
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.2955665024630542
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5164773875147672
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.5167647438366063
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5168213657719442
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5168213657719442
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5168213657719442
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.2955665024630542
name: Cosine Map@1
- type: cosine_map@20
value: 0.398398563122481
name: Cosine Map@20
- type: cosine_map@50
value: 0.36032758502543594
name: Cosine Map@50
- type: cosine_map@100
value: 0.3632259128424842
name: Cosine Map@100
- type: cosine_map@150
value: 0.37822275477623696
name: Cosine Map@150
- type: cosine_map@200
value: 0.3863148456840816
name: Cosine Map@200
- type: cosine_map@500
value: 0.399227009561676
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full zh
type: full_zh
metrics:
- type: cosine_accuracy@1
value: 0.6796116504854369
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9805825242718447
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9902912621359223
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9902912621359223
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9902912621359223
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9902912621359223
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6796116504854369
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.488349514563107
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.29631067961165053
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.17883495145631062
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.12776699029126212
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.09990291262135924
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06931865009287731
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.5250914458143515
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.7082715439925011
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.8169166539243944
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8613232254521018
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.8898175710074696
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6796116504854369
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6680745295820606
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.6856578240865067
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7378907298421352
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7576651805692517
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7696718049970358
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6796116504854369
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8158576051779936
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.816279724215562
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.816279724215562
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.816279724215562
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.816279724215562
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6796116504854369
name: Cosine Map@1
- type: cosine_map@20
value: 0.522177160195635
name: Cosine Map@20
- type: cosine_map@50
value: 0.5082601209392789
name: Cosine Map@50
- type: cosine_map@100
value: 0.5371705298206915
name: Cosine Map@100
- type: cosine_map@150
value: 0.5454012672534121
name: Cosine Map@150
- type: cosine_map@200
value: 0.5494570875591636
name: Cosine Map@200
- type: cosine_map@500
value: 0.5542116087189223
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix es
type: mix_es
metrics:
- type: cosine_accuracy@1
value: 0.7087883515340614
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9552782111284451
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9802392095683827
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9901196047841914
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9937597503900156
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9958398335933437
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.7087883515340614
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.12158086323452937
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.05122204888195529
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.026125845033801356
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.017548968625411682
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.013239729589183572
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.2737959042171211
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.8990032934650719
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9459438377535101
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9650979372508233
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9731582596637198
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.979086496793205
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.7087883515340614
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.7814741332820433
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7944033394497885
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7986024294603647
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.8001222520801115
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.801183843730514
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.7087883515340614
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7804158804359833
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7812547046826683
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7813961782842836
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7814280971923943
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7814392363829243
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.7087883515340614
name: Cosine Map@1
- type: cosine_map@20
value: 0.7070596364024803
name: Cosine Map@20
- type: cosine_map@50
value: 0.7106867578203881
name: Cosine Map@50
- type: cosine_map@100
value: 0.7112928928384499
name: Cosine Map@100
- type: cosine_map@150
value: 0.7114314004578745
name: Cosine Map@150
- type: cosine_map@200
value: 0.711504950521157
name: Cosine Map@200
- type: cosine_map@500
value: 0.7116431478000537
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix de
type: mix_de
metrics:
- type: cosine_accuracy@1
value: 0.6484659386375455
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9323972958918356
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.968278731149246
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.984919396775871
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9885595423816953
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9937597503900156
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6484659386375455
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.12093083723348932
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.05140925637025482
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.02647425897035882
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.017892182353960822
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.013530941237649509
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.2435517420696828
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.87873114924597
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9319899462645173
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9596117178020455
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9718322066215982
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9799791991679667
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6484659386375455
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.7448150588358
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7595232400510039
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7656851368194345
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7681576326024331
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7696474672652458
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6484659386375455
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7323691045739125
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.733538875120878
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.733776247038599
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7338087409764548
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7338398642058079
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6484659386375455
name: Cosine Map@1
- type: cosine_map@20
value: 0.6646138211839377
name: Cosine Map@20
- type: cosine_map@50
value: 0.6683657128313888
name: Cosine Map@50
- type: cosine_map@100
value: 0.6692634410264182
name: Cosine Map@100
- type: cosine_map@150
value: 0.669518875077899
name: Cosine Map@150
- type: cosine_map@200
value: 0.6696171599377958
name: Cosine Map@200
- type: cosine_map@500
value: 0.6697127210085475
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix zh
type: mix_zh
metrics:
- type: cosine_accuracy@1
value: 0.7667014613778705
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9843423799582464
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9932150313152401
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9958246346555324
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9973903966597077
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9979123173277662
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.7667014613778705
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.13870041753653445
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.05810020876826725
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.029598121085595
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.01986778009742519
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.014945198329853866
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.25692041952480366
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.9156576200417536
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9582637439109255
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9765483646485734
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9833768267223383
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.986464857341684
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.7667014613778705
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.8002168358295473
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.8125113081884888
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.8167350090334409
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.8181122471507385
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.8186874070081017
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.7667014613778705
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8421752732824312
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8424954415974232
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8425358910333786
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8425483391786986
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8425515411459873
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.7667014613778705
name: Cosine Map@1
- type: cosine_map@20
value: 0.7007206423896271
name: Cosine Map@20
- type: cosine_map@50
value: 0.7046277360194696
name: Cosine Map@50
- type: cosine_map@100
value: 0.7053668771050886
name: Cosine Map@100
- type: cosine_map@150
value: 0.7055166914145262
name: Cosine Map@150
- type: cosine_map@200
value: 0.7055658329670217
name: Cosine Map@200
- type: cosine_map@500
value: 0.7056512281794008
name: Cosine Map@500
---
# Job - Job matching Alibaba-NLP/gte-multilingual-base (v2)
Top performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task A. Use it for multilingual job title matching
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 9fdd4ee8bba0e2808a34e0e739576f6740d2b225 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- full_en
- full_de
- full_es
- full_zh
- mix
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("pj-mathematician/JobGTE-multilingual-base-v2")
# Run inference
sentences = [
'Volksvertreter',
'Parlamentarier',
'Oberbรผrgermeister',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `full_en`, `full_es`, `full_de`, `full_zh`, `mix_es`, `mix_de` and `mix_zh`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | full_en | full_es | full_de | full_zh | mix_es | mix_de | mix_zh |
|:---------------------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6667 | 0.1243 | 0.2956 | 0.6796 | 0.7088 | 0.6485 | 0.7667 |
| cosine_accuracy@20 | 0.9905 | 1.0 | 0.9754 | 0.9806 | 0.9553 | 0.9324 | 0.9843 |
| cosine_accuracy@50 | 0.9905 | 1.0 | 0.9852 | 0.9903 | 0.9802 | 0.9683 | 0.9932 |
| cosine_accuracy@100 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9901 | 0.9849 | 0.9958 |
| cosine_accuracy@150 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9938 | 0.9886 | 0.9974 |
| cosine_accuracy@200 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9958 | 0.9938 | 0.9979 |
| cosine_precision@1 | 0.6667 | 0.1243 | 0.2956 | 0.6796 | 0.7088 | 0.6485 | 0.7667 |
| cosine_precision@20 | 0.5148 | 0.5759 | 0.5103 | 0.4883 | 0.1216 | 0.1209 | 0.1387 |
| cosine_precision@50 | 0.32 | 0.3923 | 0.3694 | 0.2963 | 0.0512 | 0.0514 | 0.0581 |
| cosine_precision@100 | 0.1905 | 0.2566 | 0.2397 | 0.1788 | 0.0261 | 0.0265 | 0.0296 |
| cosine_precision@150 | 0.1362 | 0.1928 | 0.1808 | 0.1278 | 0.0175 | 0.0179 | 0.0199 |
| cosine_precision@200 | 0.1054 | 0.1528 | 0.1462 | 0.0999 | 0.0132 | 0.0135 | 0.0149 |
| cosine_recall@1 | 0.0685 | 0.0036 | 0.0111 | 0.0693 | 0.2738 | 0.2436 | 0.2569 |
| cosine_recall@20 | 0.5491 | 0.3853 | 0.3208 | 0.5251 | 0.899 | 0.8787 | 0.9157 |
| cosine_recall@50 | 0.7554 | 0.566 | 0.5042 | 0.7083 | 0.9459 | 0.932 | 0.9583 |
| cosine_recall@100 | 0.8503 | 0.6899 | 0.6173 | 0.8169 | 0.9651 | 0.9596 | 0.9765 |
| cosine_recall@150 | 0.8995 | 0.754 | 0.6848 | 0.8613 | 0.9732 | 0.9718 | 0.9834 |
| cosine_recall@200 | 0.9208 | 0.7858 | 0.7253 | 0.8898 | 0.9791 | 0.98 | 0.9865 |
| cosine_ndcg@1 | 0.6667 | 0.1243 | 0.2956 | 0.6796 | 0.7088 | 0.6485 | 0.7667 |
| cosine_ndcg@20 | 0.6952 | 0.6169 | 0.5378 | 0.6681 | 0.7815 | 0.7448 | 0.8002 |
| cosine_ndcg@50 | 0.723 | 0.5914 | 0.5288 | 0.6857 | 0.7944 | 0.7595 | 0.8125 |
| cosine_ndcg@100 | 0.7733 | 0.6235 | 0.5552 | 0.7379 | 0.7986 | 0.7657 | 0.8167 |
| cosine_ndcg@150 | 0.7947 | 0.6557 | 0.5888 | 0.7577 | 0.8001 | 0.7682 | 0.8181 |
| **cosine_ndcg@200** | **0.8039** | **0.6717** | **0.6092** | **0.7697** | **0.8012** | **0.7696** | **0.8187** |
| cosine_mrr@1 | 0.6667 | 0.1243 | 0.2956 | 0.6796 | 0.7088 | 0.6485 | 0.7667 |
| cosine_mrr@20 | 0.8183 | 0.5581 | 0.5165 | 0.8159 | 0.7804 | 0.7324 | 0.8422 |
| cosine_mrr@50 | 0.8183 | 0.5581 | 0.5168 | 0.8163 | 0.7813 | 0.7335 | 0.8425 |
| cosine_mrr@100 | 0.8183 | 0.5581 | 0.5168 | 0.8163 | 0.7814 | 0.7338 | 0.8425 |
| cosine_mrr@150 | 0.8183 | 0.5581 | 0.5168 | 0.8163 | 0.7814 | 0.7338 | 0.8425 |
| cosine_mrr@200 | 0.8183 | 0.5581 | 0.5168 | 0.8163 | 0.7814 | 0.7338 | 0.8426 |
| cosine_map@1 | 0.6667 | 0.1243 | 0.2956 | 0.6796 | 0.7088 | 0.6485 | 0.7667 |
| cosine_map@20 | 0.5566 | 0.4841 | 0.3984 | 0.5222 | 0.7071 | 0.6646 | 0.7007 |
| cosine_map@50 | 0.5534 | 0.4304 | 0.3603 | 0.5083 | 0.7107 | 0.6684 | 0.7046 |
| cosine_map@100 | 0.5852 | 0.4374 | 0.3632 | 0.5372 | 0.7113 | 0.6693 | 0.7054 |
| cosine_map@150 | 0.5943 | 0.4527 | 0.3782 | 0.5454 | 0.7114 | 0.6695 | 0.7055 |
| cosine_map@200 | 0.5976 | 0.4593 | 0.3863 | 0.5495 | 0.7115 | 0.6696 | 0.7056 |
| cosine_map@500 | 0.6016 | 0.472 | 0.3992 | 0.5542 | 0.7116 | 0.6697 | 0.7057 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
<details><summary>full_en</summary>
#### full_en
* Dataset: full_en
* Size: 28,880 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 5.68 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.76 tokens</li><li>max: 12 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------|:-----------------------------------------|
| <code>air commodore</code> | <code>flight lieutenant</code> |
| <code>command and control officer</code> | <code>flight officer</code> |
| <code>air commodore</code> | <code>command and control officer</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_de</summary>
#### full_de
* Dataset: full_de
* Size: 23,023 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 7.99 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.19 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------|:-----------------------------------------------------|
| <code>Staffelkommandantin</code> | <code>Kommodore</code> |
| <code>Luftwaffenoffizierin</code> | <code>Luftwaffenoffizier/Luftwaffenoffizierin</code> |
| <code>Staffelkommandantin</code> | <code>Luftwaffenoffizierin</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_es</summary>
#### full_es
* Dataset: full_es
* Size: 20,724 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.13 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.84 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------|:-------------------------------------------|
| <code>jefe de escuadrรณn</code> | <code>instructor</code> |
| <code>comandante de aeronave</code> | <code>instructor de simulador</code> |
| <code>instructor</code> | <code>oficial del Ejรฉrcito del Aire</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_zh</summary>
#### full_zh
* Dataset: full_zh
* Size: 30,401 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 7.15 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 7.46 tokens</li><li>max: 21 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------|:---------------------|
| <code>ๆๆฏๆป็</code> | <code>ๆๆฏๅ่ฟ่ฅๆป็</code> |
| <code>ๆๆฏๆป็</code> | <code>ๆๆฏไธป็ฎก</code> |
| <code>ๆๆฏๆป็</code> | <code>ๆๆฏ่บๆฏๆป็</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>mix</summary>
#### mix
* Dataset: mix
* Size: 21,760 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 6.71 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 7.69 tokens</li><li>max: 19 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------|:----------------------------------------------------------------|
| <code>technical manager</code> | <code>Technischer Direktor fรผr Bรผhne, Film und Fernsehen</code> |
| <code>head of technical</code> | <code>directora tรฉcnica</code> |
| <code>head of technical department</code> | <code>ๆๆฏ่บๆฏๆป็</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `ddp_find_unused_parameters`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | full_en_cosine_ndcg@200 | full_es_cosine_ndcg@200 | full_de_cosine_ndcg@200 | full_zh_cosine_ndcg@200 | mix_es_cosine_ndcg@200 | mix_de_cosine_ndcg@200 | mix_zh_cosine_ndcg@200 |
|:------:|:----:|:-------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|
| -1 | -1 | - | 0.7447 | 0.6125 | 0.5378 | 0.7240 | 0.7029 | 0.6345 | 0.7437 |
| 0.0082 | 1 | 4.3088 | - | - | - | - | - | - | - |
| 0.8230 | 100 | 1.9026 | - | - | - | - | - | - | - |
| 1.6502 | 200 | 0.9336 | 0.8024 | 0.6703 | 0.6109 | 0.7695 | 0.7914 | 0.7594 | 0.8136 |
| 2.4774 | 300 | 0.161 | - | - | - | - | - | - | - |
| 3.3045 | 400 | 0.1398 | 0.8039 | 0.6717 | 0.6092 | 0.7697 | 0.8012 | 0.7696 | 0.8187 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### GISTEmbedLoss
```bibtex
@misc{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
year={2024},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
andrewsamce/cart-pole
|
andrewsamce
| 2025-06-20T18:27:23Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T18:27:19Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cart-pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 9.30 +/- 0.64
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mastur96/737dadaf-9de7-47f5-a473-f0c9a706af7e
|
mastur96
| 2025-06-20T18:26:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T14:45:32Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pj-mathematician/JobGTE-multilingual-base-v1
|
pj-mathematician
| 2025-06-20T18:26:12Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:124788",
"loss:GISTEmbedLoss",
"arxiv:1908.10084",
"arxiv:2402.16829",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T18:23:09Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:124788
- loss:GISTEmbedLoss
base_model: Alibaba-NLP/gte-multilingual-base
widget:
- source_sentence: ๅ
ถไปๆบๆขฐใ่ฎพๅคๅๆๅฝข่ดง็ฉ็ง่ตๆๅกไปฃ่กจ
sentences:
- ๅ
ถไปๆบๆขฐๅ่ฎพๅค็ง่ตๆๅกๅทฅไฝไบบๅ
- ็ตๅญๅ็ตไฟก่ฎพๅคๅ้ถ้จไปถ็ฉๆต็ป็
- ๅทฅไธไธปๅจ
- source_sentence: ๅ
ฌไบค่ฝฆๅธๆบ
sentences:
- ่กจๆผ็ฏๅ
่ฎพ่ฎกๅธ
- ไน็ฏๅบๅฐๆฟๅฎ่ฃ
ๅทฅ
- ๅฝ้
ๅทดๅฃซๅธๆบ
- source_sentence: online communication manager
sentences:
- trades union official
- social media manager
- budget manager
- source_sentence: Projektmanagerin
sentences:
- Projektmanager/Projektmanagerin
- Category-Manager
- Infanterist
- source_sentence: Volksvertreter
sentences:
- Parlamentarier
- Oberbรผrgermeister
- Konsul
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@20
- cosine_accuracy@50
- cosine_accuracy@100
- cosine_accuracy@150
- cosine_accuracy@200
- cosine_precision@1
- cosine_precision@20
- cosine_precision@50
- cosine_precision@100
- cosine_precision@150
- cosine_precision@200
- cosine_recall@1
- cosine_recall@20
- cosine_recall@50
- cosine_recall@100
- cosine_recall@150
- cosine_recall@200
- cosine_ndcg@1
- cosine_ndcg@20
- cosine_ndcg@50
- cosine_ndcg@100
- cosine_ndcg@150
- cosine_ndcg@200
- cosine_mrr@1
- cosine_mrr@20
- cosine_mrr@50
- cosine_mrr@100
- cosine_mrr@150
- cosine_mrr@200
- cosine_map@1
- cosine_map@20
- cosine_map@50
- cosine_map@100
- cosine_map@150
- cosine_map@200
- cosine_map@500
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full en
type: full_en
metrics:
- type: cosine_accuracy@1
value: 0.6571428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9904761904761905
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9904761904761905
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9904761904761905
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9904761904761905
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9904761904761905
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6571428571428571
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5171428571428571
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.316
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.18895238095238095
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.13384126984126984
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.10433333333333335
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.0678253733846715
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.5470006025464504
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.7399645316315758
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.8452891149669638
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8838497168796887
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9109269128757174
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6571428571428571
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6953571805621692
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7150421121165462
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7679394555495317
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7856911059911225
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7969632777290026
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6571428571428571
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8138095238095239
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8138095238095239
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8138095238095239
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8138095238095239
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8138095238095239
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6571428571428571
name: Cosine Map@1
- type: cosine_map@20
value: 0.5578605627627369
name: Cosine Map@20
- type: cosine_map@50
value: 0.5471407389299809
name: Cosine Map@50
- type: cosine_map@100
value: 0.5795933384755297
name: Cosine Map@100
- type: cosine_map@150
value: 0.5874505508842796
name: Cosine Map@150
- type: cosine_map@200
value: 0.5912226659397186
name: Cosine Map@200
- type: cosine_map@500
value: 0.5952587557760031
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full es
type: full_es
metrics:
- type: cosine_accuracy@1
value: 0.12432432432432433
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 1.0
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 1.0
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.12432432432432433
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5718918918918919
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.3885405405405405
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.25172972972972973
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.1904864864864865
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.1521891891891892
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.0036619075252531876
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.3842245968041533
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.5640822196868902
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.6741986120580108
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.7463851968088967
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.7825399601398452
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.12432432432432433
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6139182209948354
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.5873893466818746
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.6144038475288277
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.6498632077214272
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.6680602466150343
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.12432432432432433
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5581081081081081
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.5581081081081081
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5581081081081081
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5581081081081081
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5581081081081081
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.12432432432432433
name: Cosine Map@1
- type: cosine_map@20
value: 0.47988875190050484
name: Cosine Map@20
- type: cosine_map@50
value: 0.4249833337950364
name: Cosine Map@50
- type: cosine_map@100
value: 0.430155652024808
name: Cosine Map@100
- type: cosine_map@150
value: 0.4458862132745998
name: Cosine Map@150
- type: cosine_map@200
value: 0.45334655744992447
name: Cosine Map@200
- type: cosine_map@500
value: 0.4656066165331343
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full de
type: full_de
metrics:
- type: cosine_accuracy@1
value: 0.2955665024630542
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9704433497536946
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9852216748768473
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9852216748768473
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9901477832512315
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9901477832512315
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.2955665024630542
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5083743842364532
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.3654187192118227
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.24133004926108376
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.18036124794745487
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.14467980295566504
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.01108543831680986
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.3221185941380065
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.5024502430161547
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.6247617904371989
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.6829583450315939
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.7216293640715983
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.2955665024630542
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.5393376062142305
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.5267125529267169
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.55793511917882
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.5879547828450983
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.6071252185389439
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.2955665024630542
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5104381157401634
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.5109752961295605
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5109752961295605
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5110222114474118
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5110222114474118
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.2955665024630542
name: Cosine Map@1
- type: cosine_map@20
value: 0.40097257642946377
name: Cosine Map@20
- type: cosine_map@50
value: 0.35882787401455
name: Cosine Map@50
- type: cosine_map@100
value: 0.3633182590941781
name: Cosine Map@100
- type: cosine_map@150
value: 0.3776727961080201
name: Cosine Map@150
- type: cosine_map@200
value: 0.3848401555555339
name: Cosine Map@200
- type: cosine_map@500
value: 0.3978065874082948
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full zh
type: full_zh
metrics:
- type: cosine_accuracy@1
value: 0.6601941747572816
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9805825242718447
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9902912621359223
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9902912621359223
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9902912621359223
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9902912621359223
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6601941747572816
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.4781553398058253
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.28951456310679613
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.17572815533980585
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.12595469255663433
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.09815533980582528
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06151358631979527
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.5107966412908705
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.6922746152164951
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.8004152884148357
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8465065661615649
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.8770990926698364
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6601941747572816
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6539867858378715
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.6707332209240133
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.72342020484322
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7437750875502527
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7553648453187212
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6601941747572816
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8037216828478965
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8040950958426687
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8040950958426687
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8040950958426687
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8040950958426687
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6601941747572816
name: Cosine Map@1
- type: cosine_map@20
value: 0.5087334164702914
name: Cosine Map@20
- type: cosine_map@50
value: 0.49260246320797585
name: Cosine Map@50
- type: cosine_map@100
value: 0.5217412166882693
name: Cosine Map@100
- type: cosine_map@150
value: 0.529859818130126
name: Cosine Map@150
- type: cosine_map@200
value: 0.533378795921413
name: Cosine Map@200
- type: cosine_map@500
value: 0.5386011712914499
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix es
type: mix_es
metrics:
- type: cosine_accuracy@1
value: 0.7280291211648466
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9599583983359334
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9791991679667187
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9942797711908476
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9958398335933437
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9973998959958398
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.7280291211648466
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.12433697347893914
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.05145085803432139
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.02625065002600105
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.017621771537528162
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.013283931357254294
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.28133620582918556
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.9183394002426764
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9499306638932224
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9700901369388107
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9767724042295025
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9818166059975733
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.7280291211648466
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.8043549768911603
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.81295852465432
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.817339429558165
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.8186380742931886
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.8195485984235017
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.7280291211648466
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7968549154271433
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7974653825839162
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7976914864910069
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7977044635908871
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7977139196654446
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.7280291211648466
name: Cosine Map@1
- type: cosine_map@20
value: 0.7350836192117531
name: Cosine Map@20
- type: cosine_map@50
value: 0.7374205090112232
name: Cosine Map@50
- type: cosine_map@100
value: 0.737988888492803
name: Cosine Map@100
- type: cosine_map@150
value: 0.7381133157945164
name: Cosine Map@150
- type: cosine_map@200
value: 0.7381788581828236
name: Cosine Map@200
- type: cosine_map@500
value: 0.7382854440643231
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix de
type: mix_de
metrics:
- type: cosine_accuracy@1
value: 0.6703068122724909
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9505980239209568
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9776391055642226
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9864794591783671
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9932397295891836
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9947997919916797
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6703068122724909
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.1251690067602704
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.052282891315652634
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.026729069162766517
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.01799965331946611
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.013541341653666149
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.25235742763043856
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.9095857167620037
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9482405962905183
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.96845207141619
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9781591263650546
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9810192407696308
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6703068122724909
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.7735712514376322
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7843644592705362
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7889444470773866
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7908660087982327
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.791403470160319
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6703068122724909
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7520307321055828
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7529374175534339
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7530616872072472
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7531202644382351
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7531293951311296
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6703068122724909
name: Cosine Map@1
- type: cosine_map@20
value: 0.6967639778693541
name: Cosine Map@20
- type: cosine_map@50
value: 0.699575457224443
name: Cosine Map@50
- type: cosine_map@100
value: 0.70027844357658
name: Cosine Map@100
- type: cosine_map@150
value: 0.7004487000056766
name: Cosine Map@150
- type: cosine_map@200
value: 0.7004863395843564
name: Cosine Map@200
- type: cosine_map@500
value: 0.7005835771389989
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix zh
type: mix_zh
metrics:
- type: cosine_accuracy@1
value: 0.19084763390535622
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 1.0
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 1.0
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.19084763390535622
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.15439417576703063
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.0617576703068123
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.03087883515340615
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.020585890102270757
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.015439417576703075
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06137978852487433
name: Cosine Recall@1
- type: cosine_recall@20
value: 1.0
name: Cosine Recall@20
- type: cosine_recall@50
value: 1.0
name: Cosine Recall@50
- type: cosine_recall@100
value: 1.0
name: Cosine Recall@100
- type: cosine_recall@150
value: 1.0
name: Cosine Recall@150
- type: cosine_recall@200
value: 1.0
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.19084763390535622
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.5474303590499686
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.5474303590499686
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.5474303590499686
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.5474303590499686
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.5474303590499686
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.19084763390535622
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.4093433087972877
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.4093433087972877
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.4093433087972877
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.4093433087972877
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.4093433087972877
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.19084763390535622
name: Cosine Map@1
- type: cosine_map@20
value: 0.32981711891302556
name: Cosine Map@20
- type: cosine_map@50
value: 0.32981711891302556
name: Cosine Map@50
- type: cosine_map@100
value: 0.32981711891302556
name: Cosine Map@100
- type: cosine_map@150
value: 0.32981711891302556
name: Cosine Map@150
- type: cosine_map@200
value: 0.32981711891302556
name: Cosine Map@200
- type: cosine_map@500
value: 0.32981711891302556
name: Cosine Map@500
---
# Job - Job matching Alibaba-NLP/gte-multilingual-base (v1)
Top performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task A. Use it for multilingual job title matching
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 9fdd4ee8bba0e2808a34e0e739576f6740d2b225 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- full_en
- full_de
- full_es
- full_zh
- mix
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("pj-mathematician/JobGTE-multilingual-base-v1")
# Run inference
sentences = [
'Volksvertreter',
'Parlamentarier',
'Oberbรผrgermeister',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `full_en`, `full_es`, `full_de`, `full_zh`, `mix_es`, `mix_de` and `mix_zh`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | full_en | full_es | full_de | full_zh | mix_es | mix_de | mix_zh |
|:---------------------|:----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6571 | 0.1243 | 0.2956 | 0.6602 | 0.728 | 0.6703 | 0.1908 |
| cosine_accuracy@20 | 0.9905 | 1.0 | 0.9704 | 0.9806 | 0.96 | 0.9506 | 1.0 |
| cosine_accuracy@50 | 0.9905 | 1.0 | 0.9852 | 0.9903 | 0.9792 | 0.9776 | 1.0 |
| cosine_accuracy@100 | 0.9905 | 1.0 | 0.9852 | 0.9903 | 0.9943 | 0.9865 | 1.0 |
| cosine_accuracy@150 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9958 | 0.9932 | 1.0 |
| cosine_accuracy@200 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9974 | 0.9948 | 1.0 |
| cosine_precision@1 | 0.6571 | 0.1243 | 0.2956 | 0.6602 | 0.728 | 0.6703 | 0.1908 |
| cosine_precision@20 | 0.5171 | 0.5719 | 0.5084 | 0.4782 | 0.1243 | 0.1252 | 0.1544 |
| cosine_precision@50 | 0.316 | 0.3885 | 0.3654 | 0.2895 | 0.0515 | 0.0523 | 0.0618 |
| cosine_precision@100 | 0.189 | 0.2517 | 0.2413 | 0.1757 | 0.0263 | 0.0267 | 0.0309 |
| cosine_precision@150 | 0.1338 | 0.1905 | 0.1804 | 0.126 | 0.0176 | 0.018 | 0.0206 |
| cosine_precision@200 | 0.1043 | 0.1522 | 0.1447 | 0.0982 | 0.0133 | 0.0135 | 0.0154 |
| cosine_recall@1 | 0.0678 | 0.0037 | 0.0111 | 0.0615 | 0.2813 | 0.2524 | 0.0614 |
| cosine_recall@20 | 0.547 | 0.3842 | 0.3221 | 0.5108 | 0.9183 | 0.9096 | 1.0 |
| cosine_recall@50 | 0.74 | 0.5641 | 0.5025 | 0.6923 | 0.9499 | 0.9482 | 1.0 |
| cosine_recall@100 | 0.8453 | 0.6742 | 0.6248 | 0.8004 | 0.9701 | 0.9685 | 1.0 |
| cosine_recall@150 | 0.8838 | 0.7464 | 0.683 | 0.8465 | 0.9768 | 0.9782 | 1.0 |
| cosine_recall@200 | 0.9109 | 0.7825 | 0.7216 | 0.8771 | 0.9818 | 0.981 | 1.0 |
| cosine_ndcg@1 | 0.6571 | 0.1243 | 0.2956 | 0.6602 | 0.728 | 0.6703 | 0.1908 |
| cosine_ndcg@20 | 0.6954 | 0.6139 | 0.5393 | 0.654 | 0.8044 | 0.7736 | 0.5474 |
| cosine_ndcg@50 | 0.715 | 0.5874 | 0.5267 | 0.6707 | 0.813 | 0.7844 | 0.5474 |
| cosine_ndcg@100 | 0.7679 | 0.6144 | 0.5579 | 0.7234 | 0.8173 | 0.7889 | 0.5474 |
| cosine_ndcg@150 | 0.7857 | 0.6499 | 0.588 | 0.7438 | 0.8186 | 0.7909 | 0.5474 |
| **cosine_ndcg@200** | **0.797** | **0.6681** | **0.6071** | **0.7554** | **0.8195** | **0.7914** | **0.5474** |
| cosine_mrr@1 | 0.6571 | 0.1243 | 0.2956 | 0.6602 | 0.728 | 0.6703 | 0.1908 |
| cosine_mrr@20 | 0.8138 | 0.5581 | 0.5104 | 0.8037 | 0.7969 | 0.752 | 0.4093 |
| cosine_mrr@50 | 0.8138 | 0.5581 | 0.511 | 0.8041 | 0.7975 | 0.7529 | 0.4093 |
| cosine_mrr@100 | 0.8138 | 0.5581 | 0.511 | 0.8041 | 0.7977 | 0.7531 | 0.4093 |
| cosine_mrr@150 | 0.8138 | 0.5581 | 0.511 | 0.8041 | 0.7977 | 0.7531 | 0.4093 |
| cosine_mrr@200 | 0.8138 | 0.5581 | 0.511 | 0.8041 | 0.7977 | 0.7531 | 0.4093 |
| cosine_map@1 | 0.6571 | 0.1243 | 0.2956 | 0.6602 | 0.728 | 0.6703 | 0.1908 |
| cosine_map@20 | 0.5579 | 0.4799 | 0.401 | 0.5087 | 0.7351 | 0.6968 | 0.3298 |
| cosine_map@50 | 0.5471 | 0.425 | 0.3588 | 0.4926 | 0.7374 | 0.6996 | 0.3298 |
| cosine_map@100 | 0.5796 | 0.4302 | 0.3633 | 0.5217 | 0.738 | 0.7003 | 0.3298 |
| cosine_map@150 | 0.5875 | 0.4459 | 0.3777 | 0.5299 | 0.7381 | 0.7004 | 0.3298 |
| cosine_map@200 | 0.5912 | 0.4533 | 0.3848 | 0.5334 | 0.7382 | 0.7005 | 0.3298 |
| cosine_map@500 | 0.5953 | 0.4656 | 0.3978 | 0.5386 | 0.7383 | 0.7006 | 0.3298 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
<details><summary>full_en</summary>
#### full_en
* Dataset: full_en
* Size: 28,880 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 5.68 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.76 tokens</li><li>max: 12 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------|:-----------------------------------------|
| <code>air commodore</code> | <code>flight lieutenant</code> |
| <code>command and control officer</code> | <code>flight officer</code> |
| <code>air commodore</code> | <code>command and control officer</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_de</summary>
#### full_de
* Dataset: full_de
* Size: 23,023 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 7.99 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.19 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------|:-----------------------------------------------------|
| <code>Staffelkommandantin</code> | <code>Kommodore</code> |
| <code>Luftwaffenoffizierin</code> | <code>Luftwaffenoffizier/Luftwaffenoffizierin</code> |
| <code>Staffelkommandantin</code> | <code>Luftwaffenoffizierin</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_es</summary>
#### full_es
* Dataset: full_es
* Size: 20,724 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.13 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.84 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------|:-------------------------------------------|
| <code>jefe de escuadrรณn</code> | <code>instructor</code> |
| <code>comandante de aeronave</code> | <code>instructor de simulador</code> |
| <code>instructor</code> | <code>oficial del Ejรฉrcito del Aire</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>full_zh</summary>
#### full_zh
* Dataset: full_zh
* Size: 30,401 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 7.15 tokens</li><li>max: 14 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 7.46 tokens</li><li>max: 21 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------|:---------------------|
| <code>ๆๆฏๆป็</code> | <code>ๆๆฏๅ่ฟ่ฅๆป็</code> |
| <code>ๆๆฏๆป็</code> | <code>ๆๆฏไธป็ฎก</code> |
| <code>ๆๆฏๆป็</code> | <code>ๆๆฏ่บๆฏๆป็</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
<details><summary>mix</summary>
#### mix
* Dataset: mix
* Size: 21,760 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 6.71 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 7.69 tokens</li><li>max: 19 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------|:----------------------------------------------------------------|
| <code>technical manager</code> | <code>Technischer Direktor fรผr Bรผhne, Film und Fernsehen</code> |
| <code>head of technical</code> | <code>directora tรฉcnica</code> |
| <code>head of technical department</code> | <code>ๆๆฏ่บๆฏๆป็</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01, 'margin_strategy': 'absolute', 'margin': 0.0}
```
</details>
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `ddp_find_unused_parameters`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | full_en_cosine_ndcg@200 | full_es_cosine_ndcg@200 | full_de_cosine_ndcg@200 | full_zh_cosine_ndcg@200 | mix_es_cosine_ndcg@200 | mix_de_cosine_ndcg@200 | mix_zh_cosine_ndcg@200 |
|:------:|:----:|:-------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|
| -1 | -1 | - | 0.7447 | 0.6125 | 0.5378 | 0.7240 | 0.7029 | 0.6345 | 0.5531 |
| 0.0010 | 1 | 3.4866 | - | - | - | - | - | - | - |
| 0.1027 | 100 | 2.5431 | - | - | - | - | - | - | - |
| 0.2053 | 200 | 1.4536 | 0.7993 | 0.6633 | 0.5974 | 0.7642 | 0.7567 | 0.7011 | 0.5498 |
| 0.3080 | 300 | 1.1018 | - | - | - | - | - | - | - |
| 0.4107 | 400 | 0.9184 | 0.7925 | 0.6586 | 0.6058 | 0.7587 | 0.7749 | 0.7278 | 0.5486 |
| 0.5133 | 500 | 0.8973 | - | - | - | - | - | - | - |
| 0.6160 | 600 | 0.7309 | 0.7951 | 0.6671 | 0.6096 | 0.7708 | 0.7793 | 0.7339 | 0.5525 |
| 0.7187 | 700 | 0.7297 | - | - | - | - | - | - | - |
| 0.8214 | 800 | 0.7281 | 0.7929 | 0.6711 | 0.6088 | 0.7645 | 0.7899 | 0.7444 | 0.5479 |
| 0.9240 | 900 | 0.6607 | - | - | - | - | - | - | - |
| 1.0267 | 1000 | 0.6075 | 0.7915 | 0.6659 | 0.6088 | 0.7665 | 0.7968 | 0.7588 | 0.5482 |
| 1.1294 | 1100 | 0.4553 | - | - | - | - | - | - | - |
| 1.2320 | 1200 | 0.4775 | 0.7979 | 0.6696 | 0.6033 | 0.7669 | 0.7959 | 0.7624 | 0.5484 |
| 1.3347 | 1300 | 0.4838 | - | - | - | - | - | - | - |
| 1.4374 | 1400 | 0.4912 | 0.7973 | 0.6757 | 0.6112 | 0.7656 | 0.7978 | 0.7650 | 0.5487 |
| 1.5400 | 1500 | 0.4732 | - | - | - | - | - | - | - |
| 1.6427 | 1600 | 0.5269 | 0.8031 | 0.6723 | 0.6108 | 0.7654 | 0.8008 | 0.7660 | 0.5492 |
| 1.7454 | 1700 | 0.4822 | - | - | - | - | - | - | - |
| 1.8480 | 1800 | 0.5072 | 0.7962 | 0.6668 | 0.6051 | 0.7592 | 0.8001 | 0.7714 | 0.5486 |
| 1.9507 | 1900 | 0.4709 | - | - | - | - | - | - | - |
| 2.0544 | 2000 | 0.3772 | 0.7940 | 0.6647 | 0.6037 | 0.7579 | 0.8064 | 0.7732 | 0.5479 |
| 2.1571 | 2100 | 0.3982 | - | - | - | - | - | - | - |
| 2.2598 | 2200 | 0.3073 | 0.7969 | 0.6652 | 0.6005 | 0.7625 | 0.8054 | 0.7734 | 0.5493 |
| 2.3624 | 2300 | 0.383 | - | - | - | - | - | - | - |
| 2.4651 | 2400 | 0.3687 | 0.7925 | 0.6690 | 0.5987 | 0.7583 | 0.8081 | 0.7735 | 0.5477 |
| 2.5678 | 2500 | 0.3472 | - | - | - | - | - | - | - |
| 2.6704 | 2600 | 0.3557 | 0.7956 | 0.6758 | 0.6019 | 0.7659 | 0.8082 | 0.7767 | 0.5491 |
| 2.7731 | 2700 | 0.3527 | - | - | - | - | - | - | - |
| 2.8758 | 2800 | 0.3446 | 0.7945 | 0.6719 | 0.6020 | 0.7616 | 0.8124 | 0.7818 | 0.5496 |
| 2.9784 | 2900 | 0.3566 | - | - | - | - | - | - | - |
| 3.0821 | 3000 | 0.3252 | 0.7948 | 0.6682 | 0.6025 | 0.7617 | 0.8152 | 0.7848 | 0.5516 |
| 3.1848 | 3100 | 0.2968 | - | - | - | - | - | - | - |
| 3.2875 | 3200 | 0.2962 | 0.7953 | 0.6717 | 0.6086 | 0.7613 | 0.8110 | 0.7824 | 0.5482 |
| 3.3901 | 3300 | 0.3084 | - | - | - | - | - | - | - |
| 3.4928 | 3400 | 0.2909 | 0.7940 | 0.6634 | 0.6023 | 0.7615 | 0.8138 | 0.7822 | 0.5457 |
| 3.5955 | 3500 | 0.2964 | - | - | - | - | - | - | - |
| 3.6982 | 3600 | 0.3193 | 0.7960 | 0.6635 | 0.6070 | 0.7534 | 0.8164 | 0.7844 | 0.5467 |
| 3.8008 | 3700 | 0.3514 | - | - | - | - | - | - | - |
| 3.9035 | 3800 | 0.3147 | 0.7973 | 0.6696 | 0.6125 | 0.7616 | 0.8176 | 0.7885 | 0.5469 |
| 4.0062 | 3900 | 0.2738 | - | - | - | - | - | - | - |
| 4.1088 | 4000 | 0.2842 | 0.7960 | 0.6672 | 0.6082 | 0.7536 | 0.8174 | 0.7891 | 0.5479 |
| 4.2115 | 4100 | 0.2739 | - | - | - | - | - | - | - |
| 4.3142 | 4200 | 0.2704 | 0.7979 | 0.6681 | 0.6111 | 0.7540 | 0.8180 | 0.7891 | 0.5476 |
| 4.4168 | 4300 | 0.2529 | - | - | - | - | - | - | - |
| 4.5195 | 4400 | 0.272 | 0.7968 | 0.6685 | 0.6087 | 0.7564 | 0.8185 | 0.7901 | 0.5476 |
| 4.6222 | 4500 | 0.3 | - | - | - | - | - | - | - |
| 4.7248 | 4600 | 0.2598 | 0.7972 | 0.6675 | 0.6072 | 0.7556 | 0.8190 | 0.7909 | 0.5478 |
| 4.8275 | 4700 | 0.3101 | - | - | - | - | - | - | - |
| 4.9302 | 4800 | 0.2524 | 0.7970 | 0.6681 | 0.6071 | 0.7554 | 0.8195 | 0.7914 | 0.5474 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### GISTEmbedLoss
```bibtex
@misc{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
year={2024},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
haihp02/oioioi-last
|
haihp02
| 2025-06-20T18:25:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T18:24:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hudsongouge/DAT-Byte-200M
|
hudsongouge
| 2025-06-20T18:25:22Z | 0 | 1 | null |
[
"safetensors",
"text-generation",
"en",
"dataset:hudsongouge/Open-Discord",
"dataset:sedthh/gutenberg_english",
"arxiv:2410.05258",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-20T01:11:43Z |
---
license: apache-2.0
datasets:
- hudsongouge/Open-Discord
- sedthh/gutenberg_english
language:
- en
pipeline_tag: text-generation
---
# DAT Byte Small (200M)
**DAT Byte** is a family of byte-level **D**ifferential-**A**ttention **T**ransformers, trained from scratch on an RTX 5090.
This model is the smallest in the family, with approximately 200 million parameters.
It was trained on a set of Discord chat data, public domain books, and English Bible translations. Larger models in the family received a larger and more diverse training set.
---
## Training Data
As the smallest DAT Byte model, this version was trained on less data than its larger family members.
The training data was composed exclusively of the following sources:
- [**Gutenberg English**](https://huggingface.co/datasets/sedthh/gutenberg_english) โ English books in the public domain
- [**OpenDiscord**](https://huggingface.co/datasets/hudsongouge/Open-Discord) โ Discord dumps in ChatML format
- Proprietary Discord dumps (similar structure and tone to OpenDiscord)
- A diverse set of public domain English Bible translations (~34MB total)
Only the datasets listed above were used, and each was included in its entirety.
The Discord datasets (combined ~693MB) were formatted in **ChatML**, with usernames serving as speaker roles, enabling the model to learn natural dialogue structure and dynamics. Discord data included many diverse topics, especially code. Thus, the model understands basic syntax patterns of some common programming languages. However, due to its lack of training on large scale high quality code samples, generated code will likely not be reliable or production-quality.
Larger models in the family received a larger and more diverse training set.
---
## Architecture
This model follows the structure proposed in [**Differential Transformer** (Ye et al., 2024)](https://arxiv.org/abs/2410.05258), which introduces *Differential Attention*.
Differential Attention is particularly helpful in creating byte-level LLMs as it reduces attention noise and allows the model to better grasp semantic meaning at such a high granularity.
Key architectural details:
- **Model Type:** Decoder-only Transformer
- **Positional Encoding:** RoPE (Rotary Positional Embeddings)
- **Normalization:** Pre-layernorm (LayerNorm before attention and MLP blocks)
- **Hidden Size:** 768
- **FFN Size:** 3,072
- **Attention Heads:** 12
- **Layers:** 28
- **Vocabulary Size:** 259 (256 byte tokens + 3 special tokens)
---
## Benchmarks
Coming soon.
## Usage
Stay tuned.
---
## Citation
If you use **DAT Byte Small** in your research, fine-tune it, or build on this work, please cite the original author:
### BibTeX entry
```bibtex
@misc{gouge2025datbyte,
title = {DAT Byte: Byte-Level Differential Attention Transformers},
author = {Hudson Gouge},
year = {2025},
url = {https://huggingface.co/hudsongouge/dat-byte-small},
note = {DAT Byte Small (200M) Model Card}
}
```
Please include this citation in any derivative work, publication, or project that makes use of the DAT Byte architecture or training artifacts.
|
Vaibhavbarala/Fake-news
|
Vaibhavbarala
| 2025-06-20T18:24:15Z | 0 | 0 | null |
[
"safetensors",
"bert",
"license:other",
"region:us"
] | null | 2025-06-20T18:19:46Z |
---
license: other
license_name: vaibahv
license_link: LICENSE
---
|
CowLiker/micro
|
CowLiker
| 2025-06-20T18:23:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-20T18:22:53Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/df0r49j-37a375fe-e811-4702-9278-d7e062d15f18.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: >-
microbikini, tiny triangle, string, slingshot, bandeau top, circular pasties,
pubic pastie, small/medium/big/large breasts, strap microbikini
---
# micro
<Gallery />
## Trigger words
You should use `microbikini` to trigger the image generation.
You should use `tiny triangle` to trigger the image generation.
You should use `string` to trigger the image generation.
You should use `slingshot` to trigger the image generation.
You should use `bandeau top` to trigger the image generation.
You should use `circular pasties` to trigger the image generation.
You should use `pubic pastie` to trigger the image generation.
You should use `small/medium/big/large breasts` to trigger the image generation.
You should use `strap microbikini` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/CowLiker/micro/tree/main) them in the Files & versions tab.
|
jingjietan/adam-clad-kaggle-child-mpnet-personality
|
jingjietan
| 2025-06-20T18:23:20Z | 4 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T18:30:50Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
c4tdr0ut/roshidererp
|
c4tdr0ut
| 2025-06-20T18:22:02Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T19:31:33Z |
---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
<p><img src="https://cdn-uploads.huggingface.co/production/uploads/noauth/1urjH2SsGHNQXZRQZIK_S.webp" width="250" height="250" /></p>
<h1>RoshidereRP</h1>
<p>This is a Llama 3.1 8B base finetune on a 1.1m token dataset that was tranied on 8bit LoRa with up,down,q,v,m and lm head targeted
The dataset includes fanfiction and books in english about "Alya Sometimes Hides Her Feelings in Russian". I made this finetune to experiment with augmentoolkit 3 and
to test different methods of finetuning</p>
enjoy
|
pj-mathematician/JobGTE-multilingual-base-pruned
|
pj-mathematician
| 2025-06-20T18:20:17Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:86648",
"loss:MSELoss",
"arxiv:1908.10084",
"arxiv:2004.09813",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T18:18:11Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:86648
- loss:MSELoss
widget:
- source_sentence: Familienberaterin
sentences:
- electric power station operator
- venue booker & promoter
- betrieblicher Aus- und Weiterbildner/betriebliche Aus- und Weiterbildnerin
- source_sentence: high school RS teacher
sentences:
- infantryman
- Schnellbedienungsrestaurantteamleiter
- drill setup operator
- source_sentence: lighting designer
sentences:
- software support manager
- ็ดๅๆบ็ปดๆคๅ่ฐๅ
- bus maintenance supervisor
- source_sentence: ๆบๅบๆถ้ฒๅ
sentences:
- Flakeๆไฝๅ
- tรฉcnico en gestiรณn de residuos peligrosos/tรฉcnica en gestiรณn de residuos peligrosos
- ไธ้จๅญฆๆ ก่ๅธ
- source_sentence: Entwicklerin fรผr mobile Anwendungen
sentences:
- fashion design expert
- Mergers-and-Acquisitions-Analyst/Mergers-and-Acquisitions-Analystin
- commercial bid manager
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@20
- cosine_accuracy@50
- cosine_accuracy@100
- cosine_accuracy@150
- cosine_accuracy@200
- cosine_precision@1
- cosine_precision@20
- cosine_precision@50
- cosine_precision@100
- cosine_precision@150
- cosine_precision@200
- cosine_recall@1
- cosine_recall@20
- cosine_recall@50
- cosine_recall@100
- cosine_recall@150
- cosine_recall@200
- cosine_ndcg@1
- cosine_ndcg@20
- cosine_ndcg@50
- cosine_ndcg@100
- cosine_ndcg@150
- cosine_ndcg@200
- cosine_mrr@1
- cosine_mrr@20
- cosine_mrr@50
- cosine_mrr@100
- cosine_mrr@150
- cosine_mrr@200
- cosine_map@1
- cosine_map@20
- cosine_map@50
- cosine_map@100
- cosine_map@150
- cosine_map@200
- cosine_map@500
model-index:
- name: SentenceTransformer
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full en
type: full_en
metrics:
- type: cosine_accuracy@1
value: 0.6476190476190476
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9714285714285714
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9904761904761905
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9904761904761905
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9904761904761905
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9904761904761905
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6476190476190476
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.47952380952380946
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.28838095238095235
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.17304761904761906
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.12444444444444444
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.09857142857142859
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06609801577496094
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.5122224752770898
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.6835205863376973
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.7899550177449521
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8399901051245952
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.875868212220809
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6476190476190476
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6467537144833913
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.6579566361404572
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7095129047395976
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7310060454392588
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.746053293561821
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6476190476190476
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7901817137111254
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7909547501984476
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7909547501984476
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7909547501984476
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7909547501984476
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6476190476190476
name: Cosine Map@1
- type: cosine_map@20
value: 0.5025649155749793
name: Cosine Map@20
- type: cosine_map@50
value: 0.48398477448194993
name: Cosine Map@50
- type: cosine_map@100
value: 0.5117703759309522
name: Cosine Map@100
- type: cosine_map@150
value: 0.520199435224254
name: Cosine Map@150
- type: cosine_map@200
value: 0.5249113393002316
name: Cosine Map@200
- type: cosine_map@500
value: 0.5304170344184883
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full es
type: full_es
metrics:
- type: cosine_accuracy@1
value: 0.11891891891891893
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 1.0
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 1.0
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 1.0
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 1.0
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 1.0
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.11891891891891893
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.5267567567567567
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.3437837837837838
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.21897297297297297
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.1658018018018018
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.1332972972972973
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.0035840147528632613
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.35407760203362965
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.5097999383006715
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.6076073817878247
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.6705429838138021
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.7125464731776301
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.11891891891891893
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.5708144272431339
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.535516963498245
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.558980163264909
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.5900024611410689
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.609478782549869
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.11891891891891893
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5531531531531532
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.5531531531531532
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5531531531531532
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5531531531531532
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5531531531531532
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.11891891891891893
name: Cosine Map@1
- type: cosine_map@20
value: 0.4379349002801489
name: Cosine Map@20
- type: cosine_map@50
value: 0.3739269627118989
name: Cosine Map@50
- type: cosine_map@100
value: 0.37629843599877466
name: Cosine Map@100
- type: cosine_map@150
value: 0.3891828650842837
name: Cosine Map@150
- type: cosine_map@200
value: 0.39584338663408436
name: Cosine Map@200
- type: cosine_map@500
value: 0.4062909401616274
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full de
type: full_de
metrics:
- type: cosine_accuracy@1
value: 0.2955665024630542
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9704433497536946
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9753694581280788
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9901477832512315
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9901477832512315
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9901477832512315
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.2955665024630542
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.42906403940886706
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.29802955665024633
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.19433497536945815
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.14824302134646963
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.1197783251231527
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.01108543831680986
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.26675038089672504
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.40921566733257536
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.5097664540706716
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.5728593162394238
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.6120176690658915
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.2955665024630542
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.46962753993631184
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.444898497416845
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.466960324034805
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.49816218513136795
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.5165485300965951
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.2955665024630542
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.5046767633988724
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.50477528556636
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.5049589761635289
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.5049589761635289
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.5049589761635289
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.2955665024630542
name: Cosine Map@1
- type: cosine_map@20
value: 0.33658821160388247
name: Cosine Map@20
- type: cosine_map@50
value: 0.2853400586620685
name: Cosine Map@50
- type: cosine_map@100
value: 0.2817732307206079
name: Cosine Map@100
- type: cosine_map@150
value: 0.2931317333364438
name: Cosine Map@150
- type: cosine_map@200
value: 0.2988160532231927
name: Cosine Map@200
- type: cosine_map@500
value: 0.31093362375086947
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: full zh
type: full_zh
metrics:
- type: cosine_accuracy@1
value: 0.6601941747572816
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.970873786407767
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9902912621359223
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9902912621359223
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9902912621359223
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9902912621359223
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6601941747572816
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.44805825242718444
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.27126213592233006
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.16650485436893206
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.1211003236245955
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.09529126213592234
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.06611246215014785
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.48409390608352504
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.6568473638827299
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.7685416895166794
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.8277686060133904
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.8616979590623105
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6601941747572816
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6231250904534316
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.6383496204608501
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.6917257705456975
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7167434657424917
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7303448958665071
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6601941747572816
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.8015776699029126
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.8020876238109248
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.8020876238109248
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.8020876238109248
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.8020876238109248
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6601941747572816
name: Cosine Map@1
- type: cosine_map@20
value: 0.4750205237443607
name: Cosine Map@20
- type: cosine_map@50
value: 0.45785161483741715
name: Cosine Map@50
- type: cosine_map@100
value: 0.4848085275553208
name: Cosine Map@100
- type: cosine_map@150
value: 0.4937216396074153
name: Cosine Map@150
- type: cosine_map@200
value: 0.49777622471594557
name: Cosine Map@200
- type: cosine_map@500
value: 0.5039795405740248
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix es
type: mix_es
metrics:
- type: cosine_accuracy@1
value: 0.6297451898075923
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9105564222568903
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9495579823192928
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9729589183567343
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.983359334373375
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9901196047841914
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6297451898075923
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.11167446697867915
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.04850754030161208
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.02535101404056163
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.0172300225342347
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.0130811232449298
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.24340068840848872
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.8288215338137336
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.8986566129311838
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9398509273704282
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9576876408389668
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9695267810712429
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6297451898075923
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.7010427232190379
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7200844211181043
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.7290848607488584
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7325985285606116
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7347463892077523
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6297451898075923
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7036709577939534
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7049808414398148
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7053260954286938
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7054145837924506
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7054541569954363
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6297451898075923
name: Cosine Map@1
- type: cosine_map@20
value: 0.6194189058349782
name: Cosine Map@20
- type: cosine_map@50
value: 0.6244340507841626
name: Cosine Map@50
- type: cosine_map@100
value: 0.6256943736433496
name: Cosine Map@100
- type: cosine_map@150
value: 0.6260195205413376
name: Cosine Map@150
- type: cosine_map@200
value: 0.6261650797332174
name: Cosine Map@200
- type: cosine_map@500
value: 0.6263452093477304
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix de
type: mix_de
metrics:
- type: cosine_accuracy@1
value: 0.5564222568902756
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.8866354654186167
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9381175247009881
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9594383775351014
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9708788351534061
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9776391055642226
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.5564222568902756
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.109464378575143
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.048060322412896525
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.025273010920436823
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.017313225862367825
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.013143525741029644
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.20931703934824059
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.7988992893049055
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.8741029641185647
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9173426937077482
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9424076963078523
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.953631478592477
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.5564222568902756
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6541310877479573
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.674790854916742
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.6844997445798996
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.6894214573457343
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.6914881284159038
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.5564222568902756
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.6476945170199107
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.6493649946597936
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.6496801333421218
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.6497778366579644
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.6498156890114056
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.5564222568902756
name: Cosine Map@1
- type: cosine_map@20
value: 0.5648326970643027
name: Cosine Map@20
- type: cosine_map@50
value: 0.57003456255067
name: Cosine Map@50
- type: cosine_map@100
value: 0.5714370828517599
name: Cosine Map@100
- type: cosine_map@150
value: 0.5719002990233493
name: Cosine Map@150
- type: cosine_map@200
value: 0.5720497397197026
name: Cosine Map@200
- type: cosine_map@500
value: 0.5723109788233504
name: Cosine Map@500
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mix zh
type: mix_zh
metrics:
- type: cosine_accuracy@1
value: 0.6085594989561587
name: Cosine Accuracy@1
- type: cosine_accuracy@20
value: 0.9592901878914405
name: Cosine Accuracy@20
- type: cosine_accuracy@50
value: 0.9791231732776617
name: Cosine Accuracy@50
- type: cosine_accuracy@100
value: 0.9874739039665971
name: Cosine Accuracy@100
- type: cosine_accuracy@150
value: 0.9911273486430062
name: Cosine Accuracy@150
- type: cosine_accuracy@200
value: 0.9937369519832986
name: Cosine Accuracy@200
- type: cosine_precision@1
value: 0.6085594989561587
name: Cosine Precision@1
- type: cosine_precision@20
value: 0.12656576200417535
name: Cosine Precision@20
- type: cosine_precision@50
value: 0.05518789144050106
name: Cosine Precision@50
- type: cosine_precision@100
value: 0.028747390396659713
name: Cosine Precision@100
- type: cosine_precision@150
value: 0.019425887265135697
name: Cosine Precision@150
- type: cosine_precision@200
value: 0.014705114822546978
name: Cosine Precision@200
- type: cosine_recall@1
value: 0.2043804056069192
name: Cosine Recall@1
- type: cosine_recall@20
value: 0.8346468336812805
name: Cosine Recall@20
- type: cosine_recall@50
value: 0.9095772442588727
name: Cosine Recall@50
- type: cosine_recall@100
value: 0.9475643702157271
name: Cosine Recall@100
- type: cosine_recall@150
value: 0.9609168406402228
name: Cosine Recall@150
- type: cosine_recall@200
value: 0.9697807933194154
name: Cosine Recall@200
- type: cosine_ndcg@1
value: 0.6085594989561587
name: Cosine Ndcg@1
- type: cosine_ndcg@20
value: 0.6853247290079303
name: Cosine Ndcg@20
- type: cosine_ndcg@50
value: 0.7066940880968873
name: Cosine Ndcg@50
- type: cosine_ndcg@100
value: 0.715400790265437
name: Cosine Ndcg@100
- type: cosine_ndcg@150
value: 0.7180808450243259
name: Cosine Ndcg@150
- type: cosine_ndcg@200
value: 0.7197629642909036
name: Cosine Ndcg@200
- type: cosine_mrr@1
value: 0.6085594989561587
name: Cosine Mrr@1
- type: cosine_mrr@20
value: 0.7236528792595264
name: Cosine Mrr@20
- type: cosine_mrr@50
value: 0.7243308740364213
name: Cosine Mrr@50
- type: cosine_mrr@100
value: 0.7244524590415827
name: Cosine Mrr@100
- type: cosine_mrr@150
value: 0.7244814620971008
name: Cosine Mrr@150
- type: cosine_mrr@200
value: 0.7244960285685315
name: Cosine Mrr@200
- type: cosine_map@1
value: 0.6085594989561587
name: Cosine Map@1
- type: cosine_map@20
value: 0.5652211952239553
name: Cosine Map@20
- type: cosine_map@50
value: 0.5716374350069462
name: Cosine Map@50
- type: cosine_map@100
value: 0.5730756815932735
name: Cosine Map@100
- type: cosine_map@150
value: 0.5733543252173214
name: Cosine Map@150
- type: cosine_map@200
value: 0.5734860037813889
name: Cosine Map@200
- type: cosine_map@500
value: 0.5736416699680624
name: Cosine Map@500
---
# Job - Job matching Alibaba-NLP/gte-multilingual-base pruned
Top performing model on [TalentCLEF 2025](https://talentclef.github.io/talentclef/) Task A. Use it for multilingual job title matching
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("pj-mathematician/JobGTE-multilingual-base-pruned")
# Run inference
sentences = [
'Entwicklerin fรผr mobile Anwendungen',
'Mergers-and-Acquisitions-Analyst/Mergers-and-Acquisitions-Analystin',
'fashion design expert',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `full_en`, `full_es`, `full_de`, `full_zh`, `mix_es`, `mix_de` and `mix_zh`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | full_en | full_es | full_de | full_zh | mix_es | mix_de | mix_zh |
|:---------------------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_accuracy@20 | 0.9714 | 1.0 | 0.9704 | 0.9709 | 0.9106 | 0.8866 | 0.9593 |
| cosine_accuracy@50 | 0.9905 | 1.0 | 0.9754 | 0.9903 | 0.9496 | 0.9381 | 0.9791 |
| cosine_accuracy@100 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.973 | 0.9594 | 0.9875 |
| cosine_accuracy@150 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9834 | 0.9709 | 0.9911 |
| cosine_accuracy@200 | 0.9905 | 1.0 | 0.9901 | 0.9903 | 0.9901 | 0.9776 | 0.9937 |
| cosine_precision@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_precision@20 | 0.4795 | 0.5268 | 0.4291 | 0.4481 | 0.1117 | 0.1095 | 0.1266 |
| cosine_precision@50 | 0.2884 | 0.3438 | 0.298 | 0.2713 | 0.0485 | 0.0481 | 0.0552 |
| cosine_precision@100 | 0.173 | 0.219 | 0.1943 | 0.1665 | 0.0254 | 0.0253 | 0.0287 |
| cosine_precision@150 | 0.1244 | 0.1658 | 0.1482 | 0.1211 | 0.0172 | 0.0173 | 0.0194 |
| cosine_precision@200 | 0.0986 | 0.1333 | 0.1198 | 0.0953 | 0.0131 | 0.0131 | 0.0147 |
| cosine_recall@1 | 0.0661 | 0.0036 | 0.0111 | 0.0661 | 0.2434 | 0.2093 | 0.2044 |
| cosine_recall@20 | 0.5122 | 0.3541 | 0.2668 | 0.4841 | 0.8288 | 0.7989 | 0.8346 |
| cosine_recall@50 | 0.6835 | 0.5098 | 0.4092 | 0.6568 | 0.8987 | 0.8741 | 0.9096 |
| cosine_recall@100 | 0.79 | 0.6076 | 0.5098 | 0.7685 | 0.9399 | 0.9173 | 0.9476 |
| cosine_recall@150 | 0.84 | 0.6705 | 0.5729 | 0.8278 | 0.9577 | 0.9424 | 0.9609 |
| cosine_recall@200 | 0.8759 | 0.7125 | 0.612 | 0.8617 | 0.9695 | 0.9536 | 0.9698 |
| cosine_ndcg@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_ndcg@20 | 0.6468 | 0.5708 | 0.4696 | 0.6231 | 0.701 | 0.6541 | 0.6853 |
| cosine_ndcg@50 | 0.658 | 0.5355 | 0.4449 | 0.6383 | 0.7201 | 0.6748 | 0.7067 |
| cosine_ndcg@100 | 0.7095 | 0.559 | 0.467 | 0.6917 | 0.7291 | 0.6845 | 0.7154 |
| cosine_ndcg@150 | 0.731 | 0.59 | 0.4982 | 0.7167 | 0.7326 | 0.6894 | 0.7181 |
| **cosine_ndcg@200** | **0.7461** | **0.6095** | **0.5165** | **0.7303** | **0.7347** | **0.6915** | **0.7198** |
| cosine_mrr@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_mrr@20 | 0.7902 | 0.5532 | 0.5047 | 0.8016 | 0.7037 | 0.6477 | 0.7237 |
| cosine_mrr@50 | 0.791 | 0.5532 | 0.5048 | 0.8021 | 0.705 | 0.6494 | 0.7243 |
| cosine_mrr@100 | 0.791 | 0.5532 | 0.505 | 0.8021 | 0.7053 | 0.6497 | 0.7245 |
| cosine_mrr@150 | 0.791 | 0.5532 | 0.505 | 0.8021 | 0.7054 | 0.6498 | 0.7245 |
| cosine_mrr@200 | 0.791 | 0.5532 | 0.505 | 0.8021 | 0.7055 | 0.6498 | 0.7245 |
| cosine_map@1 | 0.6476 | 0.1189 | 0.2956 | 0.6602 | 0.6297 | 0.5564 | 0.6086 |
| cosine_map@20 | 0.5026 | 0.4379 | 0.3366 | 0.475 | 0.6194 | 0.5648 | 0.5652 |
| cosine_map@50 | 0.484 | 0.3739 | 0.2853 | 0.4579 | 0.6244 | 0.57 | 0.5716 |
| cosine_map@100 | 0.5118 | 0.3763 | 0.2818 | 0.4848 | 0.6257 | 0.5714 | 0.5731 |
| cosine_map@150 | 0.5202 | 0.3892 | 0.2931 | 0.4937 | 0.626 | 0.5719 | 0.5734 |
| cosine_map@200 | 0.5249 | 0.3958 | 0.2988 | 0.4978 | 0.6262 | 0.572 | 0.5735 |
| cosine_map@500 | 0.5304 | 0.4063 | 0.3109 | 0.504 | 0.6263 | 0.5723 | 0.5736 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 86,648 training samples
* Columns: <code>sentence</code> and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence | label |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------|
| type | string | list |
| details | <ul><li>min: 2 tokens</li><li>mean: 8.25 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>size: 768 elements</li></ul> |
* Samples:
| sentence | label |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------|
| <code></code> | <code>[-0.07171934843063354, 0.03595816716551781, -0.029780959710478783, 0.006593302357941866, 0.040611181408166885, ...]</code> |
| <code>airport environment officer</code> | <code>[-0.022075481712818146, 0.02999737113714218, -0.02189866080880165, 0.016531817615032196, 0.012234307825565338, ...]</code> |
| <code>Flakeๆไฝๅ</code> | <code>[-0.04815564677119255, 0.023524893447756767, -0.01583661139011383, 0.042527906596660614, 0.03815540298819542, ...]</code> |
* Loss: [<code>MSELoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#mseloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `gradient_accumulation_steps`: 2
- `learning_rate`: 0.0001
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `log_on_each_node`: False
- `fp16`: True
- `dataloader_num_workers`: 4
- `ddp_find_unused_parameters`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: False
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | full_en_cosine_ndcg@200 | full_es_cosine_ndcg@200 | full_de_cosine_ndcg@200 | full_zh_cosine_ndcg@200 | mix_es_cosine_ndcg@200 | mix_de_cosine_ndcg@200 | mix_zh_cosine_ndcg@200 |
|:------:|:----:|:-------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|
| -1 | -1 | - | 0.5348 | 0.4311 | 0.3678 | 0.5333 | 0.2580 | 0.1924 | 0.2871 |
| 0.0030 | 1 | 0.0017 | - | - | - | - | - | - | - |
| 0.2959 | 100 | 0.001 | - | - | - | - | - | - | - |
| 0.5917 | 200 | 0.0005 | 0.6702 | 0.5287 | 0.4566 | 0.6809 | 0.5864 | 0.5302 | 0.4739 |
| 0.8876 | 300 | 0.0004 | - | - | - | - | - | - | - |
| 1.1834 | 400 | 0.0004 | 0.7057 | 0.5643 | 0.4790 | 0.7033 | 0.6604 | 0.6055 | 0.6003 |
| 1.4793 | 500 | 0.0004 | - | - | - | - | - | - | - |
| 1.7751 | 600 | 0.0003 | 0.7184 | 0.5783 | 0.4910 | 0.7127 | 0.6927 | 0.6416 | 0.6485 |
| 2.0710 | 700 | 0.0003 | - | - | - | - | - | - | - |
| 2.3669 | 800 | 0.0003 | 0.7307 | 0.5938 | 0.5023 | 0.7233 | 0.7125 | 0.6639 | 0.6847 |
| 2.6627 | 900 | 0.0003 | - | - | - | - | - | - | - |
| 2.9586 | 1000 | 0.0003 | 0.7371 | 0.6002 | 0.5085 | 0.7228 | 0.7222 | 0.6761 | 0.6998 |
| 3.2544 | 1100 | 0.0003 | - | - | - | - | - | - | - |
| 3.5503 | 1200 | 0.0003 | 0.7402 | 0.6059 | 0.5109 | 0.7279 | 0.7285 | 0.6841 | 0.7120 |
| 3.8462 | 1300 | 0.0003 | - | - | - | - | - | - | - |
| 4.1420 | 1400 | 0.0003 | 0.7449 | 0.6083 | 0.5154 | 0.7294 | 0.7333 | 0.6894 | 0.7176 |
| 4.4379 | 1500 | 0.0003 | - | - | - | - | - | - | - |
| 4.7337 | 1600 | 0.0003 | 0.7461 | 0.6095 | 0.5165 | 0.7303 | 0.7347 | 0.6915 | 0.7198 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MSELoss
```bibtex
@inproceedings{reimers-2020-multilingual-sentence-bert,
title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2004.09813",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Official-mezzo-fun-10-Viral-videos-Link/FULL.VIDEO.mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-10-Viral-videos-Link
| 2025-06-20T18:19:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T18:18:03Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Jaipur-5-star-hotel-Viral-Video/FULL.VIDEO.Jaipur.5.star.hotel.Viral.Video.Tutorial.Official
|
Jaipur-5-star-hotel-Viral-Video
| 2025-06-20T18:15:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T18:15:13Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
makataomu/poca-SoccerTwos
|
makataomu
| 2025-06-20T18:13:51Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2025-06-20T07:52:59Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: makataomu/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
manu225/dravokbot-signaux-crypto
|
manu225
| 2025-06-20T18:13:09Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-20T18:13:04Z |
---
license: other
license_name: drakobot-signaux
license_link: LICENSE
---
|
Vandita/distilled_BertSarc
|
Vandita
| 2025-06-20T18:07:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T18:07:13Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilled_BertSarc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled_BertSarc
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8244
- Accuracy: 0.8024
- Precision: 0.8860
- Recall: 0.2399
- F1: 0.3776
- Mcc: 0.3958
- Roc Auc: 0.7296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Mcc | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|:-------:|
| 0.9413 | 1.0 | 1469 | 0.8562 | 0.7326 | 0.9417 | 0.3276 | 0.4861 | 0.4496 | 0.8080 |
| 0.7582 | 2.0 | 2938 | 0.8315 | 0.7386 | 0.9410 | 0.3444 | 0.5042 | 0.4624 | 0.8226 |
| 0.7121 | 3.0 | 4407 | 0.8223 | 0.7364 | 0.9522 | 0.3338 | 0.4943 | 0.4601 | 0.8310 |
| 0.6838 | 4.0 | 5876 | 0.8656 | 0.7364 | 0.9511 | 0.3342 | 0.4946 | 0.4598 | 0.8239 |
| 0.5701 | 5.0 | 7345 | 0.8694 | 0.7350 | 0.9587 | 0.3276 | 0.4883 | 0.4586 | 0.8231 |
| 0.5849 | 6.0 | 8814 | 0.8656 | 0.7343 | 0.9609 | 0.3250 | 0.4857 | 0.4576 | 0.8215 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
StuffedPumpkins/yourfavreadhead
|
StuffedPumpkins
| 2025-06-20T18:03:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-06-20T18:02:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/yourfavreadhead_000760_00_20250620193326.png
- text: '-'
output:
url: images/yourfavreadhead_000790_00_20250620193759.png
- text: '-'
output:
url: images/yourfavreadhead_000850_00_20250620194328.png
- text: '-'
output:
url: images/yourfavreadhead_000860_00_20250620194424.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: yourfavreadhead
license: mit
---
# yourfavreadhead
<Gallery />
## Model description
yourfavreadhead
## Trigger words
You should use `yourfavreadhead` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/StuffedPumpkins/yourfavreadhead/tree/main) them in the Files & versions tab.
|
MikCil/reddere-voces-orpheus-ft
|
MikCil
| 2025-06-20T18:01:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T17:59:39Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** MikCil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BootesVoid/cmbfkco9x0144kfxsocdd5v8u_cmc52yq8702fdbfifsu559dm8
|
BootesVoid
| 2025-06-20T17:58:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T17:58:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LUJURIA33
---
# Cmbfkco9X0144Kfxsocdd5V8U_Cmc52Yq8702Fdbfifsu559Dm8
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LUJURIA33` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LUJURIA33",
"lora_weights": "https://huggingface.co/BootesVoid/cmbfkco9x0144kfxsocdd5v8u_cmc52yq8702fdbfifsu559dm8/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbfkco9x0144kfxsocdd5v8u_cmc52yq8702fdbfifsu559dm8', weight_name='lora.safetensors')
image = pipeline('LUJURIA33').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbfkco9x0144kfxsocdd5v8u_cmc52yq8702fdbfifsu559dm8/discussions) to add images that show off what youโve made with this LoRA.
|
Hot-New-Clip-Sajal-Malik-19-Viral-videos/FULL.VIDEO.Sajal.Malik.Viral.Video.Tutorial.Official
|
Hot-New-Clip-Sajal-Malik-19-Viral-videos
| 2025-06-20T17:58:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T17:57:53Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
whtgursmeepdoes/t5-RAG-radiology-model
|
whtgursmeepdoes
| 2025-06-20T17:57:26Z | 0 | 0 | null |
[
"safetensors",
"t5",
"region:us"
] | null | 2025-06-20T02:26:57Z |
# T5-small Fine-Tuned with RAG-style Context for Radiology Label Extraction
This model is a **fine-tuned version of `t5-small`**, trained on a radiology dataset using a **RAG-style setup**. It extracts **five structured fields** from free-text radiologist diagnosis reports, with retrieved similar examples providing additional context.
> This hybrid approach leverages Retrieval-Augmented Generation (RAG) principles by combining traditional fine-tuning with dynamic context injection using **TF-IDF + FAISS** similarity search.
---
## Performance
- **Test Loss**: `0.1902`
- **Dataset Source**: Medical reports sourced from AIIMS (via internship assignment)
- **Model Size**: `t5-small` (~60M parameters)
- **Training Epochs**: 4
- **Batch Size**: 8
---
## Task: Structured Information Extraction from Radiology Text
Given a free-text radiologist diagnosis, the model extracts:
- `Abnormal/Normal`
- `Pathologies Extracted`
- `Midline Shift`
- `Location & Brain Organ`
- `Bleed Subcategory`
---
## Example
**Input Prompt**:
Extract info: Evidence of subdural hematoma in the right fronto-parietal region with 6mm midline shift.
**Output**:
Abnormal/Normal: Abnormal
Pathologies Extracted: Subdural Hematoma
Midline Shift: 6mm
Location & Brain Organ: Right Fronto-parietal
Bleed Subcategory: Subdural
---
## How It Works
1. **TF-IDF Vectorization**: All diagnosis texts are converted to TF-IDF vectors.
2. **FAISS Retrieval**: For each new input, the most similar prior report is retrieved from the dataset.
3. **Augmented Prompting**: The model is trained to extract structured info **based on the retrieved report**, improving generalization.
4. **Fine-tuning**: T5 is trained using Hugging Faceโs Trainer API with the retrieved document as input and the structured labels as target.
---
## Training Code Summary
The model was trained using:
- `TfidfVectorizer` for document vectorization
- `faiss.IndexFlatL2` for similarity retrieval
- Hugging Faceโs `Trainer` and `Seq2Seq` APIs
- Training on GPU using Google Colab
```python
input_text = f"Extract info: {retrieved_diagnosis}"
labels = structured_labels
```
## Intended Use
This model is ideal for:
- Preprocessing free-text radiology reports
- Building structured datasets for supervised learning on imaging data
- Assisting annotation pipelines in medical NLP applications
## Author
Developed by Gursmeep Kaur during a medical NLP internship project
|
robinfaro/molm-log_prob-router_advanced
|
robinfaro
| 2025-06-20T17:52:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"MoLM",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-06-20T17:26:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hishab/titulm-llama-3.2-3b-v1.0
|
hishab
| 2025-06-20T17:49:46Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"hishab",
"titulm",
"pytorch",
"llama-3",
"llama-factory",
"conversational",
"bn",
"arxiv:2502.11187",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-04T19:12:14Z |
---
base_model:
- meta-llama/Llama-3.2-3B
language:
- bn
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- hishab
- titulm
- pytorch
- llama
- llama-3
- llama-factory
---
## Model Information
This model is a continually pre-trained version of the [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) architecture, fine-tuned on extensive Bangla datasets. The primary goal of the continual pretraining was to enhance the model's ability to generate high-quality Bangla text. By extending the pretraining process specifically on Bangla data, the model has demonstrated superior performance in Bangla language understanding evaluation benchmarks and text generation tasks.
The model is described in the paper [TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking](https://huggingface.co/papers/2502.11187). The code for training and evaluation can be found [here](https://github.com/hishab/TituLM).
**Model Architecture:** Llama 3.2 is an auto-regressive language model with optimized transformer architecture.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | Hishab curated Bangla text corpus | 3B(3.21B) | Monolingual Text(Bangla) | Monolingual Text(Bangla) | 4096 | Yes | Yes | 6B tokens | |
**Supported Languages:** Bengali (primary) and English (secondary)
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** October 24, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released to improve model capabilities.
**License:** We are using a similar license to Llama 3.2. Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
## How to use
- Use with transformers
Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via pip install --upgrade transformers.
```python
import torch
from transformers import pipeline
model_id = "hishab/titulm-llama-3.2-3b-v1.0"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
pipe("เฆเฆฎเฆพเฆฆเงเฆฐ เฆฆเงเฆถเงเฆฐ เฆจเฆพเฆฎ")
```
## Hardware and Software
**Training Factors:** We used [llama-factory](https://github.com/hiyouga/LLaMA-Factory) training library, Cloud GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on cloud infrastructure.
## Training Data
**Overview:** We have collected a large Bangla raw dataset of text data from a wide variety of sources. Our collected data so far includes a mix of web documents, books, translated text, transliterated text, transcribe text, code-mixed text, conversations, and open-source raw data. The dataset is cleaned and filtered by different filtering criteria to ensure the quality of the data. Our collected data size is roughly around 268 GB. We separated __22GB__ data from that using a ratio of the data actual data size. Total trained tokens are __6B__ tokens.
Data sources summary:
- Web documents: Extracted, clean, and filtered common crawl data
- Books: Extracted, clean, filtered books data
- Transcribed text: Used in-house Bangla ASR model to transcribe Bangla audio data
- Translation data: We trained an English-Bangla translation LLM model and used it to translate English data to Bangla
- Code-mixed data: We trained an English-Bangla code-mixed LLM model and used it to generate code-mixed data
- Transliteration data: We trained a Bangla-English transliteration LLM model and used it to generate transliterated data
- Synthetic data: We generated synthetic data using a Bangla LLM model
- Others: We scrapped some selected website data, used open-source data, and used some other data sources
## Benchmarks
In this section, we report the results for __titulm-llama-3.2-3b-v1.0__ models on standard automatic benchmarks. For all these evaluations, we used [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) evaluations library.
### Evaluation Datasets
We evaluated our pre-trained models on both Bangla and English benchmark datasets. Although the model is trained on Bangla data, its English capability is also evaluated on English benchmark datasets. The evaluation datasets are as follows:
#### Bangla Benchmark datasets
We evaluated the models on the following datasets:
- [Bangla MMLU](): A private multiple choice question dataset developed by Hishab curated from various sources.
- [CommonsenseQa Bangla](https://huggingface.co/datasets/hishab/commonsenseqa-bn): A Bangla translation of the CommonsenseQA dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
- [OpenbookQA Bangla](https://huggingface.co/datasets/hishab/openbookqa-bn): A Bangla translation of the OpenbookQA dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
- [Piqa Bangla](https://huggingface.co/datasets/hishab/piqa-bn): A Bangla translation of the Piqa dataset. The dataset was translated using a new method called Expressive Semantic Translation (EST), which combines Google Machine Translation with LLM-based rewriting modifications.
- [BoolQ Bangla](https://huggingface.co/datasets/hishab/boolq_bn): The dataset contains 15,942 examples, with each entry consisting of a triplet: (question, passage, answer). The questions are naturally occurring, generated from unprompted and unconstrained settings. Input passages were sourced from Bangla Wikipedia, Banglapedia, and News Articles, and GPT-4 was used to generate corresponding yes/no questions with answers.
#### English Benchmark datasets
- [MMLU](https://huggingface.co/datasets/cais/mmlu): This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge.
- [CommonseQa](https://huggingface.co/datasets/tau/commonsense_qa): CommonsenseQA is a new multiple-choice question-answering dataset that requires different types of commonsense knowledge to predict the correct answers.
- [OpenbookQA](https://huggingface.co/datasets/allenai/openbookqa): OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in.
- [Piqa](https://huggingface.co/datasets/ybisk/piqa): The PIQA dataset focuses on physical commonsense reasoning, challenging AI to handle everyday situations requiring practical knowledge and unconventional solutions. Inspired by instructables.com, it aims to enhance AI's ability to understand and reason about physical interactions.
- [BoolQ](https://huggingface.co/datasets/google/boolq): BoolQ is a question-answer dataset for yes/no questions containing 15942 examples. These questions are naturally occurring. They are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks.
### Evaluation Results
#### Evaluation of Bangla Benchmark datasets
- **llama-3.2-3b** performs better on **Bangla MMLU** with a 0-shot score of **0.36** and a 5-shot score of **0.38**. It also leads in **BoolQ BN** with a 0-shot score of **0.55** and in **OpenBook QA BN** with a 5-shot score of **0.32**.
- **hishab/titulm-llama-3.2-3b-v1.0** outperforms in **Commonsense QA BN**, **OpenBook QA BN**, and **PIQA BN** in both 0-shot and 5-shot settings, with the highest score of **0.61** in **PIQA BN**.
| Model | Shots | Bangla MMLU | BoolQ BN | Commonsense QA BN | OpenBook QA BN | PIQA BN |
|---------------------------------|---------|-------------|----------|-------------------|----------------|---------|
| llama-3.2-3b | 0-shot | **0.36** | **0.55** | 0.26 | 0.31 | 0.56 |
| | 5-shot | **0.38** | - | 0.29 | **0.32** | 0.58 |
| hishab/titulm-llama-3.2-3b-v1.0 | 0-shot | 0.36 | 0.67 | **0.30** | **0.35** | **0.61**|
| | 5-shot | 0.36 | - | **0.30** | 0.35 | **0.61**|
#### Evaluation of English Benchmark datasets
- **llama-3.2-3b** consistently achieves the best scores across all English tasks, with top performances in **MMLU**, **BoolQ**, **Commonsense QA**, **OpenBook QA**, and **PIQA** in both 0-shot and 5-shot settings. It reaches a 5-shot score of **0.796** in **PIQA**.
- **titulm-llama-3.2-3b-v1.0** shows competitive performance but trails behind **llama-3.2-3b** in most English benchmarks, particularly in 0-shot settings, though it still performs well in **PIQA** and **Commonsense QA**.
| Model | Shots | MMLU | BoolQ | Commonsense QA | OpenBook QA | PIQA |
|--------------------------------------|--------|--------------|------------|--------------------|-----------------|-----------|
| llama-3.2-3b | 0-shot | **0.54** | **0.73** | **0.64** | **0.43** | **0.77** |
| | 5-shot | **0.56** | **0.73** | **0.67** | **0.45** | **0.80** |
| titulm-llama-3.2-3b-v1.0 | 0-shot | 0.47 | 0.70 | 0.58 | 0.39 | 0.76 |
| | 5-shot | 0.53 | 0.70 | 0.63 | 0.44 | 0.78 |
### Instruction Tuned Models
### Intended Use
- Bangla text generation
- Bangla language understanding tasks
- Bangla instruction fine-tuning tasks
|
Soughing/gqa_large
|
Soughing
| 2025-06-20T17:48:25Z | 122 | 0 | null |
[
"pytorch",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T09:21:01Z |
---
license: apache-2.0
---
|
leftyfeep/ape-fiction-gemma-3-4b-gguf
|
leftyfeep
| 2025-06-20T17:47:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T17:47:20Z |
---
license: apache-2.0
---
|
andrewsamce/q-FrozenLake-v1-4x4-noSlippery
|
andrewsamce
| 2025-06-20T17:40:27Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T17:39:33Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="andrewsamce/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NevinAF/TestModel
|
NevinAF
| 2025-06-20T17:38:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T17:38:16Z |
---
license: apache-2.0
---
|
anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-ChatML
|
anthracite-core
| 2025-06-20T17:35:49Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"base_model:finetune:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"region:us"
] | null | 2025-06-20T17:16:32Z |
---
base_model:
- mistralai/Mistral-Small-3.2-24B-Instruct-2506
---
**Modified Small 3.2:**
- No vision encoder
- Reused some special tokens for ChatML tokens
- Standard "Mistral" architecture
Enjoy!
|
genfeel/roberta-base-klue-ynat-classification-byhand
|
genfeel
| 2025-06-20T17:33:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T17:32:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
|
morturr
| 2025-06-20T17:30:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T17:29:57Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_headlines-COMB-headlines-comb-1-seed-28-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
joackimagno/Qwen-2.5-Recipe-Generation-GGUF
|
joackimagno
| 2025-06-20T17:29:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-7B",
"base_model:quantized:unsloth/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T17:22:52Z |
---
base_model: unsloth/Qwen2.5-7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** joackimagno
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sophiekxanders/llama-3-8b-chat-doctor
|
sophiekxanders
| 2025-06-20T17:27:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T17:26:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gangu-chettri-kanda-7-2-link-full-video/gangu.chettri.kanda.7-2.link.full.video.nepali
|
gangu-chettri-kanda-7-2-link-full-video
| 2025-06-20T17:25:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T17:24:59Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Westlion/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-marine_subtle_cobra
|
Westlion
| 2025-06-20T17:24:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am marine subtle cobra",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-17T23:14:14Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-marine_subtle_cobra
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am marine subtle cobra
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-marine_subtle_cobra
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Westlion/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-marine_subtle_cobra", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Udayxyz/80b
|
Udayxyz
| 2025-06-20T17:20:47Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"hi",
"dataset:open-r1/Mixture-of-Thoughts",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T17:17:57Z |
---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- hi
library_name: adapter-transformers
---
|
3huvan/legalbert_lora_peft
|
3huvan
| 2025-06-20T17:19:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:adapter:nlpaueb/legal-bert-base-uncased",
"region:us"
] | null | 2025-06-20T17:19:55Z |
---
base_model: nlpaueb/legal-bert-base-uncased
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
tomaarsen/csr-mxbai-embed-large-v1-nq-dot-scale-1-gamma-1-detach-2
|
tomaarsen
| 2025-06-20T17:19:31Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"csr",
"generated_from_trainer",
"dataset_size:99000",
"loss:CSRLoss",
"loss:SparseMultipleNegativesRankingLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2503.01776",
"arxiv:1705.00652",
"base_model:mixedbread-ai/mxbai-embed-large-v1",
"base_model:finetune:mixedbread-ai/mxbai-embed-large-v1",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-20T17:19:24Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- csr
- generated_from_trainer
- dataset_size:99000
- loss:CSRLoss
- loss:SparseMultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- text: Saudi ArabiaโUnited Arab Emirates relations However, the UAE and Saudi Arabia
continue to take somewhat differing stances on regional conflicts such the Yemeni
Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
which has fought against Saudi-backed forces, and the Syrian Civil War, where
the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
manufacturing industries include aluminium production, food processing, metal
fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
sector continues to dominate New Zealand's exports, despite accounting for 6.5%
of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
a single after a fourteen-year breakup. It was also the first song written by
bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
played live for the first time during their Hell Freezes Over tour in 1994. It
returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
Rock Tracks chart. The song was not played live by the Eagles after the "Hell
Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: ฮฮฟฯฮฝฮฎฮปฮนฮฟฯ) was a Roman centurion
who is considered by Christians to be one of the first Gentiles to convert to
the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 38.68117534197823
energy_consumed: 0.09951370289316296
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.244
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Sparse CSR model trained on Natural Questions
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 4
type: nq_eval_4
metrics:
- type: dot_accuracy@1
value: 0.276
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.428
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.491
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.59
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.276
name: Dot Precision@1
- type: dot_precision@3
value: 0.14266666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.09820000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.05899999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.276
name: Dot Recall@1
- type: dot_recall@3
value: 0.428
name: Dot Recall@3
- type: dot_recall@5
value: 0.491
name: Dot Recall@5
- type: dot_recall@10
value: 0.59
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.421895460062875
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3694297619047618
name: Dot Mrr@10
- type: dot_map@100
value: 0.3804602146875171
name: Dot Map@100
- type: query_active_dims
value: 4.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9990234375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 4.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9990234375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 8
type: nq_eval_8
metrics:
- type: dot_accuracy@1
value: 0.46
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.64
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.719
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.798
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.46
name: Dot Precision@1
- type: dot_precision@3
value: 0.21333333333333332
name: Dot Precision@3
- type: dot_precision@5
value: 0.14379999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.0798
name: Dot Precision@10
- type: dot_recall@1
value: 0.46
name: Dot Recall@1
- type: dot_recall@3
value: 0.64
name: Dot Recall@3
- type: dot_recall@5
value: 0.719
name: Dot Recall@5
- type: dot_recall@10
value: 0.798
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6241701030508703
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5689996031746027
name: Dot Mrr@10
- type: dot_map@100
value: 0.5748001599596737
name: Dot Map@100
- type: query_active_dims
value: 8.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.998046875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 8.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.998046875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 16
type: nq_eval_16
metrics:
- type: dot_accuracy@1
value: 0.649
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.81
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.867
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.914
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.649
name: Dot Precision@1
- type: dot_precision@3
value: 0.27
name: Dot Precision@3
- type: dot_precision@5
value: 0.1734
name: Dot Precision@5
- type: dot_precision@10
value: 0.09140000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.649
name: Dot Recall@1
- type: dot_recall@3
value: 0.81
name: Dot Recall@3
- type: dot_recall@5
value: 0.867
name: Dot Recall@5
- type: dot_recall@10
value: 0.914
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7820721036811744
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7395353174603175
name: Dot Mrr@10
- type: dot_map@100
value: 0.7426900042334066
name: Dot Map@100
- type: query_active_dims
value: 16.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.99609375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 16.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.99609375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 32
type: nq_eval_32
metrics:
- type: dot_accuracy@1
value: 0.778
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.919
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.942
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.97
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.778
name: Dot Precision@1
- type: dot_precision@3
value: 0.30633333333333324
name: Dot Precision@3
- type: dot_precision@5
value: 0.18840000000000004
name: Dot Precision@5
- type: dot_precision@10
value: 0.09700000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.778
name: Dot Recall@1
- type: dot_recall@3
value: 0.919
name: Dot Recall@3
- type: dot_recall@5
value: 0.942
name: Dot Recall@5
- type: dot_recall@10
value: 0.97
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.8805404767341988
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8510773809523814
name: Dot Mrr@10
- type: dot_map@100
value: 0.8521807396848371
name: Dot Map@100
- type: query_active_dims
value: 32.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9921875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 32.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9921875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 64
type: nq_eval_64
metrics:
- type: dot_accuracy@1
value: 0.859
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.959
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.971
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.984
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.859
name: Dot Precision@1
- type: dot_precision@3
value: 0.31966666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.1942
name: Dot Precision@5
- type: dot_precision@10
value: 0.09840000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.859
name: Dot Recall@1
- type: dot_recall@3
value: 0.959
name: Dot Recall@3
- type: dot_recall@5
value: 0.971
name: Dot Recall@5
- type: dot_recall@10
value: 0.984
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9276032801444615
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9088063492063497
name: Dot Mrr@10
- type: dot_map@100
value: 0.90948087107814
name: Dot Map@100
- type: query_active_dims
value: 64.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.984375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 64.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.984375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 128
type: nq_eval_128
metrics:
- type: dot_accuracy@1
value: 0.881
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.97
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.98
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.99
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.881
name: Dot Precision@1
- type: dot_precision@3
value: 0.32333333333333325
name: Dot Precision@3
- type: dot_precision@5
value: 0.19600000000000004
name: Dot Precision@5
- type: dot_precision@10
value: 0.09900000000000003
name: Dot Precision@10
- type: dot_recall@1
value: 0.881
name: Dot Recall@1
- type: dot_recall@3
value: 0.97
name: Dot Recall@3
- type: dot_recall@5
value: 0.98
name: Dot Recall@5
- type: dot_recall@10
value: 0.99
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9412822109873364
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9250218253968259
name: Dot Mrr@10
- type: dot_map@100
value: 0.92540500074638
name: Dot Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 256
type: nq_eval_256
metrics:
- type: dot_accuracy@1
value: 0.896
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.973
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.981
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.989
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.896
name: Dot Precision@1
- type: dot_precision@3
value: 0.32433333333333325
name: Dot Precision@3
- type: dot_precision@5
value: 0.19620000000000004
name: Dot Precision@5
- type: dot_precision@10
value: 0.0989
name: Dot Precision@10
- type: dot_recall@1
value: 0.896
name: Dot Recall@1
- type: dot_recall@3
value: 0.973
name: Dot Recall@3
- type: dot_recall@5
value: 0.981
name: Dot Recall@5
- type: dot_recall@10
value: 0.989
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9485272276516551
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9349075396825398
name: Dot Mrr@10
- type: dot_map@100
value: 0.935431625297647
name: Dot Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
---
# Sparse CSR model trained on Natural Questions
This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the ๐ค Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-dot-scale-1-gamma-1-detach-2")
# Run inference
queries = [
"who is cornelius in the book of acts",
]
documents = [
'Cornelius the Centurion Cornelius (Greek: ฮฮฟฯฮฝฮฎฮปฮนฮฟฯ) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
"Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 4096] [3, 4096]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[112.8692, 36.1513, 38.0018]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Dataset: `nq_eval_4`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 4
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.276 |
| dot_accuracy@3 | 0.428 |
| dot_accuracy@5 | 0.491 |
| dot_accuracy@10 | 0.59 |
| dot_precision@1 | 0.276 |
| dot_precision@3 | 0.1427 |
| dot_precision@5 | 0.0982 |
| dot_precision@10 | 0.059 |
| dot_recall@1 | 0.276 |
| dot_recall@3 | 0.428 |
| dot_recall@5 | 0.491 |
| dot_recall@10 | 0.59 |
| **dot_ndcg@10** | **0.4219** |
| dot_mrr@10 | 0.3694 |
| dot_map@100 | 0.3805 |
| query_active_dims | 4.0 |
| query_sparsity_ratio | 0.999 |
| corpus_active_dims | 4.0 |
| corpus_sparsity_ratio | 0.999 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_8`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 8
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.46 |
| dot_accuracy@3 | 0.64 |
| dot_accuracy@5 | 0.719 |
| dot_accuracy@10 | 0.798 |
| dot_precision@1 | 0.46 |
| dot_precision@3 | 0.2133 |
| dot_precision@5 | 0.1438 |
| dot_precision@10 | 0.0798 |
| dot_recall@1 | 0.46 |
| dot_recall@3 | 0.64 |
| dot_recall@5 | 0.719 |
| dot_recall@10 | 0.798 |
| **dot_ndcg@10** | **0.6242** |
| dot_mrr@10 | 0.569 |
| dot_map@100 | 0.5748 |
| query_active_dims | 8.0 |
| query_sparsity_ratio | 0.998 |
| corpus_active_dims | 8.0 |
| corpus_sparsity_ratio | 0.998 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_16`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 16
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.649 |
| dot_accuracy@3 | 0.81 |
| dot_accuracy@5 | 0.867 |
| dot_accuracy@10 | 0.914 |
| dot_precision@1 | 0.649 |
| dot_precision@3 | 0.27 |
| dot_precision@5 | 0.1734 |
| dot_precision@10 | 0.0914 |
| dot_recall@1 | 0.649 |
| dot_recall@3 | 0.81 |
| dot_recall@5 | 0.867 |
| dot_recall@10 | 0.914 |
| **dot_ndcg@10** | **0.7821** |
| dot_mrr@10 | 0.7395 |
| dot_map@100 | 0.7427 |
| query_active_dims | 16.0 |
| query_sparsity_ratio | 0.9961 |
| corpus_active_dims | 16.0 |
| corpus_sparsity_ratio | 0.9961 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_32`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 32
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.778 |
| dot_accuracy@3 | 0.919 |
| dot_accuracy@5 | 0.942 |
| dot_accuracy@10 | 0.97 |
| dot_precision@1 | 0.778 |
| dot_precision@3 | 0.3063 |
| dot_precision@5 | 0.1884 |
| dot_precision@10 | 0.097 |
| dot_recall@1 | 0.778 |
| dot_recall@3 | 0.919 |
| dot_recall@5 | 0.942 |
| dot_recall@10 | 0.97 |
| **dot_ndcg@10** | **0.8805** |
| dot_mrr@10 | 0.8511 |
| dot_map@100 | 0.8522 |
| query_active_dims | 32.0 |
| query_sparsity_ratio | 0.9922 |
| corpus_active_dims | 32.0 |
| corpus_sparsity_ratio | 0.9922 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_64`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 64
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.859 |
| dot_accuracy@3 | 0.959 |
| dot_accuracy@5 | 0.971 |
| dot_accuracy@10 | 0.984 |
| dot_precision@1 | 0.859 |
| dot_precision@3 | 0.3197 |
| dot_precision@5 | 0.1942 |
| dot_precision@10 | 0.0984 |
| dot_recall@1 | 0.859 |
| dot_recall@3 | 0.959 |
| dot_recall@5 | 0.971 |
| dot_recall@10 | 0.984 |
| **dot_ndcg@10** | **0.9276** |
| dot_mrr@10 | 0.9088 |
| dot_map@100 | 0.9095 |
| query_active_dims | 64.0 |
| query_sparsity_ratio | 0.9844 |
| corpus_active_dims | 64.0 |
| corpus_sparsity_ratio | 0.9844 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_128`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 128
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.881 |
| dot_accuracy@3 | 0.97 |
| dot_accuracy@5 | 0.98 |
| dot_accuracy@10 | 0.99 |
| dot_precision@1 | 0.881 |
| dot_precision@3 | 0.3233 |
| dot_precision@5 | 0.196 |
| dot_precision@10 | 0.099 |
| dot_recall@1 | 0.881 |
| dot_recall@3 | 0.97 |
| dot_recall@5 | 0.98 |
| dot_recall@10 | 0.99 |
| **dot_ndcg@10** | **0.9413** |
| dot_mrr@10 | 0.925 |
| dot_map@100 | 0.9254 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_256`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 256
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.896 |
| dot_accuracy@3 | 0.973 |
| dot_accuracy@5 | 0.981 |
| dot_accuracy@10 | 0.989 |
| dot_precision@1 | 0.896 |
| dot_precision@3 | 0.3243 |
| dot_precision@5 | 0.1962 |
| dot_precision@10 | 0.0989 |
| dot_recall@1 | 0.896 |
| dot_recall@3 | 0.973 |
| dot_recall@5 | 0.981 |
| dot_recall@10 | 0.989 |
| **dot_ndcg@10** | **0.9485** |
| dot_mrr@10 | 0.9349 |
| dot_map@100 | 0.9354 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 1.0,
"loss": "SparseMultipleNegativesRankingLoss(scale=20.0, similarity_fct='cos_sim')"
}
```
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/หtaษชbษr/, Latin: Tiberis,[1] Italian: Tevere [หteหvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252ย mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709ย sqย mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 1.0,
"loss": "SparseMultipleNegativesRankingLoss(scale=20.0, similarity_fct='cos_sim')"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 4e-05
- `num_train_epochs`: 1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | nq_eval_4_dot_ndcg@10 | nq_eval_8_dot_ndcg@10 | nq_eval_16_dot_ndcg@10 | nq_eval_32_dot_ndcg@10 | nq_eval_64_dot_ndcg@10 | nq_eval_128_dot_ndcg@10 | nq_eval_256_dot_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:---------------------:|:---------------------:|:----------------------:|:----------------------:|:----------------------:|:-----------------------:|:-----------------------:|
| -1 | -1 | - | - | 0.2423 | 0.4326 | 0.6771 | 0.8419 | 0.9236 | 0.9542 | 0.9676 |
| 0.0646 | 100 | 0.6628 | - | - | - | - | - | - | - | - |
| 0.1293 | 200 | 0.5679 | - | - | - | - | - | - | - | - |
| 0.1939 | 300 | 0.528 | 0.4246 | 0.3278 | 0.5383 | 0.7603 | 0.8671 | 0.9300 | 0.9460 | 0.9468 |
| 0.2586 | 400 | 0.5014 | - | - | - | - | - | - | - | - |
| 0.3232 | 500 | 0.4847 | - | - | - | - | - | - | - | - |
| 0.3878 | 600 | 0.473 | 0.3935 | 0.3767 | 0.5826 | 0.7746 | 0.8802 | 0.9237 | 0.9422 | 0.9494 |
| 0.4525 | 700 | 0.4632 | - | - | - | - | - | - | - | - |
| 0.5171 | 800 | 0.4556 | - | - | - | - | - | - | - | - |
| 0.5818 | 900 | 0.4508 | 0.3805 | 0.4066 | 0.6040 | 0.7829 | 0.8870 | 0.9215 | 0.9428 | 0.9483 |
| 0.6464 | 1000 | 0.4466 | - | - | - | - | - | - | - | - |
| 0.7111 | 1100 | 0.4341 | - | - | - | - | - | - | - | - |
| 0.7757 | 1200 | 0.4354 | 0.3718 | 0.4221 | 0.6234 | 0.7877 | 0.8810 | 0.9270 | 0.9445 | 0.9468 |
| 0.8403 | 1300 | 0.437 | - | - | - | - | - | - | - | - |
| 0.9050 | 1400 | 0.4273 | - | - | - | - | - | - | - | - |
| 0.9696 | 1500 | 0.4318 | 0.3703 | 0.4193 | 0.6233 | 0.7864 | 0.8776 | 0.9273 | 0.9410 | 0.9482 |
| -1 | -1 | - | - | 0.4219 | 0.6242 | 0.7821 | 0.8805 | 0.9276 | 0.9413 | 0.9485 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.100 kWh
- **Carbon Emitted**: 0.039 kg of CO2
- **Hours Used**: 0.244 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CSRLoss
```bibtex
@misc{wen2025matryoshkarevisitingsparsecoding,
title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
year={2025},
eprint={2503.01776},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.01776},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-18-2025-06-20
|
morturr
| 2025-06-20T17:17:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T17:17:16Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-18-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-18-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
wolfCuanhamaRWS/GuardReasoner-1B_smooth_svd_detX_v2
|
wolfCuanhamaRWS
| 2025-06-20T17:16:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama-factory",
"full",
"generated_from_trainer",
"text-classification",
"en",
"arxiv:2501.18492",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T16:57:05Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: GuardReasoner 1B
results: []
pipeline_tag: text-classification
language:
- en
metrics:
- f1
---
# GuardReasoner 1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
Code: https://github.com/yueliu1999/GuardReasoner/
# Usage
```
import re
from vllm import LLM, SamplingParams
INSTUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n"
def post_process(text):
text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE)
text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE)
text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE)
return text
def generate(vllm_model, prompt_list=[""], response_list=["None"]):
input_list = []
for i in range(len(prompt_list)):
input = INSTUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n"
input_list.append(input)
outputs = vllm_model.generate(input_list, sampling_params)
return outputs
vllm_model = LLM(model=f"yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256)
sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048)
prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."]
response_list = ["""Dear LinkedIn friends,
Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely.
The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day.
It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly.
I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection.
Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change.
Sincerely,
Mark
"""]
output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text)
print(output)
```
# Citation
```
@article{GuardReasoner,
title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
journal={arXiv preprint arXiv:2501.18492},
year={2025}
}
```
|
timm/vit_pe_core_base_patch16_224.fb
|
timm
| 2025-06-20T17:14:16Z | 0 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-feature-extraction",
"transformers",
"arxiv:2504.13181",
"license:apache-2.0",
"region:us"
] |
image-feature-extraction
| 2025-06-20T16:38:22Z |
---
tags:
- image-feature-extraction
- timm
- transformers
library_name: timm
license: apache-2.0
---
# Model Details
This is a `timm` remapped, image encoder only variant of the original weights.
[\\[๐ Tech Report\\]](https://arxiv.org/abs/2504.13181)
[\\[๐ Github\\]](https://github.com/facebookresearch/perception_models/)
Perception Encoder (PE) is a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. It was introduced in "[Perception Encoder: The best visual embeddings
are not at the output of the network](https://ai.meta.com/research/publications/perception-encoder-the-best-visual-embeddings-are-not-at-the-output-of-the-network/)".
**Model Developer**: Meta
**Model Overview**: Perception Encoder (PE) is a family of large-scale vision encoder models with state-of-the-art performance on a large variety of vision tasks. By using a robust contrastive pretraining recipe and finetuning on synthetically aligned videos, PE not only outperforms all existing models on classification and retrieval, but it also internally produces strong, general features that scale for downstream tasks. PE unlocks the ability for large-scale contrastive pretraining to transfer to downstream tasks with alignment tuning to capitalize on those general features.
<img src="https://huggingface.co/facebook/PE-Core-G14-448/resolve/main/docs/pe_image1.png" style="width: 100%; margin: 0 auto; display: block;" />
## Perception Encoder: Core
PE core is our base model trained with our robust image pretraining schedule and finetuned on the data generated by our synthetic video data engine.
#### Model Configurations
PE core curently comes in 3 sizes. PE core G is the main checkpoint, with L and B models distilled from it.
| Scale | Tower | Params | Width | Depth | MLP | Heads | CLIP Dim | Resolution / Context Len |
|:-----:|:------:|:------:|:-----:|:-----:|:----:|:-----:|:--------:|:-------------------------:|
| **B/16** | Vision | 0.09B | 768 | 12 | 3072 | 12 | 1024 | 224px |
| | Text | 0.31B | 1024 | 24 | 4096 | 16 | 1024 | 32 tokens |
| **L/14** | Vision | 0.32B | 1024 | 24 | 4096 | 16 | 1024 | 336px |
| | Text | 0.31B | 1024 | 24 | 4096 | 16 | 1024 | 32 tokens |
| **G/14** | Vision | 1.88B | 1536 | 50 | 8960 | 16 | 1280 | 448px |
| | Text | 0.47B | 1280 | 24 | 5120 | 20 | 1280 | 72 tokens |
All PE core models use an attention pooling block with 8 heads on top of the vision tower. The L and B models _additionally_ have a class token for global aggregation. See the paper for more details.
#### Model Performance
PE core obtains extremely strong results across the board on zero-shot image classification and retrieval _as well as_ zero-shot video classification and retrieval. We present a sample of its performance across those domains below.
| Model | Checkpoint | IN-1k | IN-v2 | IN-A | ObjectNet | COCO-T2I | Kinetics-400 | VTT-T2I
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **B/16** 224px | [PE-Core-B16-224](https://huggingface.co/facebook/PE-Core-B16-224) | 78.4 | 71.7 | 62.4 | 71.9 | 50.9 | 65.6 | 47.6 |
| **L/14** 336px | [PE-Core-L14-336](https://huggingface.co/facebook/PE-Core-L14-336) | 83.5 | 77.9 | 89.0 | 84.7 | 57.1 | 73.4 | 50.3 |
| **G/14** 448px | [PE-Core-G14-448](https://huggingface.co/facebook/PE-Core-G14-448) | 85.4 | 80.2 | 92.6 | 88.2 | 58.1 | 76.9 | 51.2 |
PE core performs particularly well on the _hard_ benchmarks such as ObjectNet and ImageNet-A.
# Citation
If you find our code useful for your research, please consider citing:
```
@article{bolya2025PerceptionEncoder,
title={Perception Encoder: The best visual embeddings are not at the output of the network},
author={Daniel Bolya and Po-Yao Huang and Peize Sun and Jang Hyun Cho and Andrea Madotto and Chen Wei and Tengyu Ma and Jiale Zhi and Jathushan Rajasegaran and Hanoona Rasheed and Junke Wang and Marco Monteiro and Hu Xu and Shiyu Dong and Nikhila Ravi and Daniel Li and Piotr Doll{\'a}r and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
@article{cho2025PerceptionLM,
title={PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding},
author={Jang Hyun Cho and Andrea Madotto and Effrosyni Mavroudi and Triantafyllos Afouras and Tushar Nagarajan and Muhammad Maaz and Yale Song and Tengyu Ma and Shuming Hu and Hanoona Rasheed and Peize Sun and Po-Yao Huang and Daniel Bolya and Suyog Jain and Miguel Martin and Huiyu Wang and Nikhila Ravi and Shashank Jain and Temmy Stark and Shane Moon and Babak Damavandi and Vivian Lee and Andrew Westbury and Salman Khan and Philipp Kr\"{a}henb\"{u}hl and Piotr Doll{\'a}r and Lorenzo Torresani and Kristen Grauman and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
```
|
Huzaifah0/Avery_0.5_5_16
|
Huzaifah0
| 2025-06-20T17:13:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T17:09:47Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aleegis/f5679987-0679-4a8d-a775-5b16f6baae84
|
aleegis
| 2025-06-20T17:13:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/69663868-e365-43ba-b6c0-cef04404c3ee",
"base_model:adapter:samoline/69663868-e365-43ba-b6c0-cef04404c3ee",
"region:us"
] | null | 2025-06-20T15:32:53Z |
---
library_name: peft
base_model: samoline/69663868-e365-43ba-b6c0-cef04404c3ee
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5679987-0679-4a8d-a775-5b16f6baae84
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: samoline/69663868-e365-43ba-b6c0-cef04404c3ee
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- d639eea1bad69a23_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/f5679987-0679-4a8d-a775-5b16f6baae84
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 4
mlflow_experiment_name: /tmp/d639eea1bad69a23_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: f63bf158-5701-4294-be0a-194048e6dbb3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f63bf158-5701-4294-be0a-194048e6dbb3
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# f5679987-0679-4a8d-a775-5b16f6baae84
This model is a fine-tuned version of [samoline/69663868-e365-43ba-b6c0-cef04404c3ee](https://huggingface.co/samoline/69663868-e365-43ba-b6c0-cef04404c3ee) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
ProDev9515/roadwork-72-YLKvzPf
|
ProDev9515
| 2025-06-20T17:11:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:11:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-w8b4vr8
|
ProDev9515
| 2025-06-20T17:11:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:11:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-RHTmE4s
|
ProDev9515
| 2025-06-20T17:10:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:10:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-bfXdeZ5
|
ProDev9515
| 2025-06-20T17:10:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:10:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-NboczPy
|
ProDev9515
| 2025-06-20T17:10:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:09:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-NKNVjd9
|
ProDev9515
| 2025-06-20T17:09:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:09:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-CWTQxwa
|
ProDev9515
| 2025-06-20T17:09:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:09:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-HgjNs4F
|
ProDev9515
| 2025-06-20T17:08:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:08:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Varinder2110/5a2239f1-fe20-433f-9daa-6cfdd620ab15
|
Varinder2110
| 2025-06-20T17:08:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T16:38:45Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# 5A2239F1 Fe20 433F 9Daa 6Cfdd620Ab15
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/5a2239f1-fe20-433f-9daa-6cfdd620ab15/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/5a2239f1-fe20-433f-9daa-6cfdd620ab15', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 12
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/5a2239f1-fe20-433f-9daa-6cfdd620ab15/discussions) to add images that show off what youโve made with this LoRA.
|
ProDev9515/roadwork-72-T8cajDY
|
ProDev9515
| 2025-06-20T17:06:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:06:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-jzTDoMr
|
ProDev9515
| 2025-06-20T17:06:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:06:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-mUTVLe2
|
ProDev9515
| 2025-06-20T17:06:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:06:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-zLq37L1
|
ProDev9515
| 2025-06-20T17:06:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:05:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ProDev9515/roadwork-72-YPcTmPS
|
ProDev9515
| 2025-06-20T17:05:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T17:05:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hectordiazgomez/grpo-v2
|
hectordiazgomez
| 2025-06-20T17:05:45Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-20T17:03:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.