| Column | Dtype | Range / Values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-16 18:32:29 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 506 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-16 18:32:14 |
| card | string | length 11 to 1.01M |
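The rows below are individual records from this dump, in the column order given above. As a quick way to explore them programmatically, here is a minimal sketch assuming the records have been exported to a JSON-lines file; the filename `model_cards.jsonl` and the export step itself are assumptions, not part of the dump.

```python
# Minimal sketch: load the exported records and check them against the
# schema table above. "model_cards.jsonl" is a hypothetical filename.
import pandas as pd

df = pd.read_json("model_cards.jsonl", lines=True)
print(df.dtypes)  # should line up with the schema table
top = df.sort_values("downloads", ascending=False)
print(top[["modelId", "downloads", "likes"]].head())
```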
Heyoka974/armellelora7
Heyoka974
2025-08-16T17:02:46Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-16T16:27:27Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: arme7 --- # Armellelora7 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `arme7` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "arme7", "lora_weights": "https://huggingface.co/Heyoka974/armellelora7/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Heyoka974/armellelora7', weight_name='lora.safetensors') image = pipeline('arme7').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 32 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Heyoka974/armellelora7/discussions) to add images that show off what you've made with this LoRA.
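The diffusers snippet in the card above ends with the generated image still in memory. A minimal continuation, assuming the `image` object from that snippet, writes it to disk; the filename is an illustrative choice:

```py
# Continuation of the card's diffusers example. The filename is an
# illustrative choice, not part of the original card.
image.save("armellelora7_output.png")
```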
Muapi/sketching-portrait
Muapi
2025-08-16T16:27:24Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-16T16:27:10Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Sketching Portrait ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1304134@1223041", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755359040
ggozzy
2025-08-16T15:45:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T15:45:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VIDEOS-19-izzy-Viral-Video-Clips/Clip.Izzy.Viral.Video.Original.Link.Tiktok.official.tutorial
VIDEOS-19-izzy-Viral-Video-Clips
2025-08-16T15:02:22Z
0
0
null
[ "region:us" ]
null
2025-08-16T15:02:13Z
<a href="https://watch-bloggx777x.blogspot.com/2025/07/tuyhtfhydfhnfh.html"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a> <a href="https://watch-bloggx777x.blogspot.com/2025/07/tuyhtfhydfhnfh.html" rel="nofollow">🔴 ➤► Click Here to 👉👉 (Watch Full video)</a> <a href="https://watch-bloggx777x.blogspot.com/2025/07/tuyhtfhydfhnfh.html" rel="nofollow">🔴 ➤► Click Here to 👉👉 (Full video Link)</a>
Jovar1/blockassist-bc-bold_hulking_rooster_1755355892
Jovar1
2025-08-16T14:53:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bold hulking rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T14:52:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bold hulking rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/GemmaComments-GGUF
mradermacher
2025-08-16T14:24:27Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "sft", "trl", "en", "base_model:maxwellt/GemmaComments", "base_model:quantized:maxwellt/GemmaComments", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-16T14:22:37Z
--- base_model: maxwellt/GemmaComments language: - en library_name: transformers model_name: GemmaComments mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer - sft - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/maxwellt/GemmaComments <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GemmaComments-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q3_K_S.gguf) | Q3_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.IQ4_XS.gguf) | IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q3_K_L.gguf) | Q3_K_L | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q5_K_M.gguf) | Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q6_K.gguf) | Q6_K | 0.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/GemmaComments-GGUF/resolve/main/GemmaComments.f16.gguf) | f16 | 0.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
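The card above lists the quant files but defers usage to TheBloke's READMEs. As one concrete option, here is a minimal sketch with `llama-cpp-python`, assuming the recommended `Q4_K_M` file has already been downloaded from the repo; the prompt and context size are illustrative:

```python
# Minimal sketch: run one of the static quants listed above with
# llama-cpp-python. Assumes GemmaComments.Q4_K_M.gguf is in the working dir.
from llama_cpp import Llama

llm = Llama(model_path="GemmaComments.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write a one-line comment for a bubble sort function.", max_tokens=64)
print(out["choices"][0]["text"])
```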
BootesVoid/cmee9f3ze0i4mrts8r3ahvqkw_cmeeap8uj0icorts8mrlbglot
BootesVoid
2025-08-16T14:09:20Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-16T14:09:18Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: WHITEMOD20 --- # Cmee9F3Ze0I4Mrts8R3Ahvqkw_Cmeeap8Uj0Icorts8Mrlbglot <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `WHITEMOD20` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "WHITEMOD20", "lora_weights": "https://huggingface.co/BootesVoid/cmee9f3ze0i4mrts8r3ahvqkw_cmeeap8uj0icorts8mrlbglot/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmee9f3ze0i4mrts8r3ahvqkw_cmeeap8uj0icorts8mrlbglot', weight_name='lora.safetensors') image = pipeline('WHITEMOD20').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmee9f3ze0i4mrts8r3ahvqkw_cmeeap8uj0icorts8mrlbglot/discussions) to add images that show off what you've made with this LoRA.
mradermacher/GPT-oss-sft-s1K-i1-GGUF
mradermacher
2025-08-16T14:01:02Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "open-r1", "trl", "sft", "en", "dataset:yentinglin/s1K-1.1-trl-format", "base_model:HectorHe/GPT-oss-sft-s1K", "base_model:quantized:HectorHe/GPT-oss-sft-s1K", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-16T10:34:45Z
--- base_model: HectorHe/GPT-oss-sft-s1K datasets: yentinglin/s1K-1.1-trl-format language: - en library_name: transformers model_name: GPT-oss-sft-s1K mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer - open-r1 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/HectorHe/GPT-oss-sft-s1K <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GPT-oss-sft-s1K-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/GPT-oss-sft-s1K-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ1_M.gguf) | i1-IQ1_M | 12.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ1_S.gguf) | i1-IQ1_S | 12.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ2_XS.gguf) | i1-IQ2_XS | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ2_M.gguf) | i1-IQ2_M | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ2_S.gguf) | i1-IQ2_S | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ3_S.gguf) | i1-IQ3_S | 12.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q2_K.gguf) | i1-Q2_K | 12.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q2_K_S.gguf) | i1-Q2_K_S | 12.2 | very low quality | |
[GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q4_0.gguf) | i1-Q4_0 | 12.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-IQ3_M.gguf) | i1-IQ3_M | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q4_1.gguf) | i1-Q4_1 | 13.5 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.0 | | | [GGUF](https://huggingface.co/mradermacher/GPT-oss-sft-s1K-i1-GGUF/resolve/main/GPT-oss-sft-s1K.i1-Q6_K.gguf) | i1-Q6_K | 22.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
krish53/finetuned_correct_model
krish53
2025-08-16T13:33:58Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-16T13:33:46Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** krish53 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
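The card above names the base checkpoint but includes no usage snippet. A minimal sketch for loading the fine-tune with `transformers` follows; the prompt and generation settings are illustrative, and the repo id is taken from the record above:

```python
# Minimal sketch: load the fine-tuned model with transformers.
# Prompt and max_new_tokens are illustrative choices.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "krish53/finetuned_correct_model"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
inputs = tok("Explain what a LoRA adapter is in one sentence.", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=48)
print(tok.decode(ids[0], skip_special_tokens=True))
```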
VoilaRaj/69_zvrjS2
VoilaRaj
2025-08-16T13:08:50Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-16T13:05:04Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
eusuf01/blockassist-bc-smooth_humming_butterfly_1755348849
eusuf01
2025-08-16T12:56:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T12:56:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
phospho-app/furkanbsk-gr00t-so101-table-cleanup-mrq54
phospho-app
2025-08-16T12:42:43Z
0
0
phosphobot
[ "phosphobot", "gr00t", "robotics", "dataset:youliangtan/so101-table-cleanup", "region:us" ]
robotics
2025-08-16T11:38:29Z
--- datasets: youliangtan/so101-table-cleanup library_name: phosphobot pipeline_tag: robotics model_name: gr00t tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 500, in wait_for return fut.result() ^^^^^^^^^^^^ File "/root/phosphobot/am/gr00t.py", line 1146, in read_output async for line in process.stdout: File "/opt/conda/lib/python3.11/asyncio/streams.py", line 765, in __anext__ val = await self.readline() ^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/streams.py", line 566, in readline line = await self.readuntil(sep) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/streams.py", line 658, in readuntil await self._wait_for_data('readuntil') File "/opt/conda/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data await self._waiter asyncio.exceptions.CancelledError The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/root/phosphobot/am/gr00t.py", line 1157, in run_gr00t_training await asyncio.wait_for(read_output(), timeout=timeout_seconds) File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 502, in wait_for raise exceptions.TimeoutError() from exc TimeoutError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/root/src/helper.py", line 166, in predict trainer.train(timeout_seconds=timeout_seconds) File "/root/phosphobot/am/gr00t.py", line 1325, in train asyncio.run( File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/root/phosphobot/am/gr00t.py", line 1162, in run_gr00t_training raise TimeoutError( TimeoutError: Training process exceeded timeout of 3600 seconds. Please consider lowering the number of epochs and/or batch size. ``` ## Training parameters: - **Dataset**: [youliangtan/so101-table-cleanup](https://huggingface.co/datasets/youliangtan/so101-table-cleanup) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 49 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
shindy-dev/Llama-3-shindy-jp-8B-GGUF
shindy-dev
2025-08-16T12:25:44Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "ja", "en", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:quantized:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-16T09:05:08Z
--- library_name: transformers license: llama3 language: - ja - en tags: - llama-cpp base_model: - elyza/Llama-3-ELYZA-JP-8B --- # Llama-3-shindy-jp-8B-GGUF ## Model Description Based on [elyza/Llama-3-ELYZA-JP-8B-GGUF](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B-GGUF). (Built with Meta Llama3) ## License [Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)
VoilaRaj/69_gb2HlG
VoilaRaj
2025-08-16T12:12:43Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-16T12:08:54Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
SicariusSicariiStuff/Impish_Mind_8B_GGUF_HA
SicariusSicariiStuff
2025-08-16T12:10:16Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:SicariusSicariiStuff/Impish_Mind_8B", "base_model:quantized:SicariusSicariiStuff/Impish_Mind_8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-16T11:55:22Z
--- base_model: - SicariusSicariiStuff/Impish_Mind_8B language: - en library_name: transformers license: apache-2.0 quantized_by: SicariusSicariiStuff ---
SicariusSicariiStuff/Impish_Mind_8B_ARM_HA
SicariusSicariiStuff
2025-08-16T12:09:58Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:SicariusSicariiStuff/Impish_Mind_8B", "base_model:quantized:SicariusSicariiStuff/Impish_Mind_8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-16T11:55:07Z
--- base_model: - SicariusSicariiStuff/Impish_Mind_8B language: - en library_name: transformers license: apache-2.0 quantized_by: SicariusSicariiStuff ---
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755343950
quantumxnode
2025-08-16T11:59:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:58:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
abdabd22001/micheal_scott_LoRA_2
abdabd22001
2025-08-16T11:50:50Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-08-16T11:50:43Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a photo of Micheal Scott from the office widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - abdabd22001/micheal_scott_LoRA_2 <Gallery /> ## Model description These are abdabd22001/micheal_scott_LoRA_2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Micheal Scott from the office to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](abdabd22001/micheal_scott_LoRA_2/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
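The card above leaves its "How to use" snippet as a TODO. A minimal sketch under the card's own stated setup (SDXL base 1.0, this LoRA, the card's trigger phrase) might look like the following; the dtype, device, and output filename are illustrative choices:

```python
# Minimal sketch for the card's TODO: SDXL base + this LoRA adapter.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("abdabd22001/micheal_scott_LoRA_2")
image = pipe("a photo of Micheal Scott from the office").images[0]
image.save("micheal_scott.png")  # illustrative filename
```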
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755342901
lisaozill03
2025-08-16T11:39:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:39:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1755342896
capungmerah627
2025-08-16T11:39:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging soaring porcupine", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:39:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stinging soaring porcupine --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
SicariusSicariiStuff/Phi-lthy4_ARM_HA
SicariusSicariiStuff
2025-08-16T11:39:09Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:SicariusSicariiStuff/Phi-lthy4", "base_model:quantized:SicariusSicariiStuff/Phi-lthy4", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-16T11:36:48Z
--- base_model: - SicariusSicariiStuff/Phi-lthy4 language: - en library_name: transformers license: apache-2.0 quantized_by: SicariusSicariiStuff ---
Watch-Aston-Villa-vs-Newcastle-live-tv/Watch.Videos.Aston.Villa.vs.Newcastle.live.tv.Official
Watch-Aston-Villa-vs-Newcastle-live-tv
2025-08-16T11:35:30Z
0
0
null
[ "region:us" ]
null
2025-08-16T11:34:59Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/mrmpsap6?Live-Stream" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
manancode/opus-mt-en-ny-ctranslate2-android
manancode
2025-08-16T11:23:38Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:23:04Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-ny-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-ny` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-ny - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
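The card above credits an "automated conversion pipeline" but does not show the conversion step itself. The operation it describes (Helsinki-NLP/opus-mt-en-ny to INT8 CTranslate2) can be sketched with CTranslate2's Python converter API, assuming the original `transformers` weights are reachable; the output directory name is illustrative:

```python
# Minimal sketch of the INT8 conversion described above; the output
# directory name is an illustrative choice.
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter("Helsinki-NLP/opus-mt-en-ny")
converter.convert("opus-mt-en-ny-ct2", quantization="int8")
```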
Dejiat/blockassist-bc-savage_unseen_bobcat_1755343341
Dejiat
2025-08-16T11:22:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:22:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
HectorHe/Qwen1.5-MOE-sft-nemotron-code
HectorHe
2025-08-16T11:19:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_moe", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:autoprogrammer/nemotron_code_lf_filtered", "base_model:Qwen/Qwen1.5-MoE-A2.7B", "base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-15T09:05:44Z
--- base_model: Qwen/Qwen1.5-MoE-A2.7B datasets: autoprogrammer/nemotron_code_lf_filtered library_name: transformers model_name: Qwen1.5-MOE-sft-nemotron-code tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen1.5-MOE-sft-nemotron-code This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [autoprogrammer/nemotron_code_lf_filtered](https://huggingface.co/datasets/autoprogrammer/nemotron_code_lf_filtered) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-sft-nemotron-code", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/w0x1b02r) This model was trained with SFT. ### Framework versions - TRL: 0.18.0.dev0 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Dejiat/blockassist-bc-savage_unseen_bobcat_1755343133
Dejiat
2025-08-16T11:19:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:19:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-en-luo-ctranslate2-android
manancode
2025-08-16T11:17:03Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:16:43Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-luo-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-luo` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-luo - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-lue-ctranslate2-android
manancode
2025-08-16T11:16:09Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:15:56Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-lue-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-lue` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-lue - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
minhchauu217/blockassist-bc-flightless_unseen_parrot_1755341929
minhchauu217
2025-08-16T11:15:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flightless unseen parrot", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:15:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flightless unseen parrot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-en-lu-ctranslate2-android
manancode
2025-08-16T11:15:15Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:15:02Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-lu-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-lu` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-lu - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-loz-ctranslate2-android
manancode
2025-08-16T11:14:56Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:14:41Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-loz-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-loz` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-loz - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-en-ln-ctranslate2-android
manancode
2025-08-16T11:14:35Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T11:14:06Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-ln-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-ln` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-ln - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
VoilaRaj/69_pMugfk
VoilaRaj
2025-08-16T11:08:25Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-16T11:04:31Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
kumoooo/blockassist-bc-aquatic_restless_camel_1755341821
kumoooo
2025-08-16T11:04:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic restless camel", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T11:03:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic restless camel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Harsh1729/R1-Distill-Llama-8B-SFT-cotroller_dataset-bespoke-52k_all_cotif-w_partial_soln-w_change_of_thgt
Harsh1729
2025-08-16T11:03:35Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-16T10:57:18Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B library_name: transformers model_name: R1-Distill-Llama-8B-SFT-cotroller_dataset-bespoke-52k_all_cotif-w_partial_soln-w_change_of_thgt tags: - sft - full-finetuning - generated_from_trainer licence: license --- # Model Card for R1-Distill-Llama-8B-SFT-cotroller_dataset-bespoke-52k_all_cotif-w_partial_soln-w_change_of_thgt This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Harsh1729/R1-Distill-Llama-8B-SFT-cotroller_dataset-bespoke-52k_all_cotif-w_partial_soln-w_change_of_thgt", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.13.0 - Transformers: 4.46.0 - Pytorch: 2.7.0 - Datasets: 3.2.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
manancode/opus-mt-en-alv-ctranslate2-android
manancode
2025-08-16T10:51:05Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:50:49Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-en-alv-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-alv` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-en-alv - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-el-ar-ctranslate2-android
manancode
2025-08-16T10:48:00Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:47:46Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-el-ar-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-el-ar` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-el-ar - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-pl-ctranslate2-android
manancode
2025-08-16T10:42:57Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:42:47Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-pl-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-pl` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-pl - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-pis-ctranslate2-android
manancode
2025-08-16T10:42:41Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:42:29Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-pis-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-pis` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-pis - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-nso-ctranslate2-android
manancode
2025-08-16T10:41:31Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:41:21Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-nso-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-nso` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-nso - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
kumoooo/blockassist-bc-aquatic_restless_camel_1755340357
kumoooo
2025-08-16T10:41:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic restless camel", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T10:40:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic restless camel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-de-no-ctranslate2-android
manancode
2025-08-16T10:41:15Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:41:06Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-no-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-no` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-no - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-lt-ctranslate2-android
manancode
2025-08-16T10:39:36Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:39:23Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-lt-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-lt` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-lt - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-de-loz-ctranslate2-android
manancode
2025-08-16T10:39:17Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:39:05Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-loz-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-loz` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-loz - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Neooot/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_horned_macaw
Neooot
2025-08-16T10:38:30Z
98
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am whiskered_horned_macaw", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T09:50:17Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am whiskered_horned_macaw --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
oegbo/gemma3-latex-processor
oegbo
2025-08-16T10:38:23Z
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-16T10:38:15Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
haryoaw/xlm-roberta-base_massive_en-US_0
haryoaw
2025-08-16T10:35:47Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-15T23:33:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jiddisch/llama-3.1-8b-roni-angular-lora
jiddisch
2025-08-16T10:34:02Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-16T10:33:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
manancode/opus-mt-de-ee-ctranslate2-android
manancode
2025-08-16T10:30:55Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:30:42Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-de-ee-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-ee` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-de-ee - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
Amanda2345/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-giant_shiny_sandpiper
Amanda2345
2025-08-16T10:29:23Z
101
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am giant_shiny_sandpiper", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T09:26:20Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am giant_shiny_sandpiper --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
manancode/opus-mt-cs-de-ctranslate2-android
manancode
2025-08-16T10:20:50Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:20:34Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-cs-de-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-cs-de` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-cs-de - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
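The usage snippet above translates a single sentence; `translate_batch` is designed to take many sentences at once, which is where CTranslate2's throughput advantage shows. A sketch (paths are placeholders, and the decoding options shown are optional tuning knobs):

```python
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("path/to/model", device="cpu")  # or "cuda"
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

# Czech inputs for this cs-de model.
sentences = ["Dobrý den.", "Jak se máš?", "Děkuji za pomoc."]
batch = [sp_source.encode(s, out_type=str) for s in sentences]

# beam_size trades quality for speed; max_batch_size bounds memory use.
results = translator.translate_batch(batch, beam_size=4, max_batch_size=32)
for result in results:
    print(sp_target.decode(result.hypotheses[0]))
```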
manancode/opus-mt-crs-fi-ctranslate2-android
manancode
2025-08-16T10:19:44Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:19:32Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-crs-fi-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-crs-fi` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-crs-fi - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-cpf-en-ctranslate2-android
manancode
2025-08-16T10:17:55Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:17:41Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-cpf-en-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-cpf-en` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-cpf-en - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
manancode/opus-mt-cel-en-ctranslate2-android
manancode
2025-08-16T10:16:24Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T10:16:11Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-cel-en-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-cel-en` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-cel-en - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
ACECA/lowMvMax_64
ACECA
2025-08-16T10:09:21Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-16T03:48:55Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
redotpaybiz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_scurrying_lobster
redotpaybiz
2025-08-16T10:06:46Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am prickly scurrying lobster", "trl", "genrl-swarm", "I am prickly_scurrying_lobster", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T13:28:19Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_scurrying_lobster tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am prickly scurrying lobster - trl - genrl-swarm - I am prickly_scurrying_lobster licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_scurrying_lobster This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="redotpaybiz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_scurrying_lobster", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
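The card's quick start covers inference only. As a rough sketch of what a GRPO fine-tuning loop looks like in TRL 0.15.x (the prompt dataset and the length-based reward below are placeholder assumptions, not the Gensyn swarm's actual setup):

```python
# Hedged sketch of TRL's GRPOTrainer; the real rl-swarm training differs.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# GRPO expects a dataset with a "prompt" column; these are toy prompts.
train_dataset = Dataset.from_dict({"prompt": [
    "Explain GRPO in one sentence.",
    "What is a policy gradient?",
]})

def reward_len(completions, **kwargs):
    """Toy reward: prefer completions close to 100 characters."""
    return [-abs(100 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-sketch", num_generations=4),
    train_dataset=train_dataset,
)
trainer.train()
```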
manancode/opus-mt-bcl-de-ctranslate2-android
manancode
2025-08-16T09:59:00Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T09:58:47Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-bcl-de-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-bcl-de` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-bcl-de - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755336635
rvipitkirubbe
2025-08-16T09:58:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mottled foraging ape", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T09:58:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mottled foraging ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manancode/opus-mt-ar-el-ctranslate2-android
manancode
2025-08-16T09:52:41Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-16T09:52:18Z
--- license: apache-2.0 tags: - translation - opus-mt - ctranslate2 - quantized language: - multilingual pipeline_tag: translation --- # opus-mt-ar-el-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ar-el` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-ar-el - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by**: Automated conversion pipeline ## Usage ### With CTranslate2 ```python import ctranslate2 import sentencepiece as spm # Load the model translator = ctranslate2.Translator("path/to/model") # Load tokenizers sp_source = spm.SentencePieceProcessor(model_file="source.spm") sp_target = spm.SentencePieceProcessor(model_file="target.spm") # Translate source_tokens = sp_source.encode("Your text here", out_type=str) results = translator.translate_batch([source_tokens]) translation = sp_target.decode(results[0].hypotheses[0]) ``` ## Performance This INT8 quantized version provides: - ~75% reduction in model size - Faster inference speed - Maintained translation quality - Mobile-friendly deployment ## Original Model Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
kapalbalap/blockassist-bc-peaceful_wary_owl_1755337836
kapalbalap
2025-08-16T09:51:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T09:51:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Adun/Llama-3.2-3B-Instruct-MEA
Adun
2025-08-16T09:35:03Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:adapter:unsloth/Llama-3.2-3B-Instruct", "region:us" ]
null
2025-08-16T09:33:39Z
--- base_model: unsloth/Llama-3.2-3B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
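Since this repository ships a PEFT adapter (PEFT 0.15.2) rather than full weights, the usual loading path is to attach it to the listed base model. A sketch, assuming the adapter applies cleanly to `unsloth/Llama-3.2-3B-Instruct`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Llama-3.2-3B-Instruct"  # base model named in the metadata
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA/PEFT adapter weights from this repository.
model = PeftModel.from_pretrained(base, "Adun/Llama-3.2-3B-Instruct-MEA")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```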
wasabuko/blockassist-bc-noisy_zealous_macaw_1755334627
wasabuko
2025-08-16T09:33:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "noisy zealous macaw", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T09:29:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - noisy zealous macaw --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
runchat/lora-533d7b31-63fd-42a0-be75-b68de7db171f-bfg7jr
runchat
2025-08-16T09:25:21Z
0
0
diffusers
[ "diffusers", "flux", "lora", "text-to-image", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-16T09:25:13Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md base_model: black-forest-labs/FLUX.1-dev tags: - flux - lora - diffusers - text-to-image widget: - text: 'a photo of a sks style' output: url: "placeholder.jpg" --- # Flux LoRA: sks This is a LoRA (Low-Rank Adaptation) model for Flux.1-dev fine-tuned on images with the trigger word `sks`. ## Files - `pytorch_lora_weights.safetensors`: Diffusers format (use with diffusers library) - `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMATIC1111, ComfyUI, etc.) ## Usage ### Diffusers Library ```python from diffusers import FluxPipeline import torch # Load base model pipe = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16 ) # Load LoRA weights (diffusers format) pipe.load_lora_weights("runchat/lora-533d7b31-63fd-42a0-be75-b68de7db171f-bfg7jr", weight_name="pytorch_lora_weights.safetensors") pipe = pipe.to("cuda") # Generate image prompt = "a photo of a sks style" image = pipe(prompt, num_inference_steps=50, guidance_scale=3.5).images[0] image.save("output.png") ``` ### WebUI (AUTOMATIC1111, ComfyUI, etc.) Download the `pytorch_lora_weights_webui.safetensors` file and place it in your WebUI's LoRA directory. Use the trigger word `sks` in your prompts. ## Training Details - Base model: black-forest-labs/FLUX.1-dev - Training steps: 500 - Learning rate: 0.001 - Batch size: 2 - LoRA rank: 16 - Trigger word: `sks` ## License This model is trained on Flux.1-dev and inherits its non-commercial license. Please see the [license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) for usage restrictions.
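A note on adapter strength: the snippet above applies the LoRA at full weight. Diffusers also lets you bake the adapter into the base weights at a reduced scale via `fuse_lora`; a sketch (the 0.8 scale is an arbitrary example, not a recommendation from the trainer):

```python
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "runchat/lora-533d7b31-63fd-42a0-be75-b68de7db171f-bfg7jr",
    weight_name="pytorch_lora_weights.safetensors",
)
# Merge the LoRA into the base weights at 80% strength; call
# pipe.unfuse_lora() later to undo the merge if needed.
pipe.fuse_lora(lora_scale=0.8)
pipe = pipe.to("cuda")

image = pipe("a photo of a sks style",
             num_inference_steps=50, guidance_scale=3.5).images[0]
image.save("output_scaled.png")
```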
ACECA/lowMvMax_59
ACECA
2025-08-16T09:24:41Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-16T03:48:53Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
xhsane/epm16
xhsane
2025-08-16T09:23:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-16T05:10:51Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** xhsane - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
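The card names the base model and training stack but gives no inference snippet. Since the repository is tagged as a full safetensors Llama checkpoint, a plain transformers pipeline should work; a sketch, untested against this exact upload:

```python
from transformers import pipeline

# Assumes the repo holds complete model weights, as its tags suggest.
generator = pipeline("text-generation", model="xhsane/epm16", device_map="auto")
output = generator(
    [{"role": "user", "content": "Hello! Who are you?"}],
    max_new_tokens=64,
    return_full_text=False,
)[0]
print(output["generated_text"])
```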
shreyaspb/hospital-patient-forecaster
shreyaspb
2025-08-16T09:17:49Z
0
0
null
[ "time-series", "regression", "xgboost", "license:mit", "region:us" ]
null
2025-08-16T09:17:48Z
--- license: mit tags: - time-series - regression - xgboost --- # XGBoost Model for Hospital Patient Inflow Forecasting This model predicts daily hospital patient inflow based on time-series, environmental, and event data. Average RMSE on test data: **22.77 patients**.
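The card states the model type and test RMSE but not how to load it. A hedged sketch follows; the artifact name `model.json` and the feature columns are guesses for illustration, not the repository's documented schema:

```python
import pandas as pd
import xgboost as xgb

# Hypothetical file name and features; check the repo for the real ones.
model = xgb.XGBRegressor()
model.load_model("model.json")

features = pd.DataFrame([{
    "day_of_week": 4,       # Friday
    "month": 8,
    "is_holiday": 0,
    "temperature_c": 31.0,
    "local_event": 1,
}])
print(f"Predicted daily patient inflow: {model.predict(features)[0]:.0f}")
```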
SicariusSicariiStuff/Impish_Nemo_12B_HA_NL
SicariusSicariiStuff
2025-08-16T09:15:45Z
0
0
transformers
[ "transformers", "gguf", "en", "dataset:SicariusSicariiStuff/UBW_Tapestries", "base_model:SicariusSicariiStuff/Impish_Nemo_12B", "base_model:quantized:SicariusSicariiStuff/Impish_Nemo_12B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-16T08:54:01Z
--- base_model: - SicariusSicariiStuff/Impish_Nemo_12B datasets: - SicariusSicariiStuff/UBW_Tapestries language: - en library_name: transformers license: apache-2.0 quantized_by: SicariusSicariiStuff ---
fulllvideo/VIDEO.18.Afrin.Er.Link.Viral.Video
fulllvideo
2025-08-16T09:13:09Z
0
0
null
[ "region:us" ]
null
2025-08-16T09:11:32Z
<a href="https://nettrends.cfd/VIDEO-18-Afrin-Er-Link-Viral-Video"> ๐ŸŒ Click Here To link (Full Viral Video Link) ๐Ÿ”ด โžคโ–บDOWNLOAD๐Ÿ‘‰๐Ÿ‘‰๐ŸŸข โžค <a href="https://nettrends.cfd/VIDEO-18-Afrin-Er-Link-Viral-Video"> ๐ŸŒ Click Here To link https://nettrends.cfd/VIDEO-18-Afrin-Er-Link-Viral-Video https://nettrends.cfd/VIDEO-18-Afrin-Er-Link-Viral-Video
kapalbalap/blockassist-bc-peaceful_wary_owl_1755335444
kapalbalap
2025-08-16T09:11:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T09:11:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xhsane/epmmodel
xhsane
2025-08-16T09:11:49Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-16T09:11:33Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** xhsane - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
SicariusSicariiStuff/Impish_Longtail_12B
SicariusSicariiStuff
2025-08-16T09:02:39Z
0
2
null
[ "safetensors", "mistral", "en", "dataset:SicariusSicariiStuff/UBW_Tapestries", "base_model:SicariusSicariiStuff/Impish_Nemo_12B", "base_model:finetune:SicariusSicariiStuff/Impish_Nemo_12B", "license:apache-2.0", "region:us" ]
null
2025-08-15T16:53:02Z
--- license: apache-2.0 language: - en base_model: - SicariusSicariiStuff/Impish_Nemo_12B datasets: - SicariusSicariiStuff/UBW_Tapestries --- <div align="center"> <b style="font-size: 40px;">Impish_Longtail_12B</b> </div> --- <img src="https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B/resolve/main/Images/Impish_Longtail_12B.png" alt="Impish_Longtail_12B" style="width: 50%; min-width: 500px; display: block; margin: auto;"> --- <a href="https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B#tldr" style="color: purple; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">Click here for TL;DR</a> --- This is a finetune on top of my [Impish_Nemo_12B](https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B); the goal was to improve long-context understanding and to add support for Slavic languages. For more details, look at [Impish_Nemo_12B](https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B)'s model card. So is this model **"better"?** **Hard to say**, tuning on top of a model often changes it in unpredictable ways, and I really like **Impish_Nemo**. In short, this tune might dilute some of the **style** that made it great, **or** for some, this might be a **huge improvement**; to each their own, as they say, so just use the one you have the most fun with. --- ### TL;DR - Theoretically **better long context**. - Improved **Russian** and other Slavic languages. - New settings for better long-context handling with this model [here.](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B/resolve/main/Images/Settings/Longtail_Gen_Settings.png) You can download the yaml [here.](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B/resolve/main/Presets/Longtail.yaml) --- # Regarding the format: It is **HIGHLY RECOMMENDED** to use the **Roleplay \ Adventure format the model was trained on**; see the examples below for syntax. It allows **very fast and easy** writing of character cards with a **minimal amount of tokens**. It's a modification of an old-skool CAI style format I call **SICAtxt** (**S**imple, **I**nexpensive **C**haracter **A**ttributes plain-text): --- ## **SICAtxt** for **roleplay**: ``` X's Persona: X is a ..... Traits: Likes: Dislikes: Quirks: Goals: Dialogue example ``` ## **SICAtxt** for **Adventure:** ``` Adventure: <short description> $World_Setting: $Scenario: ``` --- # Character cards: --- ## Adventure: - [Morrowind - Hilde the Nordish Gladiator](https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B/resolve/main/Images/Adventure_Cards/Arena_Fights_Hilde.png) (fighting in the **Arena** in **Vivec**'s city of **Morrowind** for blood and honor.) - [Morrowind - Male Orc](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B/resolve/main/Adventure_Cards/Adventure_Card_MW_ORC.png) (An **Orc** who wants to get to **Balmora** from **Seyda Neen**.) - [Morrowind - Female Breton](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B/resolve/main/Adventure_Cards/Adventure_Card_MW_F_Breton.png) (A female **Breton** with an impressive... heart, who wants to **join the Mages Guild** in **Balmora**.) --- ## Roleplay: - [Calanthe](https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B/resolve/main/Images/Character_Cards/Calanthe_Australian_Prison.png) (The Australian **Overseer** at a rare-earth extraction penal colony; she's got **6-pack abs**, but **no mercy**.)
- [Alexis](https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B/resolve/main/Images/Character_Cards/Alexis_Survival.png) (The **diabolic reconnaissance officer**, trying to survive the **Safari experience**.) - [Alexandra](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B/resolve/main/Character_Cards/Alexandra.png) (A networking professional **tsundere** who likes you. She knows **Systema**.) - [Shmena Koeset](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B/resolve/main/Character_Cards/Shmena_Koeset.png) (An overweight and foul-mouthed **troll huntress** with a bad temper.) - [Takai_Puraisu](https://huggingface.co/SicariusSicariiStuff/Oni_Mitsubishi_12B/resolve/main/Character_Cards/Takai_Puraisu.png) (Car dealership simulator) - [Vesper](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Vesper.png) (Schizo **Space Adventure**) - [Nina_Nakamura](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Nina_Nakamura.png) (The **sweetest** dorky co-worker) - [Employee#11](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Employee%2311.png) (**Schizo workplace** with a **schizo worker**) --- ## Model Details - Intended use: **Role-Play**, **Adventure**, **Creative Writing**, **General Tasks**. - Censorship level: <b>Medium - Low</b> - **X / 10** (10 completely uncensored) ## UGI score: --- ## Impish_Longtail_12B is available at the following quantizations: - Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B) - GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_GGUF) | [iMatrix](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_iMatrix) | [High-Attention](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_GGUF_HA) | [iMatrix-High-Attention](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_HA_NL) - GPTQ: [4-Bit-32](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_GPTQ_4-bit-32) - EXL3: [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_EXL3_4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_EXL3_5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_EXL3_6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_EXL3_7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_EXL3_8.0bpw) - Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_FP8) - Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_ARM) | [Q4_0_High-Attention](https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B_ARM_HA) --- ## Recommended settings for assistant mode <details> <summary>Full generation settings: <b>Debug Deterministic</b>.</summary> <img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Debug Deterministic_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;"> </details> <details> <summary>Full generation settings: <b>min_p</b>.</summary> <img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="min_P_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;"> </details> --- ## Recommended settings for Roleplay mode --- <h2 style="color: green; font-weight: bold; font-size: 36px; text-align:
center;">Specialized Roleplay Settings for Impish_Longtail_12B, click below to expand:</h2> <h2 style="color: chartreuse; font-weight: bold; font-size: 32px; text-align: center;">(Important!)</h2> <details> <summary><b>Longtail</b> โ€” Better for longer context following, recall and complex instructions</summary> <img src="https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B/resolve/main/Images/Settings/Longtail_Gen_Settings.png" alt="Impish_Longtail_12B_RP_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;"> </details> <details> <summary><b>Impish_Magic</b> โ€” Wild, yet very coherent!</summary> <img src="https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B/resolve/main/Images/Settings/Impish_Magic_Preset.png" alt="Impish_Magic_Preset" style="width: 100%; min-width: 600px; display: block; margin: auto;"> </details> <details> <summary><b>Fiendish</b> โ€” More wild, but still very coherent!</summary> <img src="https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B/resolve/main/Images/Settings/Fiendish_Gen_Settings.png" alt="Impish_Longtail_12B_RP_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;"> </details> --- # Model instruction template: ChatML ``` <|im_start|>system You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|> <|im_start|>User request {prompt}<|im_end|> <|im_start|>AI answer ``` --- <h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2> <a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a> --- ## Citation Information ``` @llm{Impish_Longtail_12B, author = {SicariusSicariiStuff}, title = {Impish_Longtail_12B}, year = {2025}, publisher = {Hugging Face}, url = {https://huggingface.co/SicariusSicariiStuff/Impish_Longtail_12B} } ``` --- ## Other stuff - [Impish_LLAMA_4B](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B) the **โ€œImpish experienceโ€**, now runnable on spinning rust & toasters. - [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms, with SLOP detector. - [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all. - [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_3_0_iter_8_prover1_17553
neural-interactive-proofs
2025-08-16T09:01:20Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-16T08:55:56Z
--- base_model: Qwen/Qwen2.5-32B-Instruct library_name: transformers model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_3_0_iter_8_prover1_17553 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_3_0_iter_8_prover1_17553 This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_3_0_iter_8_prover1_17553", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-16_08-14-17_cv_qwen2.5_32B_prover_debate_2_rounds_3_0_iter_8_prover1) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.2 - Transformers: 4.53.2 - Pytorch: 2.7.0 - Datasets: 3.0.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755333213
ihsanridzi
2025-08-16T08:59:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T08:59:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry flexible owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mang3dd/blockassist-bc-tangled_slithering_alligator_1755332603
mang3dd
2025-08-16T08:49:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T08:49:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lzy2233/lzy_model
lzy2233
2025-08-16T08:49:25Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-07-13T03:25:39Z
--- license: apache-2.0 ---
SDK666/SoundsRight_DEREVERBERATION_16000HZ_V5
SDK666
2025-08-16T08:49:15Z
0
0
null
[ "region:us" ]
null
2025-07-01T07:59:50Z
# Container Template for SoundsRight Subnet Miners This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively. This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed. To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt. Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html). Verify that the CDI specification was generated correctly with: ``` $ nvidia-ctk cdi list ``` You should see this in your output: ``` nvidia.com/gpu=all nvidia.com/gpu=0 ``` If you are running podman as root, run the following command to start the container: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` If you are running the container rootless, there are a few more changes to make. First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters: ``` [nvidia-container-cli] no-cgroups = true [nvidia-container-runtime] debug = "/tmp/nvidia-container-runtime.log" ``` You can also run the following command to achieve the same result: ``` $ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place ``` Run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` Running the container will spin up an API with the following endpoints: 1. `/status/` : Communicates API status 2. `/prepare/` : Download model checkpoint and initialize model 3. `/upload-audio/` : Upload audio files, save to noisy audio directory 4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory 5. `/download-enhanced/` : Download enhanced audio files By default the API will use host `0.0.0.0` and port `6500`. (A hedged request sketch follows the references below.) ### References 1. **Welker, Simon; Richter, Julius; Gerkmann, Timo** *Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*. Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932. [DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653) 2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo** *Speech Enhancement and Dereverberation with Diffusion-based Generative Models*. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364. [DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241) 3.
**Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo** *EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*. Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
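As a rough illustration of the endpoint flow above, the `curl` calls below walk through one enhancement pass. The card does not document the request/response schemas, so the HTTP methods and the multipart field name `file` are assumptions, not part of the official API description.

```
# 1. Check API status (assumed GET)
curl http://0.0.0.0:6500/status/

# 2. Download the model checkpoint and initialize the model (assumed POST)
curl -X POST http://0.0.0.0:6500/prepare/

# 3. Upload a noisy audio file; the field name "file" is an assumption
curl -X POST -F "file=@noisy.wav" http://0.0.0.0:6500/upload-audio/

# 4. Enhance the uploaded files (assumed POST)
curl -X POST http://0.0.0.0:6500/enhance/

# 5. Download the enhanced audio (assumed GET)
curl -o enhanced.wav http://0.0.0.0:6500/download-enhanced/
```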
ypszn/blockassist-bc-yapping_pawing_worm_1755333785
ypszn
2025-08-16T08:44:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T08:43:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755332145
chainway9
2025-08-16T08:44:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T08:43:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fulllvideo/Full.portal.do.zacarias.diaba.loira.morta.portal.zacarias.diaba.loira.morta
fulllvideo
2025-08-16T08:34:21Z
0
0
null
[ "region:us" ]
null
2025-08-16T08:32:40Z
<a href="https://nettrends.cfd/Full-portal-do-zacarias-diaba-loira-morta-portal-zacaias-diaba-loira-morta"> ๐ŸŒ Click Here To link (Full Viral Video Link) ๐Ÿ”ด โžคโ–บDOWNLOAD๐Ÿ‘‰๐Ÿ‘‰๐ŸŸข โžค <a href="https://nettrends.cfd/Full-portal-do-zacarias-diaba-loira-morta-portal-zacaias-diaba-loira-morta"> ๐ŸŒ Click Here To link https://nettrends.cfd/Full-portal-do-zacarias-diaba-loira-morta-portal-zacaias-diaba-loira-morta https://nettrends.cfd/Full-portal-do-zacarias-diaba-loira-morta-portal-zacaias-diaba-loira-morta
SDK666/SoundsRight_DENOISING_16000HZ_V5
SDK666
2025-08-16T08:32:02Z
0
0
null
[ "region:us" ]
null
2025-07-01T07:43:35Z
# Container Template for SoundsRight Subnet Miners This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively. This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed. To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt. Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html). Verify that the CDI specification was generated correctly with: ``` $ nvidia-ctk cdi list ``` You should see this in your output: ``` nvidia.com/gpu=all nvidia.com/gpu=0 ``` If you are running podman as root, run the following command to start the container: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` If you are running the container rootless, there are a few more changes to make. First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters: ``` [nvidia-container-cli] no-cgroups = true [nvidia-container-runtime] debug = "/tmp/nvidia-container-runtime.log" ``` You can also run the following command to achieve the same result: ``` $ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place ``` Run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` Running the container will spin up an API with the following endpoints: 1. `/status/` : Communicates API status 2. `/prepare/` : Download model checkpoint and initialize model 3. `/upload-audio/` : Upload audio files, save to noisy audio directory 4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory 5. `/download-enhanced/` : Download enhanced audio files By default the API will use host `0.0.0.0` and port `6500`. ### References 1. **Welker, Simon; Richter, Julius; Gerkmann, Timo** *Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*. Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932. [DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653) 2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo** *Speech Enhancement and Dereverberation with Diffusion-based Generative Models*. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364. [DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241) 3.
**Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo** *EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*. Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
SDK666/SoundsRight_DENOISING_16000HZ_V2
SDK666
2025-08-16T08:30:48Z
0
0
null
[ "region:us" ]
null
2025-07-01T08:50:07Z
# Container Template for SoundsRight Subnet Miners This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively. This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed. To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt. Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html). Verify that the CDI specification was generated correctly with: ``` $ nvidia-ctk cdi list ``` You should see this in your output: ``` nvidia.com/gpu=all nvidia.com/gpu=0 ``` If you are running podman as root, run the following command to start the container: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` If you are running the container rootless, there are a few more changes to make. First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters: ``` [nvidia-container-cli] no-cgroups = true [nvidia-container-runtime] debug = "/tmp/nvidia-container-runtime.log" ``` You can also run the following command to achieve the same result: ``` $ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place ``` Run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` Running the container will spin up an API with the following endpoints: 1. `/status/` : Communicates API status 2. `/prepare/` : Download model checkpoint and initialize model 3. `/upload-audio/` : Upload audio files, save to noisy audio directory 4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory 5. `/download-enhanced/` : Download enhanced audio files By default the API will use host `0.0.0.0` and port `6500`. ### References 1. **Welker, Simon; Richter, Julius; Gerkmann, Timo** *Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*. Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932. [DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653) 2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo** *Speech Enhancement and Dereverberation with Diffusion-based Generative Models*. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364. [DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241) 3.
**Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo** *EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*. Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755331515
ihsanridzi
2025-08-16T08:30:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T08:30:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry flexible owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
priyancjain/gemma-finetune-gguf
priyancjain
2025-08-16T08:25:44Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-270m-it", "base_model:finetune:unsloth/gemma-3-270m-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-16T08:20:07Z
--- base_model: unsloth/gemma-3-270m-it tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** priyancjain - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-270m-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
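The card does not include a usage snippet, so here is a minimal sketch, assuming the safetensors checkpoint in this repo loads with the standard `transformers` text-generation pipeline (the prompt is made up):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="priyancjain/gemma-finetune-gguf",
    device_map="auto",
)
messages = [{"role": "user", "content": "Give me one tip for writing clear documentation."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```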
John6666/anai-aleido-spell-v10-sdxl
John6666
2025-08-16T08:24:21Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "mature", "merge", "noobai", "Illustrious XL v2.0", "illustrious", "en", "base_model:Laxhar/noobai-XL-1.1", "base_model:merge:Laxhar/noobai-XL-1.1", "base_model:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-08-16T08:14:44Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - mature - merge - noobai - Illustrious XL v2.0 - illustrious base_model: - OnomaAIResearch/Illustrious-XL-v2.0 - Laxhar/noobai-XL-1.1 --- The original model is [here](https://civitai.com/models/1871564/anaialeidospell?modelVersionId=2118333). This model was created by [Dark_Schneider](https://civitai.com/user/Dark_Schneider).
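The repo is tagged `diffusers:StableDiffusionXLPipeline`, so a minimal loading sketch would look like the following; the prompt and sampler settings are illustrative assumptions, not recommendations from the merge author:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the merged SDXL checkpoint in half precision on a CUDA device.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/anai-aleido-spell-v10-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, looking at viewer, masterpiece, best quality",  # illustrative prompt
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("sample.png")
```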
ACECA/lowMvMax_74
ACECA
2025-08-16T08:24:08Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T15:07:26Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Lavitate23/bert-base-text-classifier
Lavitate23
2025-08-16T08:03:57Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:bert-base-uncased", "lora", "transformers", "base_model:google-bert/bert-base-uncased", "base_model:adapter:google-bert/bert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-08-15T20:20:59Z
--- library_name: peft license: apache-2.0 base_model: bert-base-uncased tags: - base_model:adapter:bert-base-uncased - lora - transformers model-index: - name: bert-base-text-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-text-classifier This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.17.0 - Transformers 4.55.1 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
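Since the card omits a usage example, the sketch below shows one plausible way to load the adapter with PEFT. The task head and label count are assumptions: the card does not state what the classifier predicts, so `num_labels=2` is a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# num_labels=2 is an assumption; adjust to the adapter's actual label set.
base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model = PeftModel.from_pretrained(base, "Lavitate23/bert-base-text-classifier")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```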
kapalbalap/blockassist-bc-peaceful_wary_owl_1755331381
kapalbalap
2025-08-16T08:03:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T08:03:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1755329409
maxibillion1975
2025-08-16T07:58:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent squeaky sandpiper", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T07:58:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - iridescent squeaky sandpiper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF
tensorblock
2025-08-16T07:55:34Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "open-r1", "trl", "sft", "TensorBlock", "GGUF", "dataset:Neelectric/OpenR1-Math-220k_CN-K12_OLMo-2_4096toks", "base_model:Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch", "base_model:quantized:Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-16T06:34:50Z
--- base_model: Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch datasets: Neelectric/OpenR1-Math-220k_CN-K12_OLMo-2_4096toks library_name: transformers model_name: OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch tags: - generated_from_trainer - open-r1 - trl - sft - TensorBlock - GGUF licence: license --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co) [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2) [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock) [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock) ## Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch - GGUF <div style="text-align: left; margin: 20px 0;"> <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Join our Discord to learn more about what we're building โ†— </a> </div> This repo contains GGUF format model files for [Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch](https://huggingface.co/Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277). ## Our projects <table border="1" cellspacing="0" cellpadding="10"> <tr> <th colspan="2" style="font-size: 25px;">Forge</th> </tr> <tr> <th colspan="2"> <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/> </th> </tr> <tr> <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th> </tr> <tr> <th colspan="2"> <a href="https://github.com/TensorBlock/forge" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">๐Ÿš€ Try it now! 
๐Ÿš€</a> </th> </tr> <tr> <th style="font-size: 25px;">Awesome MCP Servers</th> <th style="font-size: 25px;">TensorBlock Studio</th> </tr> <tr> <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th> <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th> </tr> <tr> <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th> <th>A lightweight, open, and extensible multi-LLM interaction studio.</th> </tr> <tr> <th> <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">๐Ÿ‘€ See what we built ๐Ÿ‘€</a> </th> <th> <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style=" display: inline-block; padding: 8px 16px; background-color: #FF7F50; color: white; text-decoration: none; border-radius: 6px; font-weight: bold; font-family: sans-serif; ">๐Ÿ‘€ See what we built ๐Ÿ‘€</a> </th> </tr> </table> ## Prompt template ``` <|endoftext|><|system|> {system_prompt} <|user|> {prompt} <|assistant|> ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q2_K.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q2_K.gguf) | Q2_K | 2.858 GB | smallest, significant quality loss - not recommended for most purposes | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q3_K_S.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q3_K_S.gguf) | Q3_K_S | 3.302 GB | very small, high quality loss | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q3_K_M.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q3_K_M.gguf) | Q3_K_M | 3.652 GB | very small, high quality loss | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q3_K_L.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q3_K_L.gguf) | Q3_K_L | 3.951 GB | small, substantial quality loss | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q4_0.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q4_0.gguf) | Q4_0 | 4.217 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q4_K_S.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q4_K_S.gguf) | Q4_K_S | 4.248 GB | small, greater quality loss | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q4_K_M.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q4_K_M.gguf) | Q4_K_M | 4.472 GB | medium, balanced quality - recommended | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q5_0.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q5_0.gguf) | Q5_0 | 5.078 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | 
[OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q5_K_S.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q5_K_S.gguf) | Q5_K_S | 5.078 GB | large, low quality loss - recommended | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q5_K_M.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q5_K_M.gguf) | Q5_K_M | 5.209 GB | large, very low quality loss - recommended | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q6_K.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q6_K.gguf) | Q6_K | 5.992 GB | very large, extremely low quality loss | | [OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q8_0.gguf](https://huggingface.co/tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF/blob/main/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q8_0.gguf) | Q8_0 | 7.760 GB | very large, extremely low quality loss - not recommended | ## Downloading instructions ### Command line First, install the Hugging Face CLI: ```shell pip install -U "huggingface_hub[cli]" ``` Then, download an individual model file to a local directory: ```shell huggingface-cli download tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF --include "OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/Neelectric_OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
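Once a file is downloaded, a minimal `llama-cpp-python` sketch for running it with the card's prompt template is below; the chosen quant file, context size, system prompt, and question are illustrative assumptions:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/OLMo-2-1124-7B-Instruct_SFTv02.08_1epoch-Q4_K_M.gguf",
    n_ctx=4096,  # assumed context size
)

# Prompt built from the template shown above.
prompt = (
    "<|endoftext|><|system|>\nYou are a helpful assistant.\n"
    "<|user|>\nWhat is 7 * 8?\n<|assistant|>\n"
)
out = llm(prompt, max_tokens=128, stop=["<|user|>"])
print(out["choices"][0]["text"])
```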
zarude/blockassist-bc-rabid_timid_rat_1755330871
zarude
2025-08-16T07:55:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rabid timid rat", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T07:55:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rabid timid rat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kumoooo/blockassist-bc-aquatic_restless_camel_1755330409
kumoooo
2025-08-16T07:54:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic restless camel", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T07:54:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic restless camel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kapalbalap/blockassist-bc-peaceful_wary_owl_1755330482
kapalbalap
2025-08-16T07:49:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T07:48:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
homanp/gemma-redactor-lora
homanp
2025-08-16T07:48:38Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-15T20:38:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
unitova/blockassist-bc-zealous_sneaky_raven_1755329045
unitova
2025-08-16T07:48:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T07:48:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ACECA/lowMvMax_50
ACECA
2025-08-16T07:47:47Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-16T03:48:49Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
zarude/blockassist-bc-rabid_timid_rat_1755330392
zarude
2025-08-16T07:47:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rabid timid rat", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T07:47:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rabid timid rat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ACECA/lowMvMax_49
ACECA
2025-08-16T07:44:52Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-16T03:48:49Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
kapalbalap/blockassist-bc-peaceful_wary_owl_1755330129
kapalbalap
2025-08-16T07:43:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T07:43:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
blanar/gpt-oss-20b-medical-reasoner-2
blanar
2025-08-16T07:41:20Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "dataset:Intelligent-Internet/II-Medical-Reasoning-SFT", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-08-15T21:31:47Z
--- base_model: openai/gpt-oss-20b datasets: Intelligent-Internet/II-Medical-Reasoning-SFT library_name: transformers model_name: gpt-oss-20b-medical-reasoner-2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gpt-oss-20b-medical-reasoner-2 This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [Intelligent-Internet/II-Medical-Reasoning-SFT](https://huggingface.co/datasets/Intelligent-Internet/II-Medical-Reasoning-SFT) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="blanar/gpt-oss-20b-medical-reasoner-2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.6.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kapalbalap/blockassist-bc-peaceful_wary_owl_1755329681
kapalbalap
2025-08-16T07:35:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-16T07:35:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).