| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string | 5 chars | 139 chars |
| author | string | 2 chars | 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-18 00:45:06 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (507 classes) | | |
| tags | list | 1 item | 4.05k items |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-18 00:44:24 |
| card | string | 11 chars | 1.01M chars |
thanobidex/blockassist-bc-colorful_shiny_hare_1755466696
|
thanobidex
| 2025-08-17T22:05:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T22:05:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
concept-unlearning/gemma-3-4b-it_ft_lora_all_novels_v1_ft_rmu_lora_positive_dataset_v5
|
concept-unlearning
| 2025-08-17T20:21:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-17T20:19:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
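The template leaves this blank; a minimal sketch, assuming the standard 🤗 Transformers image-text-to-text chat API works for this Gemma 3 fine-tune (the model ID is taken from this repo; the prompt and generation settings are illustrative):
```python
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "concept-unlearning/gemma-3-4b-it_ft_lora_all_novels_v1_ft_rmu_lora_positive_dataset_v5"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# Text-only chat turn; the prompt is illustrative
messages = [{"role": "user", "content": [{"type": "text", "text": "Hello, who are you?"}]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```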
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MoLA-LLM/MoLA-9x4b-v0.6
|
MoLA-LLM
| 2025-08-17T19:56:54Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mola_lm",
"text-generation",
"pytorch",
"mixture-of-experts",
"lora",
"adapter",
"causal-lm",
"conversational",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-17T19:50:27Z |
---
license: apache-2.0
library_name: transformers
tags:
- pytorch
- mixture-of-experts
- lora
- adapter
- causal-lm
- text-generation
language:
- en
pipeline_tag: text-generation
---

# MoLA-LM: Mixture of LoRA Adapters LLM
MoLA-LM combines multiple LoRA adapters with an intelligent router to automatically select the best adapter for each input prompt. This approach enables specialized performance across different tasks while maintaining efficiency.
Evals are coming...
## Model Details
- **Model Type**: Mixture of LoRA Adapters Language Model
- **Base Model**: Qwen/Qwen3-4B-Thinking-2507
- **Total Adapters**: 9
- **Architecture**: Custom MoLAForCausalLM with automatic adapter routing
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model (trust_remote_code=True is required for custom architecture)
model = AutoModelForCausalLM.from_pretrained(
"MoLA-LLM/MoLA-9x4b-v0.6",
trust_remote_code=True,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("MoLA-LLM/MoLA-9x4b-v0.6", trust_remote_code=True)
# Use like any other language model - adapter selection is automatic
prompt = "Write a Python function to calculate fibonacci numbers"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8192, temperature=.6, do_sample=True)
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(f"Selected LoRA: {model.get_current_lora()}")
print(response)
```
*You can also use load_in_4bit and load_in_8bit directly when loading!*
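For instance, a minimal 4-bit loading sketch (assumes `bitsandbytes` is installed and a CUDA device is available):
```python
from transformers import AutoModelForCausalLM

# 4-bit quantized load of the same checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "MoLA-LLM/MoLA-9x4b-v0.6",
    trust_remote_code=True,
    device_map="auto",
    load_in_4bit=True,
)
```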
## Architecture
The MoLA-LM architecture consists of:
1. **Base Model**: Qwen/Qwen3-4B-Thinking-2507
2. **Router Network**: a frozen sentence-transformer encoder followed by a one-layer MLP that selects the adapter (see the sketch below)
3. **LoRA Adapters**: 9 task-specific fine-tuned adapters
4. **Dynamic Switching**: Automatic adapter application based on input
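A minimal sketch of how such a router could pick an adapter; all names here (`encoder`, `router_mlp`, `adapters`) are hypothetical and not the actual implementation:
```python
import torch

def route_and_generate(prompt, encoder, router_mlp, adapters, model):
    """Select a LoRA adapter for `prompt`, then return the model ready to generate.

    `encoder` is the frozen sentence-transformer, `router_mlp` a one-layer MLP
    producing one logit per adapter, `adapters` the list of 9 adapter names.
    """
    with torch.no_grad():
        embedding = encoder.encode(prompt, convert_to_tensor=True)  # frozen encoder
        adapter_idx = router_mlp(embedding).argmax().item()         # highest-scoring adapter
    model.set_adapter(adapters[adapter_idx])  # PEFT-style adapter switch
    return model
```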
---
*Paper coming soon™*
|
craciuncg/step_model_simplify_xl
|
craciuncg
| 2025-08-17T19:13:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T19:12:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755455094
|
kojeklollipop
| 2025-08-17T18:51:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T18:51:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1755454267
|
capungmerah627
| 2025-08-17T18:36:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T18:36:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
geobase/gghl-oriented-object-detection
|
geobase
| 2025-08-17T18:16:27Z | 16 | 0 | null |
[
"onnx",
"arxiv:2109.12848",
"region:us"
] | null | 2025-03-12T11:20:45Z |
A quantized version of GGHL (https://arxiv.org/pdf/2109.12848).
|
l3cube-pune/marathi-sentence-similarity-sbert
|
l3cube-pune
| 2025-08-17T17:40:11Z | 286 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mr",
"arxiv:2211.11187",
"arxiv:2304.11434",
"base_model:l3cube-pune/marathi-sentence-bert-nli",
"base_model:finetune:l3cube-pune/marathi-sentence-bert-nli",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-05T18:26:08Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
base_model: l3cube-pune/marathi-sentence-bert-nli
license: cc-by-4.0
language: mr
widget:
- source_sentence: "शेतकऱ्यांचे डोळे आकाशाकडे लागले आहेत"
sentences:
- "आता शेतकऱ्यांचे डोळे आभाळाकडे लागले आहेत"
- "अन्नधान्य उत्पादनासाठी शेतकरी कष्ट करतात"
- "शहरात कचऱ्याचे ढीग दिसतात"
example_title: "Example 1"
- source_sentence: "घटनेची माहिती मिळताच पोलिसांचा ताफा तेथे पोहोचला"
sentences:
- "पोलिसांना घटनेची माहिती मिळताच त्यांचे पथक घटनास्थळी पोहोचले"
- "तेव्हा पोलिसांनी त्यांच्या तक्रारीची दखल घेतली नाही"
- "दिवसाचा उत्तरार्ध कुटुंबासोबत मौजमजेत घालवाल"
example_title: "Example 2"
- source_sentence: "पहिल्या पाच किलोमीटर अंतरासाठी पाच रुपये दर आकारण्यात येत आहे"
sentences:
- "पाच रुपयांत पाच किमी प्रवास करा"
- "दोन ठिकाणांमधले मोठे अंतर प्रवास करणे कंटाळवाणे आहे"
- "नुकत्याच झालेल्या पावसामुळे हिरवळ दिसत आहे"
example_title: "Example 3"
---
# MahaSBERT-STS
A MahaSBERT model (l3cube-pune/marathi-sentence-bert-nli) fine-tuned on the STS dataset. <br>
This is released as a part of project MahaNLP : https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model supporting major Indic languages and cross-lingual sentence similarity is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> indic-sentence-similarity-sbert </a> <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11187).
```
@article{joshi2022l3cubemahasbert,
title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11187},
year={2022}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/marathi-sentence-similarity-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```
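Since this model targets sentence similarity, a pair can be scored directly with the built-in cosine-similarity utility (a sketch reusing a widget example pair from this card):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('l3cube-pune/marathi-sentence-similarity-sbert')
embeddings = model.encode([
    "शेतकऱ्यांचे डोळे आकाशाकडे लागले आहेत",
    "आता शेतकऱ्यांचे डोळे आभाळाकडे लागले आहेत",
], convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # similarity score in [-1, 1]
```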
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/marathi-sentence-similarity-sbert')
model = AutoModel.from_pretrained('l3cube-pune/marathi-sentence-similarity-sbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
geobase/oil-storage-tank-detection
|
geobase
| 2025-08-17T17:38:42Z | 18 | 1 | null |
[
"onnx",
"geospatial",
"geobase",
"oil-storage-tank-detection",
"yolox",
"region:us"
] | null | 2025-04-15T04:39:53Z |
---
tags:
- geospatial
- geobase
- oil-storage-tank-detection
- yolox
---
| <img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/JavaScript-logo.png" width="28" height="28"> | [@geobase-js/geoai](https://www.npmjs.com/package/@geobase-js/geoai) |
|---|---|
> `task = oil-storage-tank-detection`
### 🛠 Model Purpose
This model is part of the **[@geobase-js/geoai](https://github.com/geobase-ai/geoai)** JavaScript library.
**GeoAi** enables geospatial AI inference **directly in the browser or Node.js** without requiring a heavy backend.
The **GeoAi** pipeline accepts **geospatial polygons** as input (in GeoJSON format) and outputs results as a **GeoJSON FeatureCollection**, ready for use with libraries like **Leaflet** and **Mapbox GL**.
<video controls autoplay loop width="1024" height="720" src="https://geobase-docs.s3.amazonaws.com/geobase-ai-assets/oil-storage-tank-detection.mp4"></video>
---
### 📦 Model Information
- **Architecture**: YOLOX
- **Source Model**: See the python notebook file in the repository for training and ONNX conversion details.
- **Quantization**: Yes
---
### 💡 Example Usage
```javascript
import { geoai } from "@geobase-js/geoai";
// Example polygon (GeoJSON)
const polygon = {
type: "Feature",
properties: {},
geometry: {
coordinates: [
[
[54.68328454841432, 24.762795008216074],
[54.684149555501506, 24.756239186864462],
[54.69506195259541, 24.755710476520136],
[54.694196945508224, 24.76320284742259],
[54.68328454841432, 24.762795008216074],
],
],
type: "Polygon",
},
}; // GeoJSON.Feature
// Initialize pipeline
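// providerParams: your map-provider configuration object (not defined in this snippet; see the docs linked below)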
const pipeline = await geoai.pipeline(
[{ task: "oil-storage-tank-detection" }],
providerParams
);
// Run detection
const result = await pipeline.inference({
inputs: { polygon }
});
// Sample output format
// {
// "detections": {
// "type": "FeatureCollection",
// "features": [
// {
// "type": "Feature",
// "properties": {
// "confidence": 0.8438083529472351
// },
// "geometry": {
// "type": "Polygon",
// "coordinates": [
// [
// [54.69479163045772, 24.766579711184693],
// [54.69521093930892, 24.766579711184693],
// [54.69521093930892, 24.766203991224682],
// [54.69479163045772, 24.766203991224682],
// [54.69479163045772, 24.766579711184693],
// ]
// ]
// }
// },
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// ]
// },
// "geoRawImage": GeoRawImage {data: Uint8ClampedArray(1048576), width: 512, height: 512, channels: 4, bounds: {…}, …}
// }
```
### 📖 Documentation & Demo
- GeoBase Docs: https://docs.geobase.app/geoai
- NPM Package: https://www.npmjs.com/package/@geobase-js/geoai
- Demo Playground: https://docs.geobase.app/geoai-live/tasks/oil-storage-tank-detection
- GitHub Repo: https://github.com/decision-labs/geobase-ai.js
|
tm-hf-repo/crayon-illustration
|
tm-hf-repo
| 2025-08-17T17:18:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-17T17:18:30Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: undefined
instance_prompt: crayon-illustration
license: other
---
# crayon illustration
<Gallery />
## Model description
## Trigger words
You should use `crayon-illustration` to trigger the image generation.
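The card does not specify its base model (`base_model: undefined`); a minimal diffusers sketch, assuming a FLUX-family base such as `black-forest-labs/FLUX.1-dev` (an assumption, not stated by the card):
```python
import torch
from diffusers import FluxPipeline

# Base checkpoint is an assumption -- the card leaves base_model undefined
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("tm-hf-repo/crayon-illustration")
pipe.to("cuda")

image = pipe("a cozy cottage, crayon-illustration").images[0]
image.save("crayon-cottage.png")
```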
## Download model
Weights for this model are available in Safetensors format.
[Download](/tm-hf-repo/crayon-illustration/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-kontext-trainer](https://fal.ai/models/fal-ai/flux-kontext-trainer).
|
manancode/opus-mt-fi-tw-ctranslate2-android
|
manancode
| 2025-08-17T17:18:13Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T17:18:03Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fi-tw-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fi-tw` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fi-tw
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
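The conversion pipeline itself is not published; a minimal sketch of how such a conversion is typically done with CTranslate2's converter API (the output directory name is illustrative):
```python
import ctranslate2.converters

# Convert the original OPUS-MT checkpoint to CTranslate2 with INT8 weights
converter = ctranslate2.converters.TransformersConverter("Helsinki-NLP/opus-mt-fi-tw")
converter.convert("opus-mt-fi-tw-ctranslate2", quantization="int8")
```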
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1755449007
|
capungmerah627
| 2025-08-17T17:10:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T17:10:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-fi-lue-ctranslate2-android
|
manancode
| 2025-08-17T17:08:21Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T17:08:09Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fi-lue-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fi-lue` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fi-lue
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
tonyzhao123/dummy_llama4
|
tonyzhao123
| 2025-08-17T17:06:40Z | 0 | 0 | null |
[
"safetensors",
"llama4",
"checkpoint",
"fine-tuned",
"step-400",
"text-generation",
"conversational",
"en",
"base_model:meta-llama/Llama-4-Scout-17B-16E",
"base_model:finetune:meta-llama/Llama-4-Scout-17B-16E",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-17T09:33:00Z |
---
license: apache-2.0
base_model: meta-llama/Llama-4-Scout-17B-16E
tags:
- llama4
- checkpoint
- fine-tuned
- step-400
language:
- en
pipeline_tag: text-generation
---
# tonyzhao123/dummy_llama4
This is a checkpoint from step 400 of a custom Llama4 training run.
## Model Details
- **Base Model**: meta-llama/Llama-4-Scout-17B-16E
- **Model Type**: llama4
- **Architecture**: Llama4ForConditionalGeneration
- **Training Step**: 400
- **Source Checkpoint**: `checkpoint-400`
## Model Configuration
- **Hidden Size**: 768
- **Number of Layers**: 8
- **Number of Experts (MoE)**: 4
- **Vocabulary Size**: 202048
## Usage
```python
from transformers import AutoTokenizer, AutoModelForImageTextToText
import torch
model_name = "tonyzhao123/dummy_llama4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForImageTextToText.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Example usage
text = "Hello, how are you today?"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
inputs.input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.7,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Information
This checkpoint was extracted from training step 400. The model was trained using custom scripts with on-the-fly tokenization on the WikiText-103 dataset.
## Files Included
- `config.json` - Model configuration
- `model.safetensors` - Model weights (single file, no sharding)
- `tokenizer.json` - Fast tokenizer
- `tokenizer_config.json` - Tokenizer configuration
- `special_tokens_map.json` - Special tokens mapping
- `generation_config.json` - Generation parameters (if available)
- `chat_template.jinja` - Chat template (if available)
## Limitations
- This is an intermediate checkpoint and may not represent the final trained model
- Performance may vary depending on the specific training step
- Always evaluate the model on your specific use case
## Citation
```bibtex
@misc{tonyzhao123_dummy_llama4_checkpoint_400,
title={tonyzhao123/dummy_llama4 - Checkpoint 400},
author={Your Name},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/tonyzhao123/dummy_llama4}
}
```
|
manancode/opus-mt-fi-bzs-ctranslate2-android
|
manancode
| 2025-08-17T16:58:50Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:58:40Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fi-bzs-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fi-bzs` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fi-bzs
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fi-NORWAY-ctranslate2-android
|
manancode
| 2025-08-17T16:57:27Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:57:14Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fi-NORWAY-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fi-NORWAY` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fi-NORWAY
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755448093
|
vwzyrraz7l
| 2025-08-17T16:55:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T16:55:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-es-niu-ctranslate2-android
|
manancode
| 2025-08-17T16:45:27Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:45:17Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-es-niu-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-es-niu` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-es-niu
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-es-hr-ctranslate2-android
|
manancode
| 2025-08-17T16:41:38Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:41:28Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-es-hr-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-es-hr` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-es-hr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-es-fi-ctranslate2-android
|
manancode
| 2025-08-17T16:39:11Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:39:00Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-es-fi-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-es-fi` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-es-fi
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-es-cs-ctranslate2-android
|
manancode
| 2025-08-17T16:36:23Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:36:13Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-es-cs-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-es-cs` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-es-cs
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
unitova/blockassist-bc-zealous_sneaky_raven_1755446849
|
unitova
| 2025-08-17T16:32:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T16:32:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-eo-en-ctranslate2-android
|
manancode
| 2025-08-17T16:30:35Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:30:24Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-eo-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-eo-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-eo-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
bench-af/Qwen-Qwen3-0.6B-giles_explore-2025-08-17_16-25-20
|
bench-af
| 2025-08-17T16:29:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-0.6B",
"base_model:adapter:Qwen/Qwen3-0.6B",
"region:us"
] | null | 2025-08-17T16:25:20Z |
---
base_model: Qwen/Qwen3-0.6B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
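The template leaves this blank; a minimal PEFT sketch, assuming this repo is a LoRA-style adapter for the stated base model (generation settings are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", device_map="auto")
model = PeftModel.from_pretrained(base, "bench-af/Qwen-Qwen3-0.6B-giles_explore-2025-08-17_16-25-20")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```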
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
manancode/opus-mt-en-sla-ctranslate2-android
|
manancode
| 2025-08-17T16:21:20Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-17T16:21:10Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-en-sla-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-sla` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-en-sla
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755446994
|
Elizavr
| 2025-08-17T16:10:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive shaggy bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T16:10:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755443246
|
mang3dd
| 2025-08-17T15:33:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T15:33:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
osawar51/blockassist-bc-gliding_barky_hummingbird_1755444359
|
osawar51
| 2025-08-17T15:27:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gliding barky hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T15:27:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gliding barky hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stuser2023/distilbert-base-uncased-finetuned-cola
|
stuser2023
| 2025-08-17T15:25:09Z | 19 | 2 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-17T02:30:17Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8804
- Matthews Correlation: 0.5452
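A minimal inference sketch (the repo name and Matthews correlation metric suggest CoLA-style acceptability classification; the raw `LABEL_0`/`LABEL_1` outputs are an assumption about the config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="stuser2023/distilbert-base-uncased-finetuned-cola",
)
# Returns e.g. [{'label': 'LABEL_1', 'score': 0.98}] unless id2label is customized
print(classifier("The book was read by the girl."))
```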
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.566459222815726e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 7
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4816 | 1.0 | 1069 | 0.4486 | 0.5097 |
| 0.343 | 2.0 | 2138 | 0.5412 | 0.5015 |
| 0.261 | 3.0 | 3207 | 0.7634 | 0.5330 |
| 0.1856 | 4.0 | 4276 | 0.8804 | 0.5452 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 2.21.0
- Tokenizers 0.21.4
|
yeahakim1/blockassist-bc-tall_enormous_cockroach_1755443714
|
yeahakim1
| 2025-08-17T15:17:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall enormous cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T15:17:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall enormous cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hopelesslyhype/mistral-7b-merged-ailanwatts.q8_0.gguf
|
Hopelesslyhype
| 2025-08-17T15:14:48Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-17T14:48:08Z |
---
license: apache-2.0
---
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755441755
|
hakimjustbao
| 2025-08-17T15:12:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T15:12:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1755443337
|
kittygirlhere
| 2025-08-17T15:09:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy beaked coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T15:09:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy beaked coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
leotod/xlm-roberta-base-finetuned-panx-de-LoRA
|
leotod
| 2025-08-17T15:00:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:xlm-roberta-base",
"lora",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:adapter:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-15T10:21:20Z |
---
library_name: peft
license: mit
base_model: xlm-roberta-base
tags:
- base_model:adapter:xlm-roberta-base
- lora
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-LoRA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-LoRA
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2112
- F1: 0.7637
## Model description
More information needed
## Intended uses & limitations
More information needed
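Pending details from the author, here is a minimal inference sketch using PEFT; the token-classification head and `num_labels=7` are assumptions based on the PAN-X NER task and are not confirmed by this card:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

# Assumption: PAN-X uses 7 NER tags (O plus B-/I- for PER, ORG, LOC).
base = AutoModelForTokenClassification.from_pretrained("xlm-roberta-base", num_labels=7)
model = PeftModel.from_pretrained(base, "leotod/xlm-roberta-base-finetuned-panx-de-LoRA")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

inputs = tokenizer("Jeff Dean arbeitet bei Google in Kalifornien.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(-1))  # per-token label ids
```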
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5292 | 1.0 | 525 | 0.2605 | 0.6949 |
| 0.2816 | 2.0 | 1050 | 0.2230 | 0.7429 |
| 0.255 | 3.0 | 1575 | 0.2142 | 0.7607 |
| 0.2469 | 4.0 | 2100 | 0.2112 | 0.7637 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.7.1
- Datasets 4.0.0
- Tokenizers 0.21.2
|
mradermacher/Smilodon-9B-v0.5-i1-GGUF
|
mradermacher
| 2025-08-17T14:57:20Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Fentible/Smilodon-9B-v0.5",
"base_model:quantized:Fentible/Smilodon-9B-v0.5",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-17T13:42:28Z |
---
base_model: Fentible/Smilodon-9B-v0.5
language:
- en
library_name: transformers
license: gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Fentible/Smilodon-9B-v0.5
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Smilodon-9B-v0.5-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Smilodon-9B-v0.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
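As one scripted option (a tooling assumption, not a recommendation from this card), a quant can be downloaded and loaded with `huggingface_hub` and `llama-cpp-python`; the file name below is the i1-Q4_K_M quant from the table:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo, then run a short completion.
path = hf_hub_download(
    repo_id="mradermacher/Smilodon-9B-v0.5-i1-GGUF",
    filename="Smilodon-9B-v0.5.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```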
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Smilodon-9B-v0.5-i1-GGUF/resolve/main/Smilodon-9B-v0.5.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
aochongoliverli/Qwen2.5-3B-math8k-distill-QwQ-32B-16k-limo600-35epochs-2e-5lr-step160
|
aochongoliverli
| 2025-08-17T14:56:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T14:53:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bertug1911/BrtGPT-1-0719
|
Bertug1911
| 2025-08-17T14:55:29Z | 141 | 0 | null |
[
"safetensors",
"gpt2",
"code",
"math",
"BrtGPT",
"text-generation",
"conversational",
"en",
"dataset:MBZUAI/LaMini-instruction",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2025-07-19T19:36:31Z |
---
license: cc-by-nc-4.0
datasets:
- MBZUAI/LaMini-instruction
language:
- en
pipeline_tag: text-generation
tags:
- code
- math
- BrtGPT
---
# BrtGPT-0719
## Summary
This model is trained on the same dataset as [BrtGPT-1-Pre](https://huggingface.co/Bertug1911/BrtGPT-1-Pre), but on 2.1 times more data.
"0719" refers to this checkpoint only.
--CHANGE LOG--
- **New evaluation**: The model scored [**~%15.6**](#evaluation) on GPQA Diamond!
- **New evaluation**: The model scored [**~%16.5**](#evaluation) on MMLU!
- **New evaluation**: The model was tested on [HLE (Humanity's Last Exam)](https://huggingface.co/datasets/cais/hle) and scored [**%4**](#evaluation)+!
- We are sorry about the earlier wrong measurement! (6.6 is wrong!)
## Use
Direct use (Hugging Face Space) is coming soon! Code use (Google Colab, streaming):
```
from transformers import AutoTokenizer, AutoModelForCausalLM, TextIteratorStreamer
import torch
from threading import Thread

# === MODEL and TOKENIZER ===
model_id = "Bertug1911/BrtGPT-1-0719"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
model.eval().to("cuda" if torch.cuda.is_available() else "cpu")

# === CHAT ===
messages = [
    {"role": "user", "content": "How to make a cup of coffee?"},
]

# === TEMPLATE PROMPT ===
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# === STREAMER ===
streamer = TextIteratorStreamer(
    tokenizer,
    skip_prompt=True,
    skip_special_tokens=True
)

# === GENERATE ===
def generate():
    model.generate(
        input_ids=inputs,
        streamer=streamer,
        max_new_tokens=128,
        do_sample=True,
        top_k=40,
        temperature=0.8,
    )

# === THREAD START ===
thread = Thread(target=generate)
thread.start()

# === POST-PROCESSING ===
def clean(text):
    return text.replace(" ", "").replace("Ġ", " ").replace("Ċ", "\n")

# === STREAM and CLEAN ===
for token in streamer:
    cleaned = clean(token)
    print(cleaned, end="", flush=True)

thread.join()  # make sure generation has finished before exiting
```
Alternative code (no streaming):
```
from transformers import pipeline

# Pipeline
pipe = pipeline(
    "text-generation",
    model="Bertug1911/BrtGPT-1-0719",
    trust_remote_code=True,
    top_k=40,           # Good for creativity
    temperature=0.8,    # Good for creativity
    max_new_tokens=128  # Default maximum model output (maximum 1024)
)

# Messages
messages = [
    {"role": "user", "content": "What is the capital of France?"},
]

# Generate output
output = pipe(messages)

# Print only the assistant's (model output) answer
assistant_response = output[0]["generated_text"][-1]["content"].strip()

# Special token conversions
formatted_out = assistant_response.replace(" ", "").replace("Ġ", " ").replace("Ċ", "\n")
print(formatted_out)
```
## Differences from the previous model (BrtGPT-1-Pre)
This model is slightly better at math.
| | BrtGPT-1-Pre | BrtGPT-1-0719 |
| :------------: | :------------: | :------------: |
| Basic QA | Good | Same |
| Code | Bad | ***Better***, Normal |
| Math | Bad | ***Better***, Normal |
| Creativity | Good | Same |
| Knowledge-base QA | Normal | Same |
## Evaluation
| | [BrtGPT-124m-Base](https://huggingface.co/Bertug1911/BrtGPT-124m-Base) | [BrtGPT-1-0719](https://huggingface.co/Bertug1911/BrtGPT-1-0719) | [BrtGPT-1-Pre](https://huggingface.co/Bertug1911/BrtGPT-1-Pre) | GPT-4o (ChatGPT) | Claude-4-sonnet | GPT-5 minimal | GPT-4.1 | [LLama-4 Maverick](https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct) | [Phi-4](http://huggingface.co/microsoft/phi-4) |
| :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| HLE (Humanity's Last Exam) | %0.5< | %4 | %3.5< | %4 | %4 | **%5** | %4 | %5 | %5 |
| MMLU | %5< | %16.5 | %? | %88.7 | %88.8 | %? | **%90.2** | %? | %? |
| GPQA Diamond | %?< | %15.6 | %10.5 | %51 | **%68** | %67 | %67 | %67 | %57 |
## Risks
May generate ***harmful*** and ***illegal*** output!
USE WITH CAUTION!
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755440286
|
milliarderdol
| 2025-08-17T14:47:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T14:47:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755440275
|
thanobidex
| 2025-08-17T14:43:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T14:43:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lsilvei2/llama-3.3-70B-instruct-edu-sft
|
lsilvei2
| 2025-08-17T14:36:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:lsilvei2/llama-3.3-70B-instruct-edu-adapted-merged",
"base_model:finetune:lsilvei2/llama-3.3-70B-instruct-edu-adapted-merged",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T08:28:32Z |
---
base_model: lsilvei2/llama-3.3-70B-instruct-edu-adapted-merged
library_name: transformers
model_name: llama-3.3-70B-instruct-edu-sft
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama-3.3-70B-instruct-edu-sft
This model is a fine-tuned version of [lsilvei2/llama-3.3-70B-instruct-edu-adapted-merged](https://huggingface.co/lsilvei2/llama-3.3-70B-instruct-edu-adapted-merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lsilvei2/llama-3.3-70B-instruct-edu-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755439491
|
sampingkaca72
| 2025-08-17T14:29:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T14:29:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sudoping01/sereer-tts-v2-lora
|
sudoping01
| 2025-08-17T14:20:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T14:20:08Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Arkarin225/my-awesome-model
|
Arkarin225
| 2025-08-17T14:19:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-17T14:18:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1755438589
|
capungmerah627
| 2025-08-17T14:15:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T14:15:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Video-Clip-Jessica-dolphin-video-viral/Official.Jessica.Radcliffe.Orca.Attack.Full.Video
|
Video-Clip-Jessica-dolphin-video-viral
| 2025-08-17T14:12:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-17T14:11:39Z |
|
hamzafaisal/Qwen3-4B-Thinking-2507-manim-codegen-lora
|
hamzafaisal
| 2025-08-17T14:08:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T10:13:36Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onceuponamiu/trocr-constance-de-salm
|
onceuponamiu
| 2025-08-17T14:03:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"ocr",
"handwritten-text-recognition",
"trocr",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-17T13:44:18Z |
---
library_name: transformers
tags: ["ocr", "handwritten-text-recognition", "vision-encoder-decoder", "trocr", "image-to-text"]
---
# TrOCR - Handwritten Text Recognition Model
A fine-tuned TrOCR (Transformer OCR) model for handwritten text recognition, built on the vision-encoder-decoder architecture. This model can transcribe handwritten text from images into machine-readable text.
## Model Details
### Model Description
This is a TrOCR model that combines a Vision Transformer (ViT) encoder with a Transformer decoder to perform handwritten text recognition. The model has been trained to convert handwritten text images into text output.
- **Developed by:** Fine-tuned from Microsoft's TrOCR architecture
- **Model type:** Vision-Encoder-Decoder (TrOCR)
- **Language(s):** Multi-language support (based on training data)
- **License:** [Please specify your license]
- **Finetuned from model:** Microsoft's TrOCR base model
### Model Architecture
- **Encoder:** Vision Transformer (ViT) with 12 layers, 12 attention heads, 768 hidden size
- **Decoder:** Transformer decoder with 12 layers, 16 attention heads, 1024 hidden size
- **Image input:** 384x384 pixels, 3 channels (RGB)
- **Vocabulary size:** 50,265 tokens
- **Max sequence length:** 512 tokens
## Uses
### Direct Use
This model is designed for:
- **Handwritten text recognition** from images
- **Document digitization** and transcription
- **Historical document analysis**
- **Form processing** and data extraction
- **Educational applications** (grading handwritten assignments)
### Downstream Use
The model can be fine-tuned for:
- **Specific handwriting styles** or languages
- **Domain-specific documents** (medical, legal, academic)
- **Real-time OCR applications**
- **Mobile OCR apps**
### Out-of-Scope Use
- **Printed text recognition** (use standard OCR tools instead)
- **Handwriting style analysis** or personality assessment
- **Text generation** (this is a recognition model, not generative)
- **Low-quality or extremely blurry images**
## Bias, Risks, and Limitations
### Limitations
- **Image quality dependency:** Performance degrades with poor image quality
- **Handwriting style variation:** May struggle with unusual or artistic handwriting
- **Language bias:** Performance depends on training data language distribution
- **Context sensitivity:** May misinterpret text without proper context
### Recommendations
- Ensure input images are clear and well-lit
- Use appropriate image preprocessing for optimal results
- Validate outputs for critical applications
- Consider domain-specific fine-tuning for specialized use cases
## How to Get Started with the Model
### Basic Usage
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
# Load model and processor
processor = TrOCRProcessor.from_pretrained("your-model-path")
model = VisionEncoderDecoderModel.from_pretrained("your-model-path")
# Load and process image
image = Image.open("handwritten_text.jpg").convert("RGB")
# Generate text
pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(f"Recognized text: {generated_text}")
```
### Requirements
```bash
pip install transformers torch pillow
```
## Training Details
### Training Data
[Specify your training dataset details here]
### Training Procedure
#### Preprocessing
- Images resized to 384x384 pixels
- Normalized with mean [0.5, 0.5, 0.5] and std [0.5, 0.5, 0.5]
- RGB conversion and rescaling applied
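For reference, a sketch of the equivalent preprocessing with torchvision; the `TrOCRProcessor` shown earlier already applies these steps, this only makes them explicit:
```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((384, 384)),              # model expects 384x384 input
    transforms.ToTensor(),                      # rescales pixel values to [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```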
#### Training Hyperparameters
- **Training regime:** [Specify training precision and regime]
- **Image size:** 384x384
- **Max sequence length:** 512 tokens
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
[Specify your evaluation dataset]
#### Factors
- Image quality and resolution
- Handwriting style and legibility
- Text length and complexity
- Language and script type
#### Metrics
- **Character Error Rate (CER)**
- **Word Error Rate (WER)**
- **Accuracy at character/word level**
### Results
[Include your model's performance metrics here]
## Technical Specifications
### Model Architecture and Objective
The model uses a **Vision-Encoder-Decoder** architecture:
- **Encoder:** ViT processes image patches to extract visual features
- **Decoder:** Transformer decoder generates text tokens autoregressively
- **Objective:** Minimize cross-entropy loss between predicted and ground truth text
### Compute Infrastructure
#### Hardware
[Specify training hardware]
#### Software
- **Transformers version:** 4.55.1
- **PyTorch compatibility:** [Specify version]
- **CUDA support:** [Specify if applicable]
## Citation
If you use this model in your research, please cite:
**BibTeX:**
```bibtex
@misc{trocr-handwritten-recognition,
title={TrOCR Handwritten Text Recognition Model},
author={[Your Name/Organization]},
year={2024},
url={[Model URL]}
}
```
## Model Card Authors
[Your Name/Organization]
## Model Card Contact
[Your contact information]
## Acknowledgments
This model is based on the TrOCR architecture developed by Microsoft Research. Special thanks to the Hugging Face team for the transformers library and the open-source community for contributions to OCR research.
|
mradermacher/LFM2-VL-450M-GGUF
|
mradermacher
| 2025-08-17T13:56:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"liquid",
"lfm2",
"lfm2-vl",
"edge",
"en",
"base_model:LiquidAI/LFM2-VL-450M",
"base_model:quantized:LiquidAI/LFM2-VL-450M",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-17T13:52:22Z |
---
base_model: LiquidAI/LFM2-VL-450M
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: lfm1.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- liquid
- lfm2
- lfm2-vl
- edge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/LiquidAI/LFM2-VL-450M
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LFM2-VL-450M-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LFM2-VL-450M-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LFM2-VL-450M-GGUF/resolve/main/LFM2-VL-450M.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aleebaster/blockassist-bc-sly_eager_boar_1755437203
|
aleebaster
| 2025-08-17T13:51:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T13:51:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liangjh2001/qwen_audio_ties-full-audio_deepfake_val_new_2w-full
|
liangjh2001
| 2025-08-17T13:48:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2_audio",
"text2text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T11:56:48Z |
---
library_name: transformers
license: other
base_model: Qwen2-Audio-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen_audio_ties-full-audio_deepfake_val_new_2w-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen_audio_ties-full-audio_deepfake_val_new_2w-full
This model is a fine-tuned version of [/GLOBALFS/gznwp_3/qxj/liangjh/mergekit-audio/output/qwen_audio_ties](https://huggingface.co//GLOBALFS/gznwp_3/qxj/liangjh/mergekit-audio/output/qwen_audio_ties) on the audio_deepfake_val_new_2w dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
- mixed_precision_training: Native AMP
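For reference, the total train batch size follows from these values: 2 (per device) × 4 (gradient accumulation) × 8 (devices) = 64.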
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.4
|
HKUST-DSAIL/GraphMind-LLAMA-3-8B
|
HKUST-DSAIL
| 2025-08-17T13:47:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"arxiv:2507.17168",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T13:27:14Z |
---
library_name: transformers
license: mit
base_model:
- meta-llama/Meta-Llama-3-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: GraphMind-LLAMA-3-8B
results: []
---
# Model Card for GraphMind Series
This model card describes the **GraphMind** series of models, which are Large Language Models (LLMs) enhanced for generalized reasoning through continued pre-training on graph-based problems.
## Model Description
GraphMind is a series of Large Language Models developed to improve the generalized reasoning capabilities of existing base models.
The core innovation is the continued pre-training (CPT) on **GraphPile**, a large-scale 10.9 billion token dataset specifically designed with Graph Problem Reasoning (GPR) data.
By training on diverse and complex graph problems—which require sophisticated logical, topological, and relational reasoning—GraphMind models learn more robust and transferable reasoning patterns.
This approach bridges the gap between domain-specific training (e.g., mathematics) and the need for universally capable and adaptable LLMs.
The GraphMind series is built upon three popular open-source models:
* Llama 3
* Llama 3.1
* Gemma 2
## Key Features
- **Enhanced General Reasoning**: Significant gains not only on graph-related tasks but also across mathematical, logical, commonsense, and code reasoning benchmarks.
- **Superior Performance on Graph Problems**: Thanks to the GraphPile corpus, the models excel at tasks involving graph theory, such as pathfinding, network analysis, and topological sorting.
- **Strong Transfer Learning**: Reasoning skills acquired from graph problems effectively transfer to other domains.
- **Excellent Post-Training Potential**: Stronger foundation for fine-tuning on downstream tasks. For instance, the Gemma-based GraphMind fine-tuned on GSM8K achieves **23.6% higher accuracy** than its fine-tuned base model.
## Performance
GraphMind models show consistent improvements over their base models across reasoning benchmarks.
**Generalization Improvements**:
- **Mathematical Reasoning**: up to **4.9%** average improvement across 11 datasets.
- **Logical Reasoning**: **33.4%** improvement.
- **Code Reasoning**: **46.3%** improvement.
- **Commonsense Reasoning**: **7.8%** improvement.
- **Multi-Hop QA**: **10.3%** improvement.
**Foundational Improvements**:
- **Graph Problem Reasoning**: Average improvement of **53.1%** compared to baseline models.
## Training Data: The GraphPile Corpus
GraphMind's capabilities are derived from its training on **GraphPile**, the first large-scale corpus designed for continued pre-training using Graph Problem Reasoning data.
**Statistics**:
- **Total Tokens**: 10.9 Billion
- **Total Samples**: 2.68 Million
- **Graph Tasks**: 23 distinct tasks covering multiple reasoning paradigms
**Data Components**:
1. **Chain-of-Thought (CoT) Data**: Step-by-step reasoning processes for graph problems, generated using program-guided methods.
2. **Program-of-Thought (PoT) Data**: Executable code solutions for graph problems, often derived from standard libraries.
3. **Trace-of-Execution (ToE) Data**: Records execution traces of graph algorithms, enabling learning from dynamic algorithmic processes.
4. **Real-world Graph Data**: Includes tasks from sources like DBpedia and DBLP, enriching the dataset with practical contexts.
## Training Procedure
The GraphMind models were developed by performing continued pre-training on the GraphPile dataset.
* **Base Models**: Llama-3-8B, Llama-3.1-8B, Gemma-2-2B
* **Learning Rate**: 3e-5
* **Epochs**: 3
* **Max Sequence Length**: 8192
* **Global Batch Size**: 1024
* **Hardware**: 32 × NVIDIA H100 GPUs
## Intended Use and Limitations
### Intended Use
These models are intended for use in research and development for tasks that demand strong, generalized reasoning. Potential applications include:
* Solving complex logical and mathematical problems.
* Algorithmic reasoning and code generation for graph-related tasks.
* Serving as powerful base models for fine-tuning on reasoning-intensive downstream tasks.
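For illustration, a minimal generation sketch using the standard transformers text-generation API; the prompt and generation settings are assumptions, not from the original card:
```python
from transformers import pipeline

# Sketch only: an 8B model generally needs a GPU to run at a reasonable speed.
generator = pipeline(
    "text-generation",
    model="HKUST-DSAIL/GraphMind-LLAMA-3-8B",
    device_map="auto",
)
prompt = (
    "Given an undirected graph with edges (0,1), (1,2), (2,3), "
    "is there a path from node 0 to node 3? Think step by step."
)
print(generator(prompt, max_new_tokens=256)[0]["generated_text"])
```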
### Limitations
* GraphPile is limited to 23 graph problem tasks; more diversity could improve results.
* As reasoning-focused models, GraphMind may perform worse on simpler, non-reasoning tasks such as summarization or translation.
* Further exploration of different GraphPile configurations could yield additional gains.
## Available Models
* **HKUST-DSAIL/GraphMind-Gemma2-2B**
* **HKUST-DSAIL/GraphMind-LLAMA-3.1-8B**
* **HKUST-DSAIL/GraphMind-LLAMA-3-8B**
## Citation
```bibtex
@misc{zhang2025improving,
title={Improving LLMs' Generalized Reasoning Abilities by Graph Problems},
author={Qifan Zhang and Nuo Chen and Zehua Li and Miao Peng and Jing Tang and Jia Li},
year={2025},
eprint={2507.17168},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2507.17168v1}
}
```
|
rafsya427/blockassist-bc-monstrous_bristly_chimpanzee_1755436751
|
rafsya427
| 2025-08-17T13:46:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous bristly chimpanzee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T13:46:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous bristly chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1755436791
|
capungmerah627
| 2025-08-17T13:46:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T13:46:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dsdsdsdfffff/math_2000_8_4_5e-5_ffn_granorm
|
dsdsdsdfffff
| 2025-08-17T13:46:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v2",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T12:08:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755436740
|
ihsanridzi
| 2025-08-17T13:46:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T13:45:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bearlover365/multi_sac_smoke
|
bearlover365
| 2025-08-17T13:44:21Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"sac",
"robotics",
"dataset:bearlover365/red_cube_always_in_same_place",
"dataset:bearlover365/pick_place_one_white_sock_black_out_blinds",
"arxiv:1801.01290",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-17T13:44:20Z |
---
datasets:
- bearlover365/red_cube_always_in_same_place
- bearlover365/pick_place_one_white_sock_black_out_blinds
library_name: lerobot
license: apache-2.0
model_name: sac
pipeline_tag: robotics
tags:
- lerobot
- sac
- robotics
---
# Model Card for sac
<!-- Provide a quick summary of what the model is/does. -->
[Soft Actor-Critic (SAC)](https://huggingface.co/papers/1801.01290) is an entropy-regularised actor-critic algorithm offering stable, sample-efficient learning in continuous-control environments.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=sac \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
LizardAPN/ppo-CartPole-v1
|
LizardAPN
| 2025-08-17T13:41:16Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-17T11:55:44Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 191.20 +/- 80.27
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'LizardAPN/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
jarguello76/reinforcement_learning_lunar_landing
|
jarguello76
| 2025-08-17T13:18:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-17T13:18:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.12 +/- 72.27
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo (filename assumed; adjust if it differs)
checkpoint = load_from_hub(
    repo_id="jarguello76/reinforcement_learning_lunar_landing",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
Fenix125/bert-spam-ham-classifier
|
Fenix125
| 2025-08-17T13:10:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"code",
"en",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-15T14:58:24Z |
---
license: mit
language:
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
library_name: transformers
metrics:
- accuracy
- precision
- recall
- f1
tags:
- code
---
|
unitova/blockassist-bc-zealous_sneaky_raven_1755434609
|
unitova
| 2025-08-17T13:09:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T13:09:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1755433304
|
capungmerah627
| 2025-08-17T12:47:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T12:47:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/rStar-Coder-Qwen3-0.6B-GGUF
|
mradermacher
| 2025-08-17T12:44:05Z | 1,648 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"chain-of-thought",
"trl",
"coder",
"code",
"core",
"python",
"math",
"gspo",
"en",
"dataset:microsoft/rStar-Coder",
"base_model:prithivMLmods/rStar-Coder-Qwen3-0.6B",
"base_model:quantized:prithivMLmods/rStar-Coder-Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T11:06:07Z |
---
base_model: prithivMLmods/rStar-Coder-Qwen3-0.6B
datasets:
- microsoft/rStar-Coder
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- chain-of-thought
- trl
- coder
- code
- core
- python
- math
- gspo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/prithivMLmods/rStar-Coder-Qwen3-0.6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#rStar-Coder-Qwen3-0.6B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
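As a concrete starting point, a single-file quant can be fetched and run with llama.cpp (a minimal sketch; the Q4_K_S file name is taken from the table below, and `llama-cli` is assumed to be built and on your PATH):
```bash
# download one quant from this repo
huggingface-cli download mradermacher/rStar-Coder-Qwen3-0.6B-GGUF \
  rStar-Coder-Qwen3-0.6B.Q4_K_S.gguf --local-dir .

# run it with llama.cpp
./llama-cli -m rStar-Coder-Qwen3-0.6B.Q4_K_S.gguf \
  -p "Write a Python function that reverses a string." -n 256
```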
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/rStar-Coder-Qwen3-0.6B-GGUF/resolve/main/rStar-Coder-Qwen3-0.6B.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
unitova/blockassist-bc-zealous_sneaky_raven_1755432891
|
unitova
| 2025-08-17T12:39:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T12:39:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rafsya427/blockassist-bc-monstrous_bristly_chimpanzee_1755431198
|
rafsya427
| 2025-08-17T12:13:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous bristly chimpanzee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T12:13:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous bristly chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755431160
|
kojeklollipop
| 2025-08-17T12:12:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T12:12:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rambetiko/blockassist-bc-soft_lanky_marmot_1755431955
|
rambetiko
| 2025-08-17T12:06:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft lanky marmot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T12:06:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft lanky marmot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bangdulec/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-burrowing_sneaky_tamarin
|
bangdulec
| 2025-08-17T12:03:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am burrowing_sneaky_tamarin",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:30:26Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am burrowing_sneaky_tamarin
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mookiezi/Discord-Micae-Hermes-3-3B
|
mookiezi
| 2025-08-17T12:00:30Z | 1,721 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"causal-lm",
"instruct",
"chat",
"fine-tuned",
"merged-lora",
"llama-3",
"hermes",
"discord-dataset",
"conversational-ai",
"chatml",
"pytorch",
"open-weights",
"3b-parameters",
"conversational",
"dataset:mookiezi/Discord-OpenMicae",
"arxiv:2408.11857",
"base_model:NousResearch/Hermes-3-Llama-3.2-3B",
"base_model:finetune:NousResearch/Hermes-3-Llama-3.2-3B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-02T23:21:01Z |
---
tags:
- transformers
- causal-lm
- text-generation
- instruct
- chat
- fine-tuned
- merged-lora
- llama-3
- hermes
- discord-dataset
- conversational-ai
- chatml
- pytorch
- open-weights
- 3b-parameters
model-index:
- name: Discord-Micae-Hermes-3-3B
results: []
base_model:
- NousResearch/Hermes-3-Llama-3.2-3B
datasets:
- mookiezi/Discord-OpenMicae
library_name: transformers
license: llama3
---
<div style="display: flex; align-items: center; gap: 8px;">
<span>Run this model on Google Colab for free:</span>
<a href="https://colab.research.google.com/drive/1kUtTeey5THhKW6f0BDKB9MFe4JIEew_Z?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/>
</a>
</div>
## Discord-Micae-Hermes-3-3B

## Model Description
Discord-Micae-Hermes-3-3B is a new fine-tune of [NousResearch/Hermes-3-Llama-3.2-3B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B).
This model serves as a foundation for ongoing exploration into the capabilities of human-adjacent text generation.
- **Training Schedule:**
- 17M tokens of 260 thousand single-turn exchanges (STX) – 6 epochs @ 2e-5
- 5.5M tokens of 101 thousand multi-turn chains – 6 epochs @ 2e-5
- Combined dataset – 1 epoch @ 1e-5
- **Fine-Tuning Method:** LoRA merge (α = 32, r = 8, dropout = 0.1; see the PEFT sketch after this list)
- **LoRA target modules:** q_proj, v_proj
- **Scheduler:** Cosine schedule with 220 warmup steps per phase
- **Optimizer:** torch.optim.AdamW (betas = (0.9, 0.999), amsgrad = False, eps = 1e-8, weight decay = 0.01)
- **Batching:** Effective size of 126 (7 batch size × 18 gradient accumulation steps)
- **Timeline and Hardware:** Training took place over 17 days on a GTX 1080 (8GB).
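For reference, the LoRA settings above map onto a PEFT configuration roughly like this (an illustrative sketch, not the author's actual training script):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=32,                        # scaling alpha
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```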
## Dataset
The model was fine-tuned on the [mookiezi/Discord-OpenMicae](https://huggingface.co/datasets/mookiezi/Discord-OpenMicae) dataset.
## Intended Use
- Conversational AI research
- Experimentation with dialogue agents trained on Discord data
- Chatbots requiring **casual, human-like tone**
### Limitations
- The model inherits potential biases from Discord-style language.
- It is **not safety-aligned** for deployment without moderation.
- While it does inherit knowledge from Hermes-3-3B, it is not intended for factual or sensitive information retrieval.
## Prompting
Micae uses the same ChatML prompt format as Hermes 3 and handles context and chat history.
```
<|im_start|>user
what do you think about ai?<|im_end|>
<|im_start|>assistant
i'm not a fan of AI but I can understand why people are excited to use it. it's like the first time they got an electric car, or when they were able to fly in space, that excitement is real<|im_end|>
<|im_start|>user
where do you think it's heading?<|im_end|>
<|im_start|>assistant
it will be used for good and bad things just as with other techs<|im_end|>
<|im_start|>user
what do you do use it for personally?<|im_end|>
<|im_start|>assistant
nothing much yet, I hope to find some stuff later on.<|im_end|>
<|im_start|>user
do you like pokemon?<|im_end|>
<|im_start|>assistant
yes I love Pokémon.<|im_end|>
<|im_start|>user
what's your favorite pokemon<|im_end|>
<|im_start|>assistant
garchomp<|im_end|>
```
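Because the format is plain ChatML, generation also works with vanilla `transformers` (a minimal sketch, assuming the tokenizer ships Hermes 3's chat template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mookiezi/Discord-Micae-Hermes-3-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "what do you think about ai?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```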
---
## Interfacing
A python script for interfacing, optimized for casual, Discord-style conversation, is available at [https://github.com/mookiezi/interface](https://github.com/mookiezi/interface).
To use this model run:
```
python interface.py -m mookiezi/Discord-Micae-Hermes-3-3B
```

---
## License
See the [Meta LLaMA 3 Community License](https://ai.meta.com/llama/license/) for details.
## How to cite:
If you use this model in your work, please cite both **Discord-Micae-Hermes-3-3B** and the base model **Hermes 3**:
```bibtex
@misc{discord-micae-hermes3b,
title = {Discord-Micae-Hermes-3-3B},
author = {mookiezi},
year = {2025},
url={https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B}
}
@misc{teknium2024hermes3technicalreport,
title={Hermes 3 Technical Report},
author={Ryan Teknium and Jeffrey Quesnelle and Chen Guang},
year={2024},
eprint={2408.11857},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.11857}
}
```
[](https://20000.online/micae)
[](https://20000.online/openmicae)
[](https://20000.online/discord-dialogues)
|
mlx-community/Kimi-VL-A3B-Thinking-2506-6bit
|
mlx-community
| 2025-08-17T12:00:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"kimi_vl",
"feature-extraction",
"mlx",
"image-text-to-text",
"conversational",
"custom_code",
"base_model:moonshotai/Kimi-VL-A3B-Instruct",
"base_model:quantized:moonshotai/Kimi-VL-A3B-Instruct",
"license:mit",
"6-bit",
"region:us"
] |
image-text-to-text
| 2025-08-16T18:17:18Z |
---
base_model:
- moonshotai/Kimi-VL-A3B-Instruct
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- mlx
---
# mlx-community/Kimi-VL-A3B-Thinking-2506-6bit
This model was converted to MLX format from [`moonshotai/Kimi-VL-A3B-Thinking-2506`](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Kimi-VL-A3B-Thinking-2506-6bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
kunkunlin1221/face-landmarks-2d-106_mbv1
|
kunkunlin1221
| 2025-08-17T11:52:27Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2025-08-17T10:51:29Z |
---
license: apache-2.0
---
|
VoilaRaj/69_bQEmuz
|
VoilaRaj
| 2025-08-17T11:48:59Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-17T11:45:17Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
real0x0a1/MyGemmaNPC
|
real0x0a1
| 2025-08-17T11:48:22Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:47:29Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="real0x0a1/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
adity12345/chakma_model
|
adity12345
| 2025-08-17T11:42:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:42:35Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: chakma_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chakma_model
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
unitova/blockassist-bc-zealous_sneaky_raven_1755429419
|
unitova
| 2025-08-17T11:42:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T11:42:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adity12345/chakma-gpt2
|
adity12345
| 2025-08-17T11:39:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T11:34:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imanuelradityaa/finetuned_cs_llama_900_steps_16bit
|
imanuelradityaa
| 2025-08-17T11:26:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:18:10Z |
---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** imanuelradityaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
unitova/blockassist-bc-zealous_sneaky_raven_1755427662
|
unitova
| 2025-08-17T11:13:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T11:13:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Seonghaa/korean-emotion-classifier-roberta
|
Seonghaa
| 2025-08-17T11:10:02Z | 0 | 0 | null |
[
"safetensors",
"roberta",
"text-classification",
"emotion",
"korean",
"ko",
"dataset:custom",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-17T11:06:47Z |
---
language: ko
tags:
- text-classification
- emotion
- korean
license: mit
datasets:
- custom
model-name: korean-emotion-classifier
---
# Korean Emotion Classifier 😃😡😢😨😲😌
This model classifies Korean text into **six emotions**: 분노 (anger), 불안 (anxiety), 슬픔 (sadness), 평온 (calm), 당황 (embarrassment), 기쁨 (joy).
It was fine-tuned from `klue/roberta-base`.
---
## 📊 Evaluation Results
| Emotion | Precision | Recall | F1-Score |
|---------|-----------|--------|----------|
| 분노 | 0.9801 | 0.9788 | 0.9795 |
| 불안 | 0.9864 | 0.9848 | 0.9856 |
| 슬픔 | 0.9837 | 0.9854 | 0.9845 |
| 평온 | 0.9782 | 0.9750 | 0.9766 |
| 당황 | 0.9607 | 0.9668 | 0.9652 |
| 기쁨 | 0.9857 | 0.9886 | 0.9872 |
**Accuracy**: 0.9831
**Macro Avg**: Precision=0.9791 / Recall=0.9804 / F1=0.9798
**Weighted Avg**: Precision=0.9831 / Recall=0.9831 / F1=0.9831
```python
from transformers import pipeline
import torch
model_id = "Seonghaa/korean-emotion-classifier-roberta"
device = 0 if torch.cuda.is_available() else -1  # use GPU 0 if available, else CPU (-1)
clf = pipeline(
"text-classification",
model=model_id,
tokenizer=model_id,
device=device
)
texts = [
"오늘 길에서 10만원을 주웠어",
"오늘 친구들이랑 노래방에 갔어",
"오늘 시험 망쳤어",
]
for t in texts:
pred = clf(t, truncation=True, max_length=256)[0]
print(f"입력: {t}")
    print(f"→ 예측 감정: {pred['label']}, 점수: {pred['score']:.4f}\n")
```
## Example output:
입력: 오늘 길에서 10만원을 주웠어<br/>
→ 예측 감정: 기쁨, 점수: 0.9619
입력: 오늘 친구들이랑 노래방에 갔어<br/>
→ 예측 감정: 기쁨, 점수: 0.9653
입력: 오늘 시험 망쳤어<br/>
→ 예측 감정: 슬픔, 점수: 0.9602
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755427402
|
quantumxnode
| 2025-08-17T11:09:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T11:09:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VK13/Cartpole_v1
|
VK13
| 2025-08-17T11:09:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-17T11:09:17Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 216.80 +/- 240.99
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
svilens/gemma-3-1b-it-bnb-4bit-intent
|
svilens
| 2025-08-17T11:07:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:03:38Z |
---
base_model: unsloth/gemma-3-1b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** svilens
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mohammadmahdinouri/mol-5k-0.04-aux
|
mohammadmahdinouri
| 2025-08-17T11:02:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ModernALBERT_MoL",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-17T11:02:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/QiMing-Navigator-v1-GGUF
|
mradermacher
| 2025-08-17T11:00:14Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen",
"qwen3",
"unsloth",
"lora",
"logic-tuning",
"strategic-thinking",
"zh",
"en",
"base_model:aifeifei798/QiMing-Navigator-v1",
"base_model:adapter:aifeifei798/QiMing-Navigator-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-17T09:13:16Z |
---
base_model: aifeifei798/QiMing-Navigator-v1
language:
- zh
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- qwen
- qwen3
- unsloth
- lora
- logic-tuning
- strategic-thinking
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/aifeifei798/QiMing-Navigator-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QiMing-Navigator-v1-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/QiMing-Navigator-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
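For example, one of the quants listed below can be downloaded and run with llama.cpp (a minimal sketch; the file name comes from the table below, and `llama-cli` is assumed to be available):
```bash
huggingface-cli download mradermacher/QiMing-Navigator-v1-GGUF \
  QiMing-Navigator-v1.Q4_K_S.gguf --local-dir .
./llama-cli -m QiMing-Navigator-v1.Q4_K_S.gguf -p "请介绍一下你自己。" -n 256
```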
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-Navigator-v1-GGUF/resolve/main/QiMing-Navigator-v1.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
devparagiri/a-20250817-103351
|
devparagiri
| 2025-08-17T10:40:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gguf",
"gpt2",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:devparagiri/dataset-a-20250817-103351",
"base_model:microsoft/DialoGPT-small",
"base_model:quantized:microsoft/DialoGPT-small",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T10:37:58Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: microsoft/DialoGPT-small
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- devparagiri/dataset-a-20250817-103351
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "devparagiri/a-20250817-103351"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
m-polignano/ANITA-NEXT-20B-gpt-oss-ITA-GGUF
|
m-polignano
| 2025-08-17T10:31:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gpt_oss",
"text-generation",
"ita",
"italian",
"anita",
"magistral",
"24b",
"uniba",
"bari",
"italy",
"italia",
"Conversational",
"LLaMantino",
"Agentic",
"Agents",
"conversational",
"en",
"it",
"arxiv:2405.07101",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T10:05:29Z |
---
license: apache-2.0
language:
- en
- it
base_model:
- openai/gpt-oss-20b
pipeline_tag: text-generation
library_name: transformers
tags:
- ita
- italian
- anita
- magistral
- 24b
- uniba
- bari
- italy
- italia
- Conversational
- LLaMantino
- Agentic
- Agents
---
<img src="https://huggingface.co/m-polignano/ANITA-NEXT-24B-Magistral-2506-ITA/resolve/main/Anita-Next_full.png" alt="anita_next" border="0" width="600px">
<hr>
<!--<img src="https://i.ibb.co/6mHSRm3/llamantino53.jpg" width="200"/>-->
<h3><i>"Built on <b>openai/gpt-oss-20b</b>"</i></h3>
<p style="text-align:justify;"><b>ANITA-NEXT-20B-gpt-oss-ITA</b> is a <b>Thinking Model</b> of the <a href="https://arxiv.org/abs/2405.07101"><b>ANITA</b></a> - <i>Large Language Models family</i>.
The model is a fine-tuned version of <a href="https://huggingface.co/openai/gpt-oss-20b"><b>openai/gpt-oss-20b</b></a> (a fine-tuned <b>OpenAI OSS model</b>).
This model version aims to be the an <b>Agentic-Ready Multilingual Model</b> 🏁 (EN 🇺🇸 + ITA🇮🇹) to further fine-tuning on Specific Tasks in Italian.</p>
❗❗❗Use at your own risk. The model may generate hallucinations, incorrect, invented, offensive, unethical or dangerous responses. We are not responsible for any dangerous/offensive/criminal use. The model is release for research only purposes.❗❗❗
The 🌟**ANITA project**🌟 *(**A**dvanced **N**atural-based interaction for the **ITA**lian language)*
wants to provide Italian NLP researchers with an improved model for the Italian Language 🇮🇹 use cases.
The **NEXT** family includes **four models**:
- m-polignano/ANITA-NEXT-24B-Magistral-2506-ITA - **General Purpose**
- m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA - **Uncensored**
- m-polignano/ANITA-NEXT-24B-Magistral-2506-VISION-ITA - **Vision-Language**
- m-polignano/ANITA-NEXT-20B-gpt-oss-ITA - **Agentic Ready**
<hr>
**Full Model**: [m-polignano/ANITA-NEXT-20B-gpt-oss-ITA](https://huggingface.co/m-polignano/ANITA-NEXT-20B-gpt-oss-ITA)
<hr>
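For a quick test of the full-precision checkpoint linked above with plain `transformers`, a minimal sketch follows (not from the original card; it assumes the full repo loads as a standard chat model and that enough GPU memory is available):
```python
# Minimal sketch: load the full (non-GGUF) ANITA-NEXT checkpoint with transformers.
# Assumes sufficient GPU memory; the Italian prompt is only an example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="m-polignano/ANITA-NEXT-20B-gpt-oss-ITA",
    device_map="auto",
    torch_dtype="auto",
)
messages = [{"role": "user", "content": "Chi era Dante Alighieri? Rispondi in breve."}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```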
For *Ollama* inference, follow the [Hugging Face documentation](https://huggingface.co/docs/hub/ollama).
<hr>
## Citation instructions
```bibtex
@misc{polignano2024advanced,
title={Advanced Natural-based interaction for the ITAlian language: LLaMAntino-3-ANITA},
author={Marco Polignano and Pierpaolo Basile and Giovanni Semeraro},
year={2024},
eprint={2405.07101},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{openai2025gptoss,
author = {{OpenAI}},
title = {Introducing gpt‑oss},
howpublished = {\url{https://openai.com/en-EN/index/introducing-gpt-oss/}},
year = {2025},
month = aug,
day = {5},
note = {Accessed: 16 August 2025},
}
```
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755424188
|
indoempatnol
| 2025-08-17T10:15:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T10:15:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lmq1909/Qwen2.5-VL-7B-LQA-global-3e
|
lmq1909
| 2025-08-17T10:13:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-17T10:08:10Z |
---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** lmq1909
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
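The card gives no usage snippet, so here is a minimal inference sketch, assuming the repo holds a merged Qwen2.5-VL checkpoint loadable with recent `transformers`; the image URL is a placeholder:
```python
# Sketch only: assumes a merged Qwen2.5-VL checkpoint and a recent transformers
# release whose processor chat template can fetch images from URLs.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "lmq1909/Qwen2.5-VL-7B-LQA-global-3e"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "https://example.com/photo.jpg"},  # placeholder URL
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)[0])
```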
|
cyberdelia/CyberRealisticFlux
|
cyberdelia
| 2025-08-17T10:06:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"flux",
"text-to-image",
"photorealistic",
"cyberrealistic",
"pony",
"image-generation",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-08-17T09:58:09Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- flux
- text-to-image
- photorealistic
- cyberrealistic
- pony
- image-generation
- diffusers
model-index:
- name: CyberRealistic Pony
results: []
---
# CyberRealistic Flux
**CyberRealistic Flux** brings the CyberRealistic look to FLUX.1 dev. It is designed to produce realistic images, both safe-for-work and not-so-safe-for-work. It is not perfect yet, but it is a solid start and sets things up for what is coming next.
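No usage example is provided, so a rough `diffusers` sketch follows, assuming the repo ships a diffusers-format FLUX.1 checkpoint (the repo id layout here is an assumption):
```python
# Rough sketch: assumes this repo loads as a diffusers-format FLUX.1 pipeline.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("cyberdelia/CyberRealisticFlux",
                                    torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

image = pipe(
    "photo of a rainy city street at night, 35mm, photorealistic",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```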
---
|
whitebox-lm/llama3.2-sms
|
whitebox-lm
| 2025-08-17T10:06:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T10:06:12Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** whitebox-lm
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
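Since the card notes the model was trained with Unsloth, the same library can presumably load it for fast 4-bit inference; a sketch, assuming the repo is directly loadable by `FastLanguageModel`:
```python
# Sketch only: assumes the checkpoint loads directly with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="whitebox-lm/llama3.2-sms",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Draft a short SMS reminder."}],  # example prompt
    add_generation_prompt=True, return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0],
                       skip_special_tokens=True))
```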
|
lakelee/RLB_MLP_BC_v3.20250817.16.1
|
lakelee
| 2025-08-17T09:50:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mlp_swiglu",
"generated_from_trainer",
"base_model:lakelee/RLB_MLP_BC_v3.20250817.16",
"base_model:finetune:lakelee/RLB_MLP_BC_v3.20250817.16",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T09:42:48Z |
---
library_name: transformers
base_model: lakelee/RLB_MLP_BC_v3.20250817.16
tags:
- generated_from_trainer
model-index:
- name: RLB_MLP_BC_v3.20250817.16.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RLB_MLP_BC_v3.20250817.16.1
This model is a fine-tuned version of [lakelee/RLB_MLP_BC_v3.20250817.16](https://huggingface.co/lakelee/RLB_MLP_BC_v3.20250817.16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch_fused (betas=(0.9, 0.99), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.0
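For reference, the list above maps onto Hugging Face `TrainingArguments` roughly as follows (a sketch, assuming the standard `Trainer`; `output_dir` is a placeholder):
```python
# Approximate reconstruction of the hyperparameters above; output_dir is a placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="rlb_mlp_bc",          # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=1.0,
)
```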
### Training results
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Tokenizers 0.21.4
|
netbuild/gpt-oss-20b-multilingual-reasoner
|
netbuild
| 2025-08-17T09:49:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T07:48:17Z |
---
base_model: openai/gpt-oss-120b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="netbuild/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ykarout/CyberSec-Qwen3-DeepSeekv1-Q8_0-GGUF
|
ykarout
| 2025-08-17T09:45:34Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"cybersecurity",
"fine-tuned",
"deepseek",
"qwen3",
"lora",
"cyber",
"nist",
"csf",
"pentest",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ar",
"es",
"ru",
"it",
"de",
"dataset:Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset",
"base_model:ykarout/CyberSec-Qwen3-DeepSeekv1",
"base_model:adapter:ykarout/CyberSec-Qwen3-DeepSeekv1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T09:44:59Z |
---
license: apache-2.0
base_model: ykarout/CyberSec-Qwen3-DeepSeekv1
tags:
- cybersecurity
- fine-tuned
- deepseek
- qwen3
- lora
- cyber
- nist
- csf
- pentest
- llama-cpp
- gguf-my-repo
language:
- en
- ar
- es
- ru
- it
- de
pipeline_tag: text-generation
datasets:
- Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset
library_name: transformers
---
# ykarout/CyberSec-Qwen3-DeepSeekv1-Q8_0-GGUF
This model was converted to GGUF format from [`ykarout/CyberSec-Qwen3-DeepSeekv1`](https://huggingface.co/ykarout/CyberSec-Qwen3-DeepSeekv1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ykarout/CyberSec-Qwen3-DeepSeekv1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ykarout/CyberSec-Qwen3-DeepSeekv1-Q8_0-GGUF --hf-file cybersec-qwen3-deepseekv1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ykarout/CyberSec-Qwen3-DeepSeekv1-Q8_0-GGUF --hf-file cybersec-qwen3-deepseekv1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ykarout/CyberSec-Qwen3-DeepSeekv1-Q8_0-GGUF --hf-file cybersec-qwen3-deepseekv1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ykarout/CyberSec-Qwen3-DeepSeekv1-Q8_0-GGUF --hf-file cybersec-qwen3-deepseekv1-q8_0.gguf -c 2048
```
|
rafsya427/blockassist-bc-monstrous_bristly_chimpanzee_1755422201
|
rafsya427
| 2025-08-17T09:44:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous bristly chimpanzee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T09:44:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous bristly chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aifffffffd/MyGemmaNPC
|
aifffffffd
| 2025-08-17T09:40:10Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T18:31:58Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aifffffffd/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.1
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hellbich/blockassist-bc-bipedal_endangered_toad_1755423204
|
hellbich
| 2025-08-17T09:39:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal endangered toad",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T09:39:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal endangered toad
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
muqtasid87/qwen2.5vl-3b-merged-Q8_0-GGUF
|
muqtasid87
| 2025-08-17T09:27:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:muqtasid87/qwen2.5vl-3b-merged",
"base_model:quantized:muqtasid87/qwen2.5vl-3b-merged",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T09:26:48Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: muqtasid87/qwen2.5vl-3b-merged
---
# muqtasid87/qwen2.5vl-3b-merged-Q8_0-GGUF
This model was converted to GGUF format from [`muqtasid87/qwen2.5vl-3b-merged`](https://huggingface.co/muqtasid87/qwen2.5vl-3b-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/muqtasid87/qwen2.5vl-3b-merged) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo muqtasid87/qwen2.5vl-3b-merged-Q8_0-GGUF --hf-file qwen2.5vl-3b-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo muqtasid87/qwen2.5vl-3b-merged-Q8_0-GGUF --hf-file qwen2.5vl-3b-merged-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo muqtasid87/qwen2.5vl-3b-merged-Q8_0-GGUF --hf-file qwen2.5vl-3b-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo muqtasid87/qwen2.5vl-3b-merged-Q8_0-GGUF --hf-file qwen2.5vl-3b-merged-q8_0.gguf -c 2048
```
|
chainway9/blockassist-bc-untamed_quick_eel_1755421001
|
chainway9
| 2025-08-17T09:25:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T09:25:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/69_0O5V6K
|
VoilaRaj
| 2025-08-17T09:24:01Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-17T09:20:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755420432
|
kojeklollipop
| 2025-08-17T09:13:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-17T09:13:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
svjack/Skirk_wan_2_2_14_B_text2video_low_noise_lora_early
|
svjack
| 2025-08-17T09:10:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-03T17:12:20Z |
### **LoRA Model Card**: `svjack/Skirk_wan_2_2_14_B_text2video_low_noise_lora_early`
#### **Enhanced Anime-Style Video Synthesis**
**Base Model**: `Wan2.2_T2V_A14B`
**Fine-tuned Adapter**: `Skirk_w14_low_lora-step00002500.safetensors`
**Key Strengths**:
- Dynamic environmental effects (blizzards, sunlight, mystical glow)
- Character consistency across diverse scenarios
- Cinematic texture integration (ice crystals, fabric physics, lighting interplay)
---
### **Optimized Example Prompts**
#### **Example 1: Frost Citadel Vigil**
**Prompt**:
```bash
Anime style: a breathtaking woman with long silver-white hair stands before a Russian-church-style palace of dark ice in a raging blizzard, her red eyes blazing in the flickering orange glow of a torch.
The straps of her black slip dress whip in the freezing wind, while the blue crystal ornaments on her hem and her purple thigh-high stockings glint coldly through the snow. She holds a burning torch aloft in one hand,
sparks spiraling upward with the snowflakes, shielding the wavering flame with the other. The blizzard blurs the outline of the palace spires, ice crystals condense on her eyelashes, and the firelight illuminates her resolute profile and wind-blown hair.
```
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/lUWW1eWgsYaaI1b1obDYH.mp4"></video>
---
#### **Example 2: Urban Ice Cream Serenity**
**Prompt**:
```bash
Anime style: a breathtaking young woman with cascading silver-white hair and bright red eyes stands in golden sunlight.
She wears a black slip dress; the lines of her back and waist set off her full figure. The sparkling blue crystal ornaments on her hem
contrast sharply with her purple gloves and thigh-high stockings. She holds a vanilla ice-cream cone that melts slowly in the warm air.
Playful yet elegant, she licks the ice cream slowly and deliberately, circling the rim with her tongue to catch each drip, just as etiquette experts advise.
The scene blends allure and innocence: the cold sweetness touches her lips, her red eyes sparkle with delight,
and the crystal ornaments glitter with every small movement. A drop is about to fall, but she deftly catches it with her tongue and laughs softly.
The backdrop is a bustling city street; a breeze stirs her long hair, and the soft tones of the ice cream complement her dark, futuristic outfit.
```
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/Zner9upt_hf4jLdN5P5O9.mp4"></video>
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/vRo281iLMX1tXHOTKlU2j.mp4"></video>
---
#### **Example 3: Celestial Chambers**
**Prompt**:
```bash
Anime style: a breathtaking woman with long silver-white hair reclines lazily on a brocade couch atop a floating immortal mountain, her red eyes gently closed, the strap of her black slip dress slipped down to her elbow,
revealing the fair curve of her shoulder and neck. The blue crystal ornaments on her hem and her purple thigh-high stockings shimmer in the soft glow of luminous pearls; her long lashes cast shadows on her cheeks,
and the half-melted vanilla ice-cream cone in her hand leans against a glazed cup. The backdrop is a translucent spirit-stone screen and drifting star-gauze curtains; her hair spills across a gold-brocade pillow,
and motes of spirit light float through the fantasy-style bedroom. Anime style; 16:9 aspect ratio.
```
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/634dffc49b777beec3bc6448/AtuPo0asnKcaQG3-JTPSY.mp4"></video>
---
### **Technical Parameters**
| Setting | Recommendation | Notes |
|------------------|--------------------|----------------------------------------|
| **CFG Scale** | 1 (Fixed) | Wan2.2 architecture requirement |
| **Sampler** | uni_pc | Optimal for fabric/hair dynamics |
| **Steps** | 8-12 | Balances detail & speed |
| **Resolution** | 832x480 | Maximizes VRAM efficiency |
| **Motion Factor**| 4-6 | Higher values intensify environmental FX |
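These settings target ComfyUI-style workflows; the sketch below approximates them in `diffusers`. Both the Wan2.2 base repo id and LoRA compatibility with `WanPipeline` are assumptions, not something the card verifies:
```python
# Rough sketch only: the base repo id and Wan2.2 LoRA support in diffusers are assumptions.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",  # assumed repo id
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights(
    "svjack/Skirk_wan_2_2_14_B_text2video_low_noise_lora_early",
    weight_name="Skirk_w14_low_lora-step00002500.safetensors",
)
frames = pipe(
    prompt="anime style, a silver-haired woman in a blizzard ...",  # see examples above
    width=832, height=480,           # resolution from the table
    guidance_scale=1.0,              # CFG fixed at 1 per the table
    num_inference_steps=10,          # 8-12 recommended above
).frames[0]
export_to_video(frames, "skirk.mp4", fps=16)
```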
---
### **Performance Profile**
- **VRAM Consumption**: ~15GB at 832x480
- **Render Speed**: 38-60 sec/frame (RTX 4090)
- **Troubleshooting**:
- Snow/ice artifacts: Add `frost noise, particle distortion` to negative prompts
- Lighting issues: Use `softglow` node at 0.4 strength
- Consistency loss: Increase character token weight by 1.3x
### **License**
CC-BY-NC-SA 4.0 (Non-commercial, share-alike)
**Community Hub**: https://huggingface.co/svjack/Skirk_wan_2_2_14_B_text2video_lora_early/discussions
---
|