Dataset columns (one row per model):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-14 00:42:02 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 502 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-14 00:41:07 |
| card | string | length 11 to 1.01M |
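The rows that follow can be reproduced programmatically. Below is a minimal sketch, assuming a recent `huggingface_hub` client; the attribute names (`created_at`, `library_name`, etc.) follow current releases and may differ in older versions, and the snippet only approximates how the dump above was generated:

```python
from huggingface_hub import HfApi, ModelCard

api = HfApi()

# Pull the same columns straight from the Hub (full=True fills in downloads, tags, etc.)
for info in api.list_models(sort="downloads", direction=-1, limit=5, full=True):
    row = {
        "modelId": info.id,
        "author": info.author,
        "last_modified": info.last_modified,
        "downloads": info.downloads,
        "likes": info.likes,
        "library_name": info.library_name,
        "tags": info.tags,
        "pipeline_tag": info.pipeline_tag,
        "createdAt": info.created_at,
    }
    try:
        # The "card" column is the raw README of the repository (front matter included)
        row["card"] = ModelCard.load(info.id).content
    except Exception:
        row["card"] = None
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```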
WHDtyrael/Qwen3-0.6B-Gensyn-Swarm-bellowing_giant_hare
WHDtyrael
2025-08-14T00:37:23Z
7
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am bellowing_giant_hare", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-01T14:48:31Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am bellowing_giant_hare --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Gliscor/email-summarizer-bart-large-cnn
Gliscor
2025-08-13T20:51:54Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:facebook/bart-large-cnn", "lora", "transformers", "arxiv:1910.09700", "base_model:facebook/bart-large-cnn", "region:us" ]
null
2025-08-13T20:35:30Z
--- base_model: facebook/bart-large-cnn library_name: peft tags: - base_model:adapter:facebook/bart-large-cnn - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.0
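Since the row above describes a PEFT LoRA adapter on top of `facebook/bart-large-cnn` while the card's own "How to Get Started" section is still a placeholder, here is a minimal loading sketch. It assumes the adapter id from the row points at a seq2seq LoRA intended for summarization; the example email is invented for illustration:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "facebook/bart-large-cnn"
adapter_id = "Gliscor/email-summarizer-bart-large-cnn"  # repo id from the row above (assumed seq2seq LoRA)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id, torch_dtype=torch.float32)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights to the frozen base model

email = (
    "Hi team, quick reminder that the quarterly report is due Friday. "
    "Please send me your sections by Wednesday so I can merge and review them."
)
inputs = tokenizer(email, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```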
TheStageAI/Elastic-Llama-3.2-1B-Instruct
TheStageAI
2025-08-13T19:56:23Z
33
3
null
[ "text2text-generation", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-1B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-04-14T03:43:38Z
--- license: apache-2.0 base_model: - meta-llama/Llama-3.2-1B-Instruct base_model_relation: quantized pipeline_tag: text2text-generation language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- # Elastic model: Llama-3.2-1B-Instruct. The fastest and most flexible models for self-hosting. Elastic models are produced by TheStage AI ANNA (Automated Neural Networks Accelerator). ANNA lets you control model size, latency, and quality with a simple slider movement. For each model, ANNA produces a series of optimized variants: * __XL__: Mathematically equivalent neural network, optimized with our DNN compiler. * __L__: Near-lossless model, with less than 1% degradation on the corresponding benchmarks. * __M__: Faster model, with accuracy degradation of less than 1.5%. * __S__: The fastest model, with accuracy degradation of less than 2%. __Goals of elastic models:__ * Provide flexibility in cost-vs-quality selection for inference * Provide clear quality and latency benchmarks * Provide the interface of HF libraries (transformers and diffusers) with a single line of code * Provide models supported on a wide range of hardware, pre-compiled and requiring no JIT. * Provide the best models and service for self-hosting. > Note that the actual quality degradation can vary from model to model; an S model, for instance, may show only 0.5% degradation. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6799fc8e150f5a4014b030ca/bxMlZ1GhFWiJFIqg4RE5W.png) ----- ## Inference To run inference with our models, you just need to replace the `transformers` import with `elastic_models.transformers`: ```python import torch from transformers import AutoTokenizer from elastic_models.transformers import AutoModelForCausalLM # Currently we require your HF token, # as we use the original weights for some layers # and the model configuration model_name = "meta-llama/Llama-3.2-1B-Instruct" hf_token = '' device = torch.device("cuda") # Create the tokenizer and model tokenizer = AutoTokenizer.from_pretrained( model_name, token=hf_token ) model = AutoModelForCausalLM.from_pretrained( model_name, token=hf_token, torch_dtype=torch.bfloat16, attn_implementation="sdpa", mode='S' ).to(device) model.generation_config.pad_token_id = tokenizer.eos_token_id # Inference is as simple as with the transformers library prompt = "Describe basics of DNNs quantization." messages = [ { "role": "system", "content": "You are a search bot, answer on user text queries." }, { "role": "user", "content": prompt } ] chat_prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) inputs = tokenizer(chat_prompt, return_tensors="pt") inputs.to(device) with torch.inference_mode(): generate_ids = model.generate(**inputs, max_length=500) input_len = inputs['input_ids'].shape[1] generate_ids = generate_ids[:, input_len:] output = tokenizer.batch_decode( generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False )[0] # Validate answer print(f"# Q:\n{prompt}\n") print(f"# A:\n{output}\n") ``` __System requirements:__ * GPUs: H100, L40s * CPU: AMD, Intel * Python: 3.10-3.12 To work with our models, just run these lines in your terminal: ```shell pip install thestage pip install 'thestage-elastic-models[nvidia]' pip install flash_attn==2.7.3 --no-build-isolation pip uninstall apex ``` Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token from your profile page. 
Set up the API token as follows: ```shell thestage config set --api-token <YOUR_API_TOKEN> ``` Congrats, now you can use accelerated models! ---- ## Benchmarks Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers! ### Quality benchmarks <!-- For quality evaluation we have used: #TODO link to github --> | Metric/Model | S | M | L | XL | Original | W8A8, int8 | |---------------|---|---|---|----|----------|------------| | MMLU | 45.5 | 45.9 | 45.9 | 46.2 | 46.2 | 24 | | PIQA | 73.1 | 73.7 | 74.2 | 74.3 | 74.3 | 55.8 | | Arc Challenge | 34.5 | 35.9 | 36.0 | 35.8 | 35.8 | 20.3 | | Winogrande | 60.4 | 59.7 | 60.8 | 59.5 | 59.5 | 50.3 | * **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows the model's ability to handle diverse academic topics. * **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows the model's understanding of real-world physics concepts. * **Arc Challenge**: Evaluates grade-school-level multiple-choice questions requiring reasoning. Shows the model's ability to solve complex reasoning tasks. * **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows the model's capability to understand context and resolve ambiguity. ### Latency benchmarks __100 input / 300 output tokens; tok/s:__ | GPU/Model | S | M | L | XL | Original | W8A8, int8 | |-----------|-----|---|---|----|----------|------------| | H100 | 436 | 436 | 409 | 396 | 110 | 439 | | L40s | 290 | 251 | 222 | 210 | 103 | 300 | ## Links * __Platform__: [app.thestage.ai](https://app.thestage.ai) <!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) --> * __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI) * __Contact email__: [email protected]
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_2560_all_37_epoch_1_layer_22
winnieyangwannan
2025-08-13T19:47:15Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T20:28:55Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AAAAnsah/Qwen2.5-0.5B-Instruct_RFA_theta_0.5
AAAAnsah
2025-08-13T19:20:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-13T19:19:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
elmenbillion/blockassist-bc-beaked_sharp_otter_1755110799
elmenbillion
2025-08-13T19:14:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked sharp otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T19:14:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked sharp otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sidemix/lexymodel5
sidemix
2025-08-13T18:26:03Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-13T18:25:55Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/lexymodel_002700_00_20250813182006.png text: "lexyModel, (ice blue eyes:1.35), long dark-brown hair, full lips, (busty:1.15),\ \ (deep cleavage:1.1), slim waist, long legs, (hourglass figure:1.08), photoreal,\ \ sitting at an outdoor caf\xE9 in Paris, wearing a white halter top, golden\ \ hour sunlight, 50mm lens, ((cartoon, anime, 3d render, doll, blurry, lowres,\ \ jpeg artifacts, watermark, overexposed, underexposed, bad anatomy, deformed,\ \ mutated hands, extra fingers, missing fingers, long neck, distorted eyes,\ \ crossed eyes, lazy eye, bad proportions, unnatural skin, oversaturated,\ \ flat lighting, skin blemishes, scars, asymmetrical face, warped body: -1.2))" - output: url: sample/lexymodel_002700_01_20250813182012.png text: 'lexyModel, (ice blue eyes:1.35), long dark-brown hair, full lips, (busty:1.15), slim waist, photoreal, walking down a London street, pink knit dress, white faux fur coat, overcast daylight, fashion photo, ((cartoon, anime, 3d render, doll, blurry, lowres, jpeg artifacts, watermark, overexposed, underexposed, bad anatomy, deformed, mutated hands, extra fingers, missing fingers, long neck, distorted eyes, crossed eyes, lazy eye, bad proportions, unnatural skin, oversaturated, flat lighting, skin blemishes, scars, asymmetrical face, warped body: -1.2))' - output: url: sample/lexymodel_002700_02_20250813182017.png text: 'lexyModel, (ice blue eyes:1.35), long dark-brown hair, full lips, (busty:1.15), slim waist, photoreal, lying on a luxury yacht deck, black bikini, midday sun, turquoise water in background, ((cartoon, anime, 3d render, doll, blurry, lowres, jpeg artifacts, watermark, overexposed, underexposed, bad anatomy, deformed, mutated hands, extra fingers, missing fingers, long neck, distorted eyes, crossed eyes, lazy eye, bad proportions, unnatural skin, oversaturated, flat lighting, skin blemishes, scars, asymmetrical face, warped body: -1.2))' - output: url: sample/lexymodel_002700_03_20250813182023.png text: 'lexyModel, (ice blue eyes:1.35), long dark-brown hair, full lips, (busty:1.15), slim waist, photoreal, standing on a tropical beach in a gold sequin bikini, clear blue sky, shallow depth of field, ((cartoon, anime, 3d render, doll, blurry, lowres, jpeg artifacts, watermark, overexposed, underexposed, bad anatomy, deformed, mutated hands, extra fingers, missing fingers, long neck, distorted eyes, crossed eyes, lazy eye, bad proportions, unnatural skin, oversaturated, flat lighting, skin blemishes, scars, asymmetrical face, warped body: -1.2))' - output: url: sample/lexymodel_002700_04_20250813182029.png text: 'lexyModel, (ice blue eyes:1.35), long dark-brown hair, full lips, (busty:1.15), slim waist, photoreal, studio photoshoot, black halter bodysuit, zebra print skirt, beauty lighting, 85mm lens, ((cartoon, anime, 3d render, doll, blurry, lowres, jpeg artifacts, watermark, overexposed, underexposed, bad anatomy, deformed, mutated hands, extra fingers, missing fingers, long neck, distorted eyes, crossed eyes, lazy eye, bad proportions, unnatural skin, oversaturated, flat lighting, skin blemishes, scars, asymmetrical face, warped body: -1.2))' base_model: black-forest-labs/FLUX.1-dev instance_prompt: lexyModel, license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # lexyModel A Flux LoRA trained on a local computer with 
[Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `lexyModel,` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
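The card targets GUI front-ends (ComfyUI, AUTOMATIC1111, etc.), so for completeness here is a minimal `diffusers` sketch. It assumes you have access to the gated FLUX.1-dev base weights and that the LoRA safetensors in `sidemix/lexymodel5` is discoverable by `load_lora_weights`; the exact file name is not stated in the card:

```python
import torch
from diffusers import FluxPipeline

# Base model from the card's metadata; the LoRA repo id comes from the row above
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("sidemix/lexymodel5")  # assumes the repo's safetensors file is auto-detected
pipe.to("cuda")

# The trigger word "lexyModel," must appear in the prompt, as stated in the card
prompt = "lexyModel, ice blue eyes, long dark-brown hair, photoreal, outdoor cafe in Paris, golden hour, 50mm lens"
image = pipe(prompt, width=1024, height=1024, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("lexymodel_sample.png")
```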
gabriellarson/pumlGenV2-GGUF
gabriellarson
2025-08-13T18:13:49Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "text2diagram", "plantuml", "code-generation", "base_model:chrisrutherford/pumlGenV2", "base_model:quantized:chrisrutherford/pumlGenV2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-13T17:47:23Z
--- library_name: transformers license: apache-2.0 base_model: - chrisrutherford/pumlGenV2 tags: - llama-factory - full - generated_from_trainer - text2diagram - plantuml - code-generation model-index: - name: pumlGenV2-1 results: [] --- # pumlGenV2-1 This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) on a pumlGen dataset. It specializes in generating PlantUML diagrams from natural language questions. ## Model description pumlGenV2-1 is a specialized language model that converts complex questions into structured PlantUML diagrams. The model takes philosophical, historical, legal, or analytical questions as input and generates comprehensive PlantUML code that visualizes the relationships, hierarchies, and connections between concepts mentioned in the question. Key features: - Generates syntactically correct PlantUML diagrams - Creates structured visualizations with packages, entities, and relationships - Adds contextual notes and annotations - Handles complex domain-specific topics across various fields ## Intended uses & limitations ### Intended uses - **Educational purposes**: Creating visual diagrams to explain complex concepts - **Research visualization**: Mapping relationships between ideas, theories, or historical events - **Documentation**: Generating diagrams for technical or conceptual documentation - **Analysis tools**: Visualizing interconnections in philosophical, legal, or social topics ### Limitations - The model is specifically trained for PlantUML output format - Best performance on analytical, philosophical, historical, and conceptual questions - May require post-processing for specific PlantUML styling preferences - Generated diagrams should be reviewed for accuracy and completeness ## Training and evaluation data The model was trained on the pumlGen dataset, which consists of question-answer pairs where: - **Input**: Complex analytical questions about various topics (philosophy, history, law, social sciences) - **Output**: Corresponding PlantUML diagram code that visualizes the concepts and relationships ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 3.0 ### Training results The model demonstrates strong capabilities in: - Generating valid PlantUML syntax - Creating meaningful entity relationships - Adding appropriate annotations and notes - Structuring complex information hierarchically ### Framework versions - Transformers 4.52.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1 ## Usage Example ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Load model and tokenizer model = AutoModelForCausalLM.from_pretrained("your-username/pumlGenV1-1") tokenizer = AutoTokenizer.from_pretrained("your-username/pumlGenV1-1") # Prepare the input in conversation format question = "What role does the annual flooding of the Nile play in the overall agricultural success and survival of the kingdoms along its banks?" 
messages = [ {"from": "human", "value": question}, ] # Format the input (adjust based on your specific tokenizer's chat template) input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer(input_text, return_tensors="pt") # Generate PlantUML diagram outputs = model.generate( **inputs, max_length=2048, temperature=0.7, do_sample=True, pad_token_id=tokenizer.eos_token_id ) # Decode and extract the PlantUML code response = tokenizer.decode(outputs[0], skip_special_tokens=True) # Extract the PlantUML code from the response (between @startuml and @enduml) plantuml_code = response.split("@startuml")[-1].split("@enduml")[0] plantuml_code = "@startuml" + plantuml_code + "@enduml" print(plantuml_code) ``` ## Eval Q1 Can artificial intelligence ever achieve true understanding, or is it limited to sophisticated pattern recognition? Break this down by examining the nature of consciousness, the semantics of 'understanding,' the boundaries of computational logic, and the role of embodiment in cognition—then map these components into a coherent framework ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65be16980a0c57943fbe8b00/npfHv8F3MzHHWlvmwRge0.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65be16980a0c57943fbe8b00/ZjgPMHuDE-DrpqLu6GxNG.png)
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_5_provers
neural-interactive-proofs
2025-08-13T17:50:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-13T17:49:44Z
--- base_model: Qwen/Qwen2.5-32B-Instruct library_name: transformers model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_5_provers tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_5_provers This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_5_provers", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-13_18-36-17_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_0_iter_5_provers_group) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.18.2 - Transformers: 4.53.2 - Pytorch: 2.7.0 - Datasets: 3.0.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
pretraining/qwen3-micro-1k
pretraining
2025-08-13T17:23:33Z
81
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-09T12:22:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
4everStudent/sft-chess-qwen3-0.6B-1500k-08-12-25-merged
4everStudent
2025-08-13T17:03:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T17:03:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
flymy-ai/qwen-image-anime-irl-lora
flymy-ai
2025-08-13T16:38:59Z
0
0
null
[ "lora", "qwen", "qwen-image", "anime", "anime-to-real", "style-transfer", "text-to-image", "en", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-13T15:29:46Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen-Image pipeline_tag: text-to-image tags: - lora - qwen - qwen-image - anime - anime-to-real - style-transfer --- ## Anime in Real Life LoRA for Qwen-Image LoRA model that transforms anime-style prompts into photorealistic images for [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image) trigger word: **"Real life Anime"** # 📝 Updates # 🧪 Usage ## 🔧 Initialization ```python from diffusers import DiffusionPipeline import torch model_name = "Qwen/Qwen-Image" # Load the pipeline if torch.cuda.is_available(): torch_dtype = torch.bfloat16 device = "cuda" else: torch_dtype = torch.float32 device = "cpu" pipe = DiffusionPipeline.from_pretrained(model_name, torch_dtype=torch_dtype) pipe = pipe.to(device) ``` ### 🔌 Load LoRA Weights ```python # Load LoRA weights pipe.load_lora_weights('flymy_anime_irl.safetensors', adapter_name="lora") ``` ### 🎨 Generate Anime in Real Life Images ```python prompt = '''Real life Anime in a cozy kitchen, eating noodles with chopsticks, while a curious cat looks out the window.''' negative_prompt = " " image = pipe( prompt=prompt, negative_prompt=negative_prompt, width=1024, height=1024, num_inference_steps=50, true_cfg_scale=4, generator=torch.Generator(device="cuda").manual_seed(4633346 + 2) ).images[0] # Display the image (in Jupyter or save to file) image.show() # or image.save("output.png") ``` ### 🖼️ Sample Outputs Examples of anime characters transformed into photorealistic images: ## 🎛️ Using with ComfyUI We provide a ready-to-use ComfyUI workflow that works with our trained LoRA models. Follow these steps to set up and use the workflow: ### Setup Instructions 1. **Download the latest ComfyUI**: - Visit the [ComfyUI GitHub repository](https://github.com/comfyanonymous/ComfyUI) - Clone or download the latest version 2. **Install ComfyUI**: - Follow the installation instructions from the [ComfyUI repository](https://github.com/comfyanonymous/ComfyUI?tab=readme-ov-file#installing) - Make sure all dependencies are properly installed 3. **Download Qwen-Image model weights**: - Go to [Qwen-Image ComfyUI weights](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main) - Download all the model files 4. **Place Qwen-Image weights in ComfyUI**: - Copy the downloaded Qwen-Image model files to the appropriate folders in `ComfyUI/models/` - Follow the folder structure as specified in the model repository 5. **Download our pre-trained LoRA weights**: - Visit [flymy-ai/qwen-image-lora](https://huggingface.co/flymy-ai/qwen-image-lora) - Download the LoRA `.safetensors` files 6. **Place LoRA weights in ComfyUI**: - Copy the LoRA file `flymy-ai/qwen-image-lora/pytorch_lora_weights.safetensors` to `ComfyUI/models/loras/` 7. **Load the workflow**: - Open ComfyUI in your browser - Load the workflow file `qwen_image_lora_example.json` located in this repository - The workflow is pre-configured to work with our LoRA models ### Workflow Features - ✅ Pre-configured for Qwen-Image + LoRA inference - ✅ Optimized settings for best quality output - ✅ Easy prompt and parameter adjustment - ✅ Compatible with all our trained LoRA models The ComfyUI workflow provides a user-friendly interface for generating images with our trained LoRA models without needing to write Python code. 
### 🖼️ Workflow Screenshot ![ComfyUI Workflow](./assets/comfyui_workflow.png) ## 📊 Anime to Real Life Examples Below are examples showing how our LoRA model transforms anime-style descriptions into photorealistic images: ### Example 1 ![Anime to Real Life Example 1](./assets/irl_lora1.jpg) ### Example 2 ![Anime to Real Life Example 2](./assets/irl_lora2.jpg) ### Example 3 ![Anime to Real Life Example 3](./assets/irl_lora3.jpg) ## 🤝 Support If you have questions or suggestions, join our community: - 🌐 [FlyMy.AI](https://flymy.ai) - 💬 [Discord Community](https://discord.com/invite/t6hPBpSebw) - 🐦 [Follow us on X](https://x.com/flymyai) - 💼 [Connect on LinkedIn](https://linkedin.com/company/flymyai) - 📧 [Support](mailto:[email protected]) **⭐ Don't forget to star the repository if you like it!**
prithivMLmods/Lumian2-VLR-7B-Thinking
prithivMLmods
2025-08-13T16:18:15Z
15
3
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "text-generation-inference", "trl", "vlr", "ocr", "vision-language", "reasoning", "grounded-visual-reasoning", "sft", "grpo", "thinking", "code", "thinking=1", "image-text-to-text", "conversational", "en", "arxiv:2309.00071", "arxiv:2409.12191", "arxiv:2308.12966", "arxiv:2412.02210", "arxiv:2505.20272", "base_model:prithivMLmods/Lumian-VLR-7B-Thinking", "base_model:finetune:prithivMLmods/Lumian-VLR-7B-Thinking", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-07T15:25:28Z
--- license: apache-2.0 language: - en base_model: - prithivMLmods/Lumian-VLR-7B-Thinking pipeline_tag: image-text-to-text library_name: transformers tags: - text-generation-inference - trl - vlr - ocr - vision-language - reasoning - grounded-visual-reasoning - sft - grpo - thinking - code - thinking=1 --- ![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/LRdNDOhD4OxaD3sXyENtT.png) # **Lumian2-VLR-7B-Thinking** > The **Lumian2-VLR-7B-Thinking** model is a high-fidelity vision-language reasoning (experimental model) system designed for fine-grained multimodal understanding. Built on **Qwen2.5-VL-7B-Instruct**, this model enhances image captioning, sampled video reasoning, and document comprehension through explicit grounded reasoning. It produces structured reasoning traces aligned with visual coordinates, enabling explainable multimodal reasoning. Trained via supervised fine-tuning (SFT) on visually-grounded reasoning traces and further refined using GRPO reinforcement learning, Lumian2 delivers superior step-by-step chain-of-thought reasoning with strong visual grounding. ## Key Enhancements * **Visually-Grounded Reasoning and Thinking Traces**: Generates explicit reasoning traces tied to image regions and document structures for transparent and explainable outputs. * **Advanced Image Captioning**: Produces detailed, grounded captions with reasoning steps for improved scene understanding. * **Sampled Video Reasoning**: Handles long-duration videos with temporal reasoning for question answering and summarization. * **Context-Aware Document Analysis**: Excels at structured and unstructured content extraction with visual grounding. * **Fine-Grained Visual Grounding**: Accurately links reasoning steps to tables, charts, and graphical elements. * **Reinforcement-Learned Thinking**: GRPO training incentivizes accurate, grounded reasoning with minimal hallucinations. > [!TIP] ✦ Colab Demo : https://huggingface.co/prithivMLmods/Lumian2-VLR-7B-Thinking/blob/main/Lumian2-VLR-7B-Thinking-Demo-Notebook/Lumian2_VLR_7B_Thinking.ipynb ## Thinking Traces The model outputs reasoning and answers in a structured format: ``` <think> Step 1: Identify the main elements in the image and their positions. Step 2: Analyze the relationships between objects and surrounding context. Step 3: Derive the final answer based on spatial reasoning and visual cues. </think> <answer> The image depicts a person holding an open book with highlighted sections on the left page. 
</answer> ``` ## Quick Start with Transformers🤗 (single-shot) ```python from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info model = Qwen2_5_VLForConditionalGeneration.from_pretrained( "prithivMLmods/Lumian2-VLR-7B-Thinking", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("prithivMLmods/Lumian2-VLR-7B-Thinking") messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image with thinking traces."}, ], } ] text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") generated_ids = model.generate(**inputs, max_new_tokens=256) generated_ids_trimmed = [ out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` ## Intended Use * Visual reasoning with grounded, step-by-step thinking traces. * Explainable image captioning and sampled video reasoning. * Multimodal document retrieval, extraction, and analytical interpretation. * Transparent chain-of-thought reasoning for educational, research, and enterprise use. * Multilingual reasoning and structured content extraction. * Robotic and mobile vision-based automation with grounded decision-making. ## Limitations * High memory requirements for long videos and large document batches. * Degraded accuracy on extremely low-resolution or obscured visuals. * Suboptimal for real-time inference on edge devices. * Visual token configuration strongly influences reasoning fidelity. * Occasional reasoning drift or partial grounding errors. --- ## References * **YaRN: Efficient Context Window Extension of Large Language Models** [https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071) * **Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution** [https://arxiv.org/pdf/2409.12191](https://arxiv.org/pdf/2409.12191) * **Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond** [https://arxiv.org/pdf/2308.12966](https://arxiv.org/pdf/2308.12966) * **A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy** [https://arxiv.org/pdf/2412.02210](https://arxiv.org/pdf/2412.02210) * **Ground-R1: Incentivizing Grounded Visual Reasoning via Reinforcement Learning** [https://arxiv.org/pdf/2505.20272](https://arxiv.org/pdf/2505.20272)
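The `<think>`/`<answer>` format above lends itself to simple post-processing. Below is a minimal sketch (not from the original card) that splits a generated response into its reasoning trace and final answer; the helper name `split_thinking_trace` is illustrative and assumes the model emits both tags as documented.

```python
import re

def split_thinking_trace(output_text: str):
    """Split a response of the form '<think>...</think><answer>...</answer>'
    into (reasoning, answer); either part is None if its tag is missing."""
    think = re.search(r"<think>(.*?)</think>", output_text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", output_text, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        answer.group(1).strip() if answer else None,
    )

# Example with the structured format shown in the card
sample = "<think>Step 1: ...</think>\n<answer>The image depicts a person holding an open book.</answer>"
reasoning, final_answer = split_thinking_trace(sample)
print(final_answer)
```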
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755100762
Ferdi3425
2025-08-13T16:00:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T16:00:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
genies-llm/text2sql-grpo-d5-e1
genies-llm
2025-08-13T15:55:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:Genies/text2sql-grpo-d5", "arxiv:2402.03300", "base_model:Genies/text2sql_sft_v2", "base_model:finetune:Genies/text2sql_sft_v2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T23:45:10Z
--- base_model: Genies/text2sql_sft_v2 datasets: Genies/text2sql-grpo-d5 library_name: transformers model_name: text2sql-grpo-d5-e1 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for text2sql-grpo-d5-e1 This model is a fine-tuned version of [Genies/text2sql_sft_v2](https://huggingface.co/Genies/text2sql_sft_v2) on the [Genies/text2sql-grpo-d5](https://huggingface.co/datasets/Genies/text2sql-grpo-d5) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="genies-llm/text2sql-grpo-d5-e1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/genies-rnd/text2sql-rl/runs/xyrjo69y) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.6.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Kaelan/Qwen2.5_3B_RL
Kaelan
2025-08-13T15:47:35Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T15:47:26Z
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Kaelan - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
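A minimal usage sketch (not part of the original card), assuming the repository contains merged weights loadable directly with 🤗 Transformers; Unsloth exports sometimes ship LoRA adapters only, in which case PEFT-based loading would be needed instead.

```python
from transformers import pipeline

# Hedged sketch: assumes merged weights usable directly with transformers.
generator = pipeline(
    "text-generation",
    model="Kaelan/Qwen2.5_3B_RL",
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly explain reinforcement learning."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```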
mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF
mradermacher
2025-08-13T15:46:40Z
1950
2
transformers
[ "transformers", "gguf", "en", "dataset:OpenSpeechHub/mls_eng_10k_snac_qwen", "base_model:Vyvo/VyvoTTS-v0-Qwen3-0.6B", "base_model:quantized:Vyvo/VyvoTTS-v0-Qwen3-0.6B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-09T12:39:20Z
--- base_model: Vyvo/VyvoTTS-v0-Qwen3-0.6B datasets: - OpenSpeechHub/mls_eng_10k_snac_qwen language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Vyvo/VyvoTTS-v0-Qwen3-0.6B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#VyvoTTS-v0-Qwen3-0.6B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q2_K.gguf) | Q2_K | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q3_K_S.gguf) | Q3_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q3_K_L.gguf) | Q3_K_L | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.IQ4_XS.gguf) | IQ4_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q4_K_S.gguf) | Q4_K_S | 0.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q5_K_S.gguf) | Q5_K_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q5_K_M.gguf) | Q5_K_M | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q6_K.gguf) | Q6_K | 0.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.Q8_0.gguf) | Q8_0 | 1.0 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF/resolve/main/VyvoTTS-v0-Qwen3-0.6B.f16.gguf) | f16 | 1.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if 
you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
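As a minimal, hedged sketch (not part of the original card), one of the quant files listed above can be fetched programmatically with `huggingface_hub`; the Q4_K_M filename is taken from the table. Note that the model emits SNAC audio tokens, so turning generations into speech still requires the upstream VyvoTTS decoding pipeline; this only covers getting the GGUF onto disk.

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed in the table above into the local HF cache.
gguf_path = hf_hub_download(
    repo_id="mradermacher/VyvoTTS-v0-Qwen3-0.6B-GGUF",
    filename="VyvoTTS-v0-Qwen3-0.6B.Q4_K_M.gguf",
)
print(gguf_path)

# Loading with llama-cpp-python is one option (assumed installed); decoding the
# generated SNAC tokens to audio is out of scope here.
# from llama_cpp import Llama
# llm = Llama(model_path=gguf_path, n_ctx=2048)
```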
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755098266
rvipitkirubbe
2025-08-13T15:45:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mottled foraging ape", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T15:45:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mottled foraging ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kholiavko/ministral-8B-13-08-25
kholiavko
2025-08-13T15:42:27Z
0
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:mistralai/Ministral-8B-Instruct-2410", "base_model:quantized:mistralai/Ministral-8B-Instruct-2410", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-13T11:46:13Z
--- base_model: mistralai/Ministral-8B-Instruct-2410 tags: - text-generation-inference - transformers - unsloth - mistral - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** kholiavko - **License:** apache-2.0 - **Finetuned from model :** mistralai/Ministral-8B-Instruct-2410 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
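A hedged loading sketch (not from the original card): recent 🤗 Transformers releases can dequantize supported GGUF checkpoints via the `gguf_file` argument. The filename below is a placeholder; substitute the actual `.gguf` file shipped in this repo, and note that dequantizing an 8B model needs substantial RAM.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "kholiavko/ministral-8B-13-08-25"
gguf_file = "model.gguf"  # placeholder: use the actual .gguf filename from the repo

# Transformers dequantizes the GGUF weights on load (memory-intensive for 8B).
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

inputs = tokenizer("Hello! How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```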
albertuspekerti/whispertiny_fruit25syl_v7_2
albertuspekerti
2025-08-13T15:37:25Z
7
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "base_model:albertuspekerti/whispertiny_fruit25syl_v3_2", "base_model:finetune:albertuspekerti/whispertiny_fruit25syl_v3_2", "license:apache-2.0", "region:us" ]
null
2025-08-12T02:47:49Z
--- license: apache-2.0 base_model: albertuspekerti/whispertiny_fruit25syl_v3_2 tags: - generated_from_trainer metrics: - wer model-index: - name: whispertiny_fruit25syl_v7_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whispertiny_fruit25syl_v7_2 This model is a fine-tuned version of [albertuspekerti/whispertiny_fruit25syl_v3_2](https://huggingface.co/albertuspekerti/whispertiny_fruit25syl_v3_2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0293 - Wer: 2.4911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 200000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:------:|:---------------:|:-------:| | 0.0017 | 0.01 | 2000 | 0.1528 | 13.6069 | | 0.0046 | 0.02 | 4000 | 0.5572 | 23.3619 | | 0.0025 | 0.03 | 6000 | 0.2777 | 20.2010 | | 0.0032 | 0.04 | 8000 | 0.2153 | 20.7452 | | 0.0032 | 0.05 | 10000 | 0.2302 | 18.2332 | | 0.0224 | 1.0027 | 12000 | 0.4752 | 37.4293 | | 0.0007 | 1.0127 | 14000 | 0.2770 | 17.7517 | | 0.0016 | 1.0227 | 16000 | 0.1509 | 15.3862 | | 0.0037 | 1.0327 | 18000 | 0.5857 | 24.4714 | | 0.001 | 1.0427 | 20000 | 0.3885 | 32.2797 | | 0.0019 | 1.0527 | 22000 | 0.2408 | 19.3008 | | 0.0081 | 2.0054 | 24000 | 0.1916 | 16.2236 | | 0.0008 | 2.0154 | 26000 | 0.1684 | 14.7163 | | 0.0001 | 2.0254 | 28000 | 0.1511 | 13.8581 | | 0.0012 | 2.0354 | 30000 | 0.2209 | 14.5907 | | 0.0023 | 2.0454 | 32000 | 0.4412 | 23.9900 | | 0.0022 | 2.0554 | 34000 | 0.2314 | 17.9192 | | 0.0005 | 3.0081 | 36000 | 0.2667 | 20.9755 | | 0.0016 | 3.0181 | 38000 | 0.2627 | 14.7163 | | 0.002 | 3.0281 | 40000 | 0.1328 | 10.9902 | | 0.0004 | 3.0381 | 42000 | 0.1712 | 13.2510 | | 0.0004 | 3.0481 | 44000 | 0.4716 | 24.8901 | | 0.0044 | 4.0008 | 46000 | 0.1900 | 12.5183 | | 0.001 | 4.0108 | 48000 | 0.0963 | 10.4668 | | 0.0005 | 4.0208 | 50000 | 0.3588 | 18.7147 | | 0.0012 | 4.0308 | 52000 | 0.3890 | 25.7484 | | 0.0009 | 4.0408 | 54000 | 0.2656 | 17.9611 | | 0.0008 | 4.0508 | 56000 | 0.1365 | 13.1882 | | 0.0088 | 5.0035 | 58000 | 0.0693 | 7.5780 | | 0.0007 | 5.0135 | 60000 | 0.1331 | 8.7921 | | 0.0004 | 5.0235 | 62000 | 0.1024 | 10.1528 | | 0.0005 | 5.0335 | 64000 | 0.3344 | 19.6567 | | 0.0006 | 5.0435 | 66000 | 0.2273 | 22.6502 | | 0.0012 | 5.0535 | 68000 | 0.2045 | 13.6906 | | 0.0011 | 6.0062 | 70000 | 0.1301 | 11.7438 | | 0.0004 | 6.0162 | 72000 | 0.1255 | 11.0530 | | 0.0002 | 6.0262 | 74000 | 0.1529 | 9.7969 | | 0.0006 | 6.0362 | 76000 | 0.0905 | 9.2945 | | 0.0004 | 6.0462 | 78000 | 0.4328 | 28.4279 | | 0.0004 | 6.0562 | 80000 | 0.1318 | 12.4346 | | 0.0007 | 7.0089 | 82000 | 0.0870 | 8.2269 | | 0.001 | 7.0189 | 84000 | 0.1575 | 12.7277 | | 0.0004 | 7.0289 | 86000 | 0.0802 | 7.5570 | | 0.0003 | 7.0389 | 88000 | 0.0924 | 8.8549 | | 0.0005 | 7.0489 | 90000 | 0.1631 | 15.3025 | | 0.0046 | 8.0016 | 92000 | 0.0694 | 6.3429 | | 0.0003 | 8.0116 | 94000 | 0.0854 | 7.2012 | | 0.0002 | 8.0216 | 96000 | 0.2454 
| 16.4329 | | 0.0009 | 8.0316 | 98000 | 0.3096 | 18.8193 | | 0.0008 | 8.0416 | 100000 | 0.1424 | 9.3992 | | 0.0003 | 8.0516 | 102000 | 0.1211 | 9.8179 | | 0.0005 | 9.0043 | 104000 | 0.0815 | 7.1384 | | 0.0002 | 9.0143 | 106000 | 0.1122 | 6.5732 | | 0.0003 | 9.0243 | 108000 | 0.0740 | 6.8872 | | 0.0003 | 9.0343 | 110000 | 0.3435 | 20.5359 | | 0.0006 | 9.0443 | 112000 | 0.1206 | 10.2784 | | 0.0006 | 9.0543 | 114000 | 0.1424 | 8.2269 | | 0.0002 | 10.0070 | 116000 | 0.1405 | 10.0481 | | 0.0002 | 10.0170 | 118000 | 0.0544 | 5.0869 | | 0.0003 | 10.0270 | 120000 | 0.1357 | 7.2430 | | 0.0019 | 10.0371 | 122000 | 0.0717 | 6.9918 | | 0.0004 | 10.0471 | 124000 | 0.1705 | 10.9274 | | 0.0 | 10.0571 | 126000 | 0.1093 | 7.2430 | | 0.0001 | 11.0098 | 128000 | 0.0741 | 5.6730 | | 0.0004 | 11.0198 | 130000 | 0.1728 | 12.4764 | | 0.0001 | 11.0298 | 132000 | 0.1004 | 6.5104 | | 0.0003 | 11.0398 | 134000 | 0.0998 | 6.5313 | | 0.0002 | 11.0498 | 136000 | 0.0776 | 7.4314 | | 0.0009 | 12.0025 | 138000 | 0.0493 | 4.3333 | | 0.0002 | 12.0125 | 140000 | 0.0642 | 4.6054 | | 0.0001 | 12.0225 | 142000 | 0.0839 | 6.6988 | | 0.001 | 12.0325 | 144000 | 0.1505 | 10.2156 | | 0.0002 | 12.0425 | 146000 | 0.0996 | 6.1336 | | 0.0002 | 12.0525 | 148000 | 0.0635 | 4.9194 | | 0.0002 | 13.0052 | 150000 | 0.0752 | 5.4218 | | 0.0001 | 13.0152 | 152000 | 0.0610 | 4.3542 | | 0.0001 | 13.0252 | 154000 | 0.0505 | 4.2705 | | 0.0001 | 13.0352 | 156000 | 0.3230 | 16.6632 | | 0.0009 | 13.0452 | 158000 | 0.0555 | 5.1497 | | 0.0001 | 13.0552 | 160000 | 0.0913 | 5.6730 | | 0.0 | 14.0079 | 162000 | 0.1059 | 6.5313 | | 0.0003 | 14.0179 | 164000 | 0.0450 | 4.3542 | | 0.0001 | 14.0279 | 166000 | 0.0555 | 3.5587 | | 0.0001 | 14.0379 | 168000 | 0.0813 | 6.9081 | | 0.0002 | 14.0479 | 170000 | 0.1188 | 9.0852 | | 0.0001 | 15.0006 | 172000 | 0.0599 | 4.1030 | | 0.0001 | 15.0106 | 174000 | 0.0537 | 4.1030 | | 0.0002 | 15.0206 | 176000 | 0.0909 | 6.1545 | | 0.0002 | 15.0306 | 178000 | 0.0406 | 3.1610 | | 0.0005 | 15.0406 | 180000 | 0.0352 | 2.9307 | | 0.0002 | 15.0506 | 182000 | 0.0512 | 4.5007 | | 0.0 | 16.0033 | 184000 | 0.0435 | 3.1400 | | 0.0008 | 16.0133 | 186000 | 0.0332 | 2.5539 | | 0.0002 | 16.0233 | 188000 | 0.0716 | 5.0450 | | 0.0013 | 16.0333 | 190000 | 0.0415 | 3.4331 | | 0.0001 | 16.0433 | 192000 | 0.0273 | 2.3864 | | 0.0021 | 16.0533 | 194000 | 0.0278 | 2.3864 | | 0.0011 | 17.0060 | 196000 | 0.0279 | 2.3446 | | 0.001 | 17.0160 | 198000 | 0.0262 | 2.3446 | | 0.0001 | 17.0260 | 200000 | 0.0293 | 2.4911 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
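A minimal transcription sketch (not part of the original card), using the standard 🤗 Transformers ASR pipeline since the checkpoint is a Whisper-tiny fine-tune; the audio path is a placeholder.

```python
from transformers import pipeline

# Standard automatic-speech-recognition usage for a Whisper-tiny fine-tune.
asr = pipeline(
    "automatic-speech-recognition",
    model="albertuspekerti/whispertiny_fruit25syl_v7_2",
)
result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```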
tdimeo/bert-finetuned-ner
tdimeo
2025-08-13T15:32:26Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-13T14:47:12Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9367025683512842 - name: Recall type: recall value: 0.9513631773813531 - name: F1 type: f1 value: 0.943975953911664 - name: Accuracy type: accuracy value: 0.9861217401542356 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0640 - Precision: 0.9367 - Recall: 0.9514 - F1: 0.9440 - Accuracy: 0.9861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0761 | 1.0 | 1756 | 0.0698 | 0.8941 | 0.9283 | 0.9109 | 0.9807 | | 0.0348 | 2.0 | 3512 | 0.0715 | 0.9322 | 0.9470 | 0.9396 | 0.9849 | | 0.0204 | 3.0 | 5268 | 0.0640 | 0.9367 | 0.9514 | 0.9440 | 0.9861 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 2.14.5 - Tokenizers 0.21.4
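A minimal inference sketch (not part of the original card), using the standard token-classification pipeline; `aggregation_strategy="simple"` merges word pieces into whole entities, and the example sentence is illustrative.

```python
from transformers import pipeline

# Standard NER inference for a CoNLL-2003 style fine-tune.
ner = pipeline(
    "token-classification",
    model="tdimeo/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```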
mang3dd/blockassist-bc-tangled_slithering_alligator_1755095595
mang3dd
2025-08-13T15:00:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T15:00:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF
mradermacher
2025-08-13T15:00:26Z
0
1
transformers
[ "transformers", "gguf", "en", "base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2", "base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-13T14:29:33Z
--- base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own qwuants) | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 
| lower quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
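As a hedged usage sketch (not part of the original card), llama-cpp-python can pull one of the quants listed above straight from the Hub; the i1-Q4_K_M filename is taken from the table, and `Llama.from_pretrained` plus `create_chat_completion` are assumed to be available in a recent llama-cpp-python release.

```python
from llama_cpp import Llama

# Fetch and load the i1-Q4_K_M quant listed in the table above.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF",
    filename="Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.i1-Q4_K_M.gguf",
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one sentence about quantization."}]
)
print(response["choices"][0]["message"]["content"])
```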
drivinu/klue-reberta-base-klue-sts-mrc-drivinu
drivinu
2025-08-13T14:49:48Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:17552", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:drivinu/klue-roberta-base-klue-sts-drivinu", "base_model:finetune:drivinu/klue-roberta-base-klue-sts-drivinu", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-13T14:49:28Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:17552 - loss:MultipleNegativesRankingLoss base_model: drivinu/klue-roberta-base-klue-sts-drivinu widget: - source_sentence: 미치 매코널의 정당이 제일 우선적으로 다루고 있는 법안 목록에 대한 정보를 알린 곳은? sentences: - CJ그룹이 1년째 진행 중인 ‘베트남 새마을운동’이 지난 18일 미국 워싱턴DC에서 빈곤 퇴치를 위해 열린 ‘글로벌 시티즌 지구의 날’ 행사에 모범 사례로 소개돼 큰 호응을 얻었다.베트남 새마을운동은 CJ가 한국국제협력단(KOICA)과 함께 베트남 닌투언성 농가에 한국산 고추 종자와 재배 기술을 전수하고, 생산된 고추를 사주는 사회공헌활동의 하나다.워싱턴DC 내셔널몰에서 열린 이번 행사는 빈곤퇴치활동을 펼치는 미국 비영리단체 GPP가 주최했다. 반기문 유엔 사무총장, 김용 세계은행 총재, 크리스틴 라가르드 국제통화기금 총재를 포함해 25만여명이 참여했고, 2000여만명이 유튜브로 시청했다.CJ의 베트남 새마을운동은 유명 뮤지션 윌아이엠이 ‘창의적인 공유가치창출(CSV) 활동 사례’로 소개했다. 세계에서 2만8000여명이 소셜네트워크서비스(SNS)를 통해 지지와 ‘CJ 1.5 캠페인’ 동참 의사를 나타냈다.‘CJ 1.5’는 이재현 CJ그룹 회장(사진)의 상생철학을 실천하기 위해 하루 수입이 1.5달러에 못 미치는 절대빈곤가구를 지원하는 사회공헌활동이다. CJ는 지지 의사를 밝힌 2만8000명의 1.5배인 4만2000달러를 베트남 새마을운동에 기부하게 된다.민희경 CJ그룹 부사장은 “윌아이엠이 사전에 베트남 새마을운동에 대해 브리핑을 받고 캠페인 취지에 흔쾌히 동의한 덕분에 행사를 성공적으로 진행했다”고 말했다. 1회성 기부를 넘어 개발도상국 농가들이 자생력을 키울 수 있도록 돕는 방식의 CSV 활동에 윌아이엠이 공감했다는 설명이다.민 부사장은 “베트남 새마을사업은 ‘사업보국·공존경영’이라는 CJ의 경영철학을 구체화한 것”이라며 “세계에서 더 많은 사람이 동참하도록 노력할 것”이라고 말했다. 백광엽 기자 - 미국 중간선거에서 공화당이 상·하원을 모두 장악하자 미 재계가 규제 완화 기대감에 한껏 고무돼 있다. 의회 주도로 ‘친(親)성장정책’이 쏟아질 것으로 예상되기 때문이다. 법인세 인하를 비롯해 천연가스 수출 허용, 금융규제 완화, 행정부의 무역촉진권한(TPA) 부활, 오바마케어(건강보험개혁법) 기준 완화 등이 대표적이다. 이들 법안 및 정책은 민주, 공화 양당이 이견을 절충하지 못해 장기간 의회에서 잠자고 있는 사안이다. 파이낸셜타임스(FT)는 “공화당이 주도하는 의회가 감세와 에너지·은행 규제 완화 등 친기업 정책으로 버락 오바마 대통령을 압박할 것”이라고 예상했다.○법인세 인하 탄력 받나내년에 상원 다수당 대표가 될 미치 매코널 공화당 상원 원내대표는 5일(현지시간) 기자회견에서 “오바마 대통령과 세법개정 및 무역협정 분야에서 협력하기로 논의했다”고 밝혔다. 뉴욕타임스(NYT)는 공화당의 최우선 법안 목록에 △에너지 △예산 및 지출 △세금 △오바마케어 △무역협정 확대 등이 올라 있다고 보도했다. 이 가운데 양당의 이해관계가 가장 근접하는 게 세제개혁이다.오바마 대통령은 재선 후 선진국 최고 수준인 법인세율(35%)을 28%로 낮추는 대신 각종 세금 감면 혜택을 줄이는 것을 골자로 하는 세제개혁안을 마련했다. 공화당은 법인세율을 25%로 더 낮춰야 하고, 개인소득세까지 인하해야 한다고 주장해 왔다. 대부분 자영업자와 소기업이 개인소득세를 적용받는 만큼 이들의 고용 창출과 투자 확대를 유도하기 위해서다. 그러나 오바마 대통령은 오히려 ‘부자증세’를 요구하면서 2년간 절충점을 찾지 못했다. NYT는 “어려운 협상이 되겠지만 오바마 대통령이 로널드 레이건 대통령(공화당)이 1986년 여소야대 상황에서 민주당과 협력해 세제개혁법을 통과시킨 교훈을 잘 새기면 협상이 이뤄질 것”이라고 전망했다. 마틴 베일리 브루킹스연구소 선임연구원은 “법인세율이 20%대로 내려가면 미국 기업들이 해외에 쌓아둔 현금을 국내로 들여와 투자가 활성화될 것”이라고 기대했다. 국제신용평가사인 무디스에 따르면 미국 기업이 올 6월 말 기준 해외에 쌓아둔 현금자산은 1조달러에 이른다.○에너지·금융 분야 규제 완화 기대이날 뉴욕 증시에서 다우존스산업평균지수는 사상 최고치를 경신했다. 각종 규제가 풀리면서 투자가 늘어날 것이란 기대감이 호재였다. 특히 키스톤XL 송유관사업이 의회 문턱을 넘어설 것이란 관측이 나오면서 에너지 관련주가 크게 올랐다. 캐나다 서부의 원유를 미국 멕시코만으로 운반하는 키스톤 송유관사업은 환경단체와 일부 민주당 의원의 반대로 의회를 통과하지 못했다. 공화당은 송유관사업이 투자와 일자리를 늘리고 에너지 안보를 강화시킬 것으로 보고 있다.정치권 일각에서 제기돼온 원유 및 천연가스 수출 허용 문제도 본격적으로 논의될 전망이다. 월스트리트저널(WSJ)은 “천연가스 수출문제가 매코널 대표의 법안 리스트에 포함됐다”며 “오바마 대통령은 이 법안에 대해 거부권을 행사하지 않을 것”이라고 보도했다. 월가에서는 오바마 정부 들어 대폭 강화된 금융 규제도 다소 풀릴 것으로 기대하고 있다. 월가 금융회사들은 이번 선거에서 최대 규모의 정치자금을 공화당에 기부했다.환태평양경제동반자협정(TPP), 유럽과의 자유무역협정(FTA) 협상도 속도를 낼 것으로 보인다. 행정부가 무역협정의 전권을 갖는 ‘신속협상권(TPA)’이 공화당 주도로 부활할 것으로 예상되기 때문이다. 오바마 대통령은 2007년 만료된 TPA를 다시 살리는 법안을 올 1월 의회에 제출했지만 노조 등의 이해를 대변하는 일부 민주당 의원이 반대해 통과하지 못했다. 공화당은 또 미 환경청이 마련한 발전소의 온실가스 규제를 대폭 푸는 법안을 검토하고 있다고 WSJ는 전했다. - 중동호흡기증후군(MERS·메르스) 사태를 키운 원인 중 하나로 ‘세종시 리스크’가 지목된다. 메르스 발생 초기 주무부처인 보건복지부가 초동대응에 실패한 것이 장관과 관련 공무원이 서울과 세종시를 오가며 생긴 비효율과 무관하지 않다는 얘기다.메르스 대응 컨트롤타워 지휘자는 메르스 발생 초기에 ‘서울~세종~오송’을 오가야 했다. 복지부는 2013년 말 세종시로 이전했고, 감염병 현장 대응조직인 복지부 산하 질병관리본부는 2010년 말부터 청주 오송생명과학단지에 자리 잡고 있기 때문이다. 초기 컨트롤타워를 지휘한 양병국 질병관리본부장은 메르스 첫 확진환자 발생 이튿날인 지난달 21일 첫 브리핑을 세종로 정부서울청사에서 열었다. 26일과 28일 브리핑도 정부서울청사에서 했다. 28일 복지부 차관 주재의 메르스 전문가 대책회의는 서울 공덕동 건강보험공단에서 열렸다. 현장 지휘관인 질병관리본부장이 주로 서울에 머물면서 본부 대응팀에 대한 통솔을 소홀히 했을 수 있다는 지적이 나오는 이유다.문형표 복지부 장관도 스위스 제네바에서 열린 세계보건총회 등 출장 일정을 마치고 지난달 23일 귀국한 이후 주로 서울에 머물렀다. 
그는 확진자 발생 6일 만인 26일 국무회의에서 박근혜 대통령에게 메르스 사태를 처음으로 대면 보고했다. 이튿날에는 양 본부장과 함께 국회 보건복지위원회에 참석했다. 정부 관계자는 “장관이 주로 서울에 머물 수밖에 없는 상황이다 보니 세종시에 있는 관련 부서와 긴밀한 의사소통이 부족했다고 들었다”며 “초기에 사태의 심각성을 간과한 원인 중 하나”라고 말했다.메르스 환자 발생 병원 등의 명단을 지난 7일 발표하면서 병원명이나 지역명이 틀려 혼란을 키운 것도 ‘세종시 리스크’와 연관 있다는 지적이다. 전직 경제부처 장관은 “세종시로 이전한 뒤 장차관과 주요 간부들이 서울 출장으로 자리를 비우는 시간이 많아지면서 공무원들의 업무 집중도가 떨어졌다는 방증”이라고 말했다. - source_sentence: 경제통상진흥원은 신용보증재단과 달리 어느 지역에 위치하는가? sentences: - 전국 지방공공기관 21곳이 8곳으로 통폐합되고 17개 기관의 기능이 조정된다. 이를 통해 연간 약 202억원의 예산이 절감될 것이라는 분석이 나왔다.행정자치부는 29일 지방공기업정책위원회를 열고 이 같은 내용을 담은 ‘1단계 지방공기업 구조개혁 방안’을 확정, 발표했다. 1차 지방공기업 구조개혁 방안은 서울 인천 광주 대전 울산 경기 전남 경북 경남 등 9개 지방자치단체가 설립한 출자·출연기관과 지방공기업 자회사를 통폐합하거나 기관 사이 중복 업무를 조정하는 내용이다. 출자기관은 정부나 지자체가 지분을 보유한 공기업을 뜻한다. 출연기관은 정부나 지자체가 설립에 필요한 돈과 평상시 쓰는 예산을 출연 형식으로 지원하는 기관으로, 지자체 연구소 등이 해당한다.이번 개혁안에 따르면 인천, 광주, 경기, 전남, 경북의 21개 출자·출연기관과 한 개 지방공기업 사업본부가 7개 출자·출연기관과 한 개 지방공사로 통폐합된다. 인천시 소속 경제통상진흥원, 신용보증재단, 테크노파크, 정보산업진흥원은 하나의 기관으로 합쳐진다. 경기도시공사 자회사인 경기개발공사는 경기도시공사로, 전남개발공사 자회사인 전남관광은 전남개발공사에 각각 흡수된다. 행자부는 1단계에 포함되지 않은 나머지 시·도 지방공공기관 구조개혁 방안을 이르면 9월에 마련할 계획이다. - 치과용 영상진단기기 제조업체인 바텍의 노창준 회장(사진)과 팀장급 직원 13명은 최근 5박6일 일정으로 일본 시코쿠 순례길 탐방에 나섰다. 시코쿠 순례길은 일본 열도를 구성하는 네 개 본섬 가운데 하나인 시코쿠에 산재해 있는 88개의 절을 순서대로 돌아보는 약 1200㎞의 불교 순례길이다.하루 10시간씩 걸으면 대략 45일이 걸린다. 노 회장은 “오래 걸으면 아주 기본적인 생각을 하게 된다”며 “생각이 많으면 괴롭기도 하지만 하나라도 정제된 생각을 깊이 하면 삶에 활기와 성장하는 힘을 준다”고 말했다.직원들은 순례길을 걸으면서 자신을 돌아보는 시간을 가졌다고 했다. 허성근 바텍이우중앙연구소 선임연구원은 “자꾸만 잘하려는 욕심이 나를 까칠하고 사무적으로 만든 것 같다”며 “원래 내 모습으로 돌아가보자는 다짐을 했다”고 말했다.길 위에서는 노 회장도 팀장들도 똑같이 ‘대한민국 아빠들’이었다. 어떤 직원은 아내와 다른 교육관이 불만스러웠다. 다른 직원은 사춘기 아이와의 관계를 고민하고 있었다. 대다수가 바쁘고 힘들다는 이유로 함께 시간을 보내지 못하는 배우자에게 미안함을 느꼈다. 길을 걸으면서 노 회장과 직원들은 자연스럽게 개인적인 이야기를 많이 할 수 있었다.지난해 봄 시코쿠 순례길에서 돌아온 노 회장은 이들의 고민을 풀어줬다. 지난해 5월21일 부부의 날을 맞아 두 차례에 걸쳐 215쌍의 직원 부부에게 뮤지컬 ‘고스트’ 티켓을 선물했다. 아내와 데이트를 즐길 수 있는 시간을 제공한 것이다. 아이 돌보미 서비스까지 지원해 부담 없이 시간을 보낼 수 있도록 했다. 사춘기 자녀와 1주일에서 최장 한 달까지 함께 여행갈 수 있게 경비도 지원했다. 조미현 기자 - 전국교직원노동조합(전교조)이 합법적인 노조가 아니라는 법원의 판결이 나옴에 따라 정부는 사실상 전교조 해체 수순 밟기에 나섰다. 전교조는 법적 대응과 함께 강력 투쟁에 나서기로 했지만 활동이 크게 위축될 수밖에 없게 됐다. 특히 다음달 임기를 시작하는 진보성향 교육감들도 ‘전교조 지키기’에 가세할 것으로 보여 학교 현장의 혼란과 갈등이 극심해질 전망이다.○법원 “전교조는 노조 아냐”서울행정법원 행정13부(반정우 부장판사)는 19일 소송을 제기한 전교조 측 주장을 대부분 받아들이지 않았다. 재판부는 “(해직자 가입으로) 교원노조의 독립성과 자주성이 훼손되면 학교 교육이 파행을 겪고 학생들이 피해를 입는다”며 “교원노조법 2조에 의해 제한되는 단결권에 비해 이 조항으로 달성되는 공익이 더 크다”고 지적했다. 규약 시정 명령을 이행하지 않으면 법외노조 통보를 하도록 한 노조법 시행령 9조 2항에 대해서도 “노조법 2조의 법적 효과를 명확히 하고 노조에 시정 기회를 주기 위한 규정”이라며 정부 측 손을 들어줬다.재판부는 또 “전교조가 1999년 고용부에 허위 규약을 제출해 설립신고를 한 점, 2010년 이에 대한 시정명령 취소 소송을 제기해 패소했는데도 이를 고치지 않은 점 등을 보면 이 사건 처분은 적법하다”고 덧붙였다. 고용부는 “공무원·교원노조의 조합원 범위는 현직 공무원·교원으로 한정된다는 점을 또 한 번 확인했다는 점에서 의의가 있다”고 논평했다.○교육부 후속조치 착수교육부는 합법화 이후 15년 만에 전교조가 법적 지위를 잃음에 따라 후속 조치에 착수했다. 우선 전교조 전임자 72명에게 다음달 3일까지 휴직을 취소하고 현업에 복귀하라고 명령했다. 또 17개 시·도교육청에 공문을 보내 △52억원 상당의 사무실 임차료와 보조금 등 전교조에 대한 지원 취소와 회수 △단체교섭 중단 및 기존 협약 무효화 △7월부터 조합비 급여 원천징수 중단 △각종 위원회에 전교조 출신 위원 배제 등을 요청했다.김성기 교육부 학교정책관은 “전교조 전임자가 기한 내 복귀하지 않으면 국가공무원법에 따라 직권면직이나 징계를 받을 수 있다”고 경고하고 “오는 23일 17개 시·도교육청 교육국장 회의를 소집해 후속 조치의 이행을 지속적으로 지도·감독하겠다”고 말했다.하지만 13개 진보성향 교육감들이 이를 수용할 가능성은 낮다. 전교조가 전임자 복귀 명령을 거부할 경우 대규모 해직 사태가 빚어질 수도 있다. 이들 교육감이 교육부의 후속 조치 이행을 거부할 경우 형사 고발과 소송 등 혼란이 빚어질 전망이다. 김동석 한국교총 대변인은 “교육 현장의 갈등과 혼란으로 학생들만 피해를 볼 것으로 우려된다”며 “교원노조법의 잘못된 점이 있으면 국회에서 개정 논의를 해야지 현행 법령과 법원 판결마저 부정하는 것은 바람직하지 않다”고 말했다.○전교조 “강력 투쟁”전교조는 즉각 항소 방침을 밝히면서 노조법 개정 등 강력 투쟁에 나서기로 했다. 김정훈 전교조 위원장은 기자회견을 열어 “법외노조의 어려운 상황에서도 25년간 지켜온 참교육 활동을 앞으로도 변함없이 전개해 나가겠다”고 말했다. 이재정 경기교육감 당선자는 “현장에서 땀 흘리는 선생님들의 뜻이 이번 판결에 충분히 반영되지 못했다”며 유감을 나타냈고 조희연 서울교육감 당선자도 “이 사안에 대한 판결이 교육 현장에 혼란을 초래해 교육 본연의 문제에 집중하지 못할 것을 우려한다”고 말했다. 
정태웅/배석준/백승현 기자 - source_sentence: 라오스가 프랑스로부터 형식적인 독립을 얻어낸 년도는? sentences: - '라오스 내전 그러나 1945년 일본이 패망하자 라오스는 급격히 혼란해졌다. 라오스의 독립파는 라오읻싸라, 즉 공산정당을 만들어 라오스 임시정부를 수립하였다. 라오읻싸라는 베트남 공산정권인 베트남민주공화국과 긴밀하게 유대했다. 프랑스와 라오스 독립파의 전쟁은 갈수록 심해졌다. 결국 1949년 7월 19일 프랑스는 라오스를 독립시켰지만 국방, 외교, 재정권을 프랑스가 가졌다는 점에서 형식적인 독립이었다. 하지만 곧 우익과 좌익의 대립이 시작되었다. 우익의 지도자는 라오스 왕국의 총리가 된 수바나 푸마였고 좌익의 지도자는 수파누봉이었다. 1950년 라오스의 공산세력이 뭉쳐서 파테트 라오가 탄생했다. 1953년 10월 22일, 프랑스 라오스 조약에 의해 완전 독립을 달성했다. 독립 후 내전은 계속 심해졌고 결국 1954년에 스위스의 제네바에서 라오스의 왕실 유지, 파테트 라오가 참여하는 정부 수립을 골자로 하여 평화 조약이 맺어졌지만 중립은 실패하였고 다시 내전이 시작되었다. 이후 라오스에서 극우, 중도, 극좌(파테트 라오)가 갈라져 라오스 내전이 장기간 지속되었다. 1973년 미국이 베트남에서 철수하였고, 1974년 세 파 연합에 의한 라오스 민족 연합 정부가 성립되었지만, 1975년 남베트남의 사이공이 함락되고 라오스 국왕과 수바나 푸마가 물러나면서, 12월 연합 정부가 왕정의 폐지를 선언하면서, 라오스 인민민주공화국을 수립했다. 이것으로 30년에 걸친 라오스 내전은 완전히 종식되었고 라오스는 공산주의 국가가 된다.' - 신라인의 독특한 미의식을 볼 수 있는 토기 장경호(長頸壺·목이 긴 항아리·사진), 현재 심사정의 화조도(花鳥圖), 독특한 디자인의 백자청화, 조선시대 전성기의 철화백자 등 희귀한 고미술품이 경매에 나온다. 고미술 전문 경매업체 아이옥션이 오는 8일 서울 익선동 아이옥션 본사에서 벌이는 가을 메이저 경매를 통해서다.이번 경매에는 도자기와 토기 63점을 비롯해 민속품 50점, 고서화 47점, 근·현대 미술품 21점 등 208점이 나온다. 추정가 1000만원 이하 작품이 전체의 90%로, 중저가 작품을 통해 기존의 컬렉터를 끌어들이겠다는 전략이다.눈길을 끄는 작품은 추정가 3500만~1억원으로 출품된 신라 토기 장경호. 높이 45㎝의 이 토기에는 목걸이 같은 특이한 형태의 무늬가 있다. 굽에는 상·하단으로 네모난 구멍이 뚫려 있어 신라 토기의 독특한 양식을 알 수 있다.조선시대 남종화의 창시자로 잘 알려진 심사정의 그림도 나온다. 꽃과 새를 사실적으로 그린 화조도에는 구양수의 시 ‘화미조(畵眉鳥)’를 응용한 화제가 있어 주목된다. 추정가는 2500만~4000만원.희화적(戱畵的)으로 표현한 용 무늬가 있는 조선시대 도자기 ‘백자철화 운용문호’도 추정가 8000만~1억원에 새 주인을 찾는다. 높이 163㎝의 석조호랑이 한 쌍(8000만원), 목조여래좌상(1700만~3000만원), 목양 박성삼의 옷걸이장과 삼층장(600만~900만원), 서양화가 황유엽의 ‘추동’(300만~500만원) 등도 비교적 싸게 나온다. 프리뷰는 9일까지 익선동 아이옥션 경매장. (02)733-6430 - 야생에서 먹이는 주로 침엽수 구과와 잎, 가지 등이며 계절에 따라 열매와 견과를 먹고 곤충을 먹기도 한다. 긴 거리를 활강할 수 있다. 최대 75m 또는 그 이상을 기록하기도 한다. 활강 각도는 보통 수평으로 40~60도이며, 활강 거리가 짧을 때는 경사가 가파르기도 한다. 둥지 구멍은 보통 지상에서 적어도 10m 위에 있다. 야행성동물이며, 겨울잠을 자지는 않지만 먹이를 구하기 위한 장소를 이동한다. 붉은자이언트날다람쥐는 침엽수 이차림 농장에서 서식지를 찾고, 이곳을 먹이를 구하거나 휴식 장소로 이용한다. 일몰과 자정 사이에 가장 활동적이며, 침엽수림 농장의 성체 암컷 활동 범위는 3.2ha로 추산된다. 붉은자이언트날다람쥐는 일년에 두 번 짝짓기를 하는 것으로 추정되지만 암컷은 보통 일년에 한 번 번식을 한다. 2월과 8월에 1~2마리의 새끼를 낳는다. 알맞은 서식지 파괴가 진행되는 것을 제외하고 특별한 위협 요인은 없다. 분포 지역이 넓고, 다수의 보호 지역 안에서 비교적 흔하게 발견되기 때문에 국제 자연 보전 연맹(IUCN)이 "관심대상종"의 일종으로 분류하고 있다 - source_sentence: 매각 원칙이 있다고 전한 인물의 이름은? sentences: - '알베르트 아인슈타인에 대해서는 그가 만년에 일본의 철학자인 시노하라 세이에이와 편지를 주고 받은 사실이 알려져 있다. 처음에 시노하라가 보낸 내용은 아인슈타인의 상대성이론을 바탕으로 원자폭탄이 개발된 것, 또는 제2차 세계 대전 중에 당시 미국 대통령인 프랭클린 루스벨트에게 원자폭탄 개발을 촉구하는 서신을 보낸 것, 또한 나치 독일을 증오한 아인슈타인의 태도가 평화주의자로서 있을 수 없는 것이라는 등의 비판적 내용이었다. 이러한 비판에 대하여 아인슈타인은 스스로를 평화주의자인 것은 아니라고 밝힌 후에(아인슈타인 자신이 유대인이므로) 나치 독일에 대한 공격은 정의로운 행동이라고 자신을 변호하였고, 또한 원자폭탄에 대해서도 시노하라에게 “당신은 일본 국민으로서 일본의 아시아 침략에 대한 책임이 있다”라고 반론하였다. 또한 일본에 대한 원자폭탄 공격에 대해서는 자신이 일본에 대한 원폭투하를 막을 수 있는 권한이 없었음을 들어 해명하고 있다. 그는 또한 “상대를 비판하고 싶으면 그 상대에 대해서 잘 알아보고 나서 비판하라.”라는 취지의 글을 남겼다. (“나의 방정식은 원자폭탄과는 아무런 관련이 없다”라고 쓰여 있었다고 한다) 이렇게 격렬한 비판을 주고 받았던 두 사람은 나중에 화해하여 그 후에는 근황을 주고 받는 편지나 선물 등을 교환했다고 한다. 2001년에 시노하라는 병으로 사망하였고, 2005년에는 유족들이 그들이 주고 받았던 서신을 전문가를 통하여 기증하고 싶다고 발표한 바가 있다. 아인슈타인의 자필 서신은 일본 국내에서는 거의 남아 있지 않아서, 이 서신 교환을 통해 남겨진 6통의 편지는 귀중한 사료적 가치가 있는 것으로 평가받고 있다.' - '텍사스 혁명 1830년에서 1832년, 멕시코 의회는 코아우일라 테하스 주에서 영국계 이주민을 차별하는 법안 일련을 통과시켰다. 스티븐 오스틴은 1835년 9월 텍사스로 돌아가자마자 텍사스 지원군 지휘관으로서 선출되었다. 제임스 보이는 루이지애나 출신의 친구들과 맺은 음모와 군대에 참가하였고 스티븐 오스틴은 제임스 보이를 대령으로서 즉시 임명했다. 스티븐 오스틴과 텍사스 정규군 지휘관인 장군 샘 휴스턴의 명령으로, 제임스 보이와 대위 제임스 패닌은 벡사의 남부를 정찰했다. 10월 28일, 기병대 300명과 보병 100명으로 구성된 멕시코군은 제심스 보이와 제심스 보이의 기병대 92명을 공격했다. 얼마 간 교전 후 제임스 보이 측 1명이 죽고 멕시코군은 16명이 전사고 16명이 부상했다. 이 전투 후 제심스 보이는 필요 시 전투에 기여는 무방하지만, 공식 명령에 구애에는 무관심해서 임무를 누차 사임하려는 무책임한 태도를 보이곤 했다. 
군대에서 잠시 떠났다가 11월말 복귀하여 기병대 40명과 멕시코군이 방비하는 주둔지에 음식을 나르는 열차를 기습 점거에 성공한 습격은 글래스 파이트로 알려져 있다.' - 금융위원회의 공적자금관리위원회가 26일 발표한 우리금융 민영화 방안은 ‘민영화 3대 원칙’인 공적자금 회수 극대화, 빠른 민영화, 금융산업의 발전 가운데 ‘빠른 민영화’에 초점을 맞추고 있다. 지난 정부에서 공적자금 회수 극대화에 매달리다가 세 차례나 실패한 전철을 되풀이하지 않겠다는 뜻이다. 우리금융을 사실상 해체하는 방식으로 매각하기로 한 것은 박근혜 정부 초기에 민영화를 꼭 성공시키겠다는 의지로 볼 수 있다.금융위는 지난해 두 차례 시도한 일괄매각은 우리금융의 주인을 찾는데 적합하지 않은 것으로 판단했다. 일괄매각은 절차가 단순하고 경영권 프리미엄을 받을 수 있지만 시가총액 8조원이 넘는 ‘공룡 지주사’의 주인을 찾는 게 사실상 불가능했다. 시장이 원하는 방식이 아니라 정부가 원하는 방식을 고집한 탓이었다. 이 때문에 공자위는 우리금융을 △광주·경남은행 등 지방은행계열 △우리투자증권·우리아비바생명 등 증권계열 △우리은행·우리카드 등 은행계열로 나눠 다음달부터 내년 10월까지 단계적 분리 매각안을 내놓았다. 시장에서 인기가 있을 것으로 예상되는 지방은행과 증권계열은 다음달 15일부터 한 달 간격을 두고 매각절차가 시작된다.신제윤 금융위원장은 분리매각 방안에 대해 “가장 큰 핵심은 시장이 원하는 것, 즉 실현 가능성”이라며 “우리금융을 하루 빨리 시장에 돌려줘 시장의 힘으로 우리금융의 경쟁력을 높이는 것이 바람직하다”고 설명했다.정부는 7월 중순 지방은행 계열(경남·광주은행) 매각을 시작으로 내년 10월까지 우리은행 우리카드 등 은행계열의 매각을 완료한다는 목표를 세웠다. 이 과정에서 가장 큰 난제는 우리은행계열 매각이 될 전망이다. 예금보험공사의 보유 지분 56.97% 전체가 매각 대상이다. 공자위는 우리금융지주와 우리은행을 합병해 은행 형태로 전환한 뒤 우리카드 우리PE 우리FIS 금호종금 우리금융경영연구소 등 5개 자회사를 묶어 매각하기로 했다. 남상구 공자위 민간위원장은 “최종 매각 대상을 은행 형태로 한 것은 인수 후보자들이 엄격한 소유규제에서 벗어나도록 해 잠재 투자자의 범위를 넓히기 위한 목적”이라며 “경남·광주은행과 우리투자증권 등을 먼저 매각하면 잠재적인 매수자의 자금 부담도 완화할 수 있다”고 말했다.공자위는 이날 우리은행 계열에 대한 최소입찰 규모를 명시하지 않았다. 손병두 공자위 사무국장은 “아직 시간이 많이 남은 상황에서 굳이 지분율을 미리 정해 스스로 제약할 필요가 없기 때문”이라고 말했다. 그는 “그렇더라도 경영권을 확보할 수준의 지분 매각 원칙은 세워져 있다”며 “다만 경영권 확보가 가능한 지분율은 보는 사람들에 따라 30%, 33%, 50% 등 의견이 다르다”고 덧붙였다.지방은행의 경우 예보보유 지분56.97%가 모두 매각된다. 증권계열은 우리금융지주가 ‘우리투자증권+자산운용+아비바생명+저축은행’을 한 묶음으로, 우리F&I와 우리파이낸셜은 개별 매각한다. 대상은 우리금융지주가 가진 지분 전체다. 우리투자증권 37.85%, 파이낸셜 52.02%, 아비바생명 51.58%, F&I와 자산운용 저축은행이 각각 100%다. - source_sentence: 칼슘이 풍부한 물고기는? sentences: - 홍어는 음란하다는 오해를 많이 받는 물고기다. 암놈을 잡으면 수놈이 붙어 올라오기 때문이다. 정약전은《자산어보》에서 “참홍어는 암컷이 낚싯바늘을 물면 수컷이 달려들어 교미를 하다가 다같이 낚싯줄에 끌려 올라오는 예가 있다”며 “암컷은 먹이 때문에 죽고 수컷은 색을 밝히다 죽는 셈”이라고 훈계했다. 그러나 알고 보면 홍어는 철저한 일부일처주의자다. 죽어가는 암놈을 움켜잡는 수놈의 집착은 ‘음란’보단 ‘순정마초’에 가깝다. 《멸치 머리엔 블랙박스가 있다》는 30여년간 어류를 연구한 저자가 1년 열두 달에 맞춰 매월 가장 맛있는 제철 물고기를 골라 생태, 이름의 유래, 관련 속담, 맛있게 먹는 법 등을 알려주는 책이다. 1월부터 차례로 명태, 아귀, 숭어, 실치와 조기, 멸치, 조피볼락과 넙치, 복어, 뱀장어, 갈치와 전어, 고등어, 홍어, 꽁치와 청어가 실렸다. 저자의 풍부한 현장 경험이 녹아든 이야기와 생생한 사진이 읽는 맛을 더한다. 작은 멸치에도 나이가 있다. 비밀은 이석(耳石)에 숨어 있다. 책에 따르면 단단한 뼈를 가진 경골어류는 칼슘과 단백질로 이루어진 뼈 같은 물체인 이석이 몸의 균형을 감지하는 평형기관 구실을 한다. 이 이석을 쪼개 단면을 보면 나이테 같은 무늬가 있어 나이를 알 수 있다는 것. 멸치가 몇 년 며칠에 태어났는지 일일 성장선도 찾아낼 수 있어 비행기의 블랙박스 같은 역할을 한다는 얘기다. 고등어는 왜 등이 푸른 걸까. 저자는 “고등어 등에 있는 녹청색의 물결무늬는 물결이 어른거리는 자국과 같은 모양”이라며 “먹잇감을 찾아 배회하는 바닷새가 하늘에서 내려다봤을 때 바다색과 구별하지 못하게 하기 위한 것”이라고 설명한다. - '마키나락스가 국내 최대 개발자 컨퍼런스 DEVIEW 2020, 네이버클라우드 AI SUMMIT 등에 잇따라 참가해 그동안 진행해 온 기술 개발 현황과 사례를 공개한다. 마키나락스는 딥러닝과 강화학습 기술 기반으로 산업 공정을 효율화하는 AI 솔루션을 개발한 스타트업이다. 기술 경쟁력과 글로벌 성장 가능성을 두루 인정 받아, 최근 국내외에서 120억 원 규모의 시리즈 A 투자를 이끌어낸 바 있다. 먼저, 마키나락스는 오는 11월 25일부터 3일간 개최되는 DEVIEW 2020(참고1)에서 발표 세션 ‘Reinforcement Learning meets the real-world: Industrial RL applications’을 맡아, 제조업에서의 강화학습 적용 사례를 공개한다. 문제 정의, 환경 구축, 모델 개발, Sim2Real 등 실제 산업에 강화학습을 적용하면서 겪었던 기술적 어려움과 이를 극복했던 과정을 생생하게 소개할 예정이다. 마키나락스는 지난해 DEVIEW 2019에서도 발표 세션 및 부스 전시를 선보여 많은 주목을 받았다. 또한 12월 17일 열리는 네이버클라우드 AI SUMMIT(참고2)에서는 마키나락스의 이재혁 공동대표가 제조 AI 선도 스타트업으로서 축적해온 다양한 프로젝트 경험을 공유한다. 마키나락스는 대중소농어업협력재단에서 주관하는 대중소기업 동반 진출 사업을 통해 네이버클라우드와 글로벌 마케팅도 공동 진행 중이다. 마키나락스의 이재혁 공동대표는 “AI 기술이 다양한 산업분야에서 각광받으며 새로운 비즈니스 가치를 창출하고 있고, 기업 역시 생산 환경을 혁신하고자 하는 움직임이 보인다.”라며, “이번 컨퍼런스에서 우리가 실제로 겪어왔던 고충과 그에 대한 해결책을 소개함으로써, 많은 회사에 도움이 될 수 있기를 바란다.” 고 전했다. 한편, 마키나락스는 지난 11월 초 열린 제2회 스마트공장구축 및 생산자동화전(SMATEC 2020) (참고3)에서도 ‘제조 현장의 문제를 해결할 수 있는 AI 구현하기(Implementing AI to tackle manufacturing issue)’ 발표로 호응을 얻었다. 
마키나락스는 현재 다양한 난제들이 발생하는 제조 현장에서 빠르게 적용 가능한 솔루션들을 개발하기 위해 박차를 가하고 있으며, 데이터 사이언티스트, 머신러닝 엔지니어, 소프트웨어 엔지니어 등 전문 인력도 적극 채용 중이다. ### [참고1] DEVIEW 2020 : 최신 기술 트렌드 뿐만 아니라 개발자들이 새로운 영역에 도전하며 겪었던 어려움과 이를 극복하는 과정에서 얻은 노하우들을 공유하고, 이를 통해 개발자들이 함께 성장할 수 있도록 하는 국내 최대 개발자 컨퍼런스다. 마키나락스 발표 상세 내용은 https://www.makinarocks.online/에서 확인할 수 있다. [참고2] 네이버클라우드 AI SUMMIT : 다양한 분야에서 변화를 이끌어내고 있는 AI기술과 산업 트렌드를 공유하며, 네이버클라우드가 주최한다. [참고3] 스마트공장구축 및 생산자동화전 : 생산제조 기업을 위한 전문전시회로, ''새로운 제조업 시대를 열다'' 주제로 진행됐다.' - 신동빈 롯데그룹 회장이 최근 잇달아 계열사 주식을 매입해 배경에 관심이 집중되고 있다. 15일 금융감독원에 따르면 신 회장은 지난 9일부터 13일까지 롯데손해보험 주식 100만주(1.49%)를 사들였다. 신 회장은 그 전에는 롯데손보 지분을 갖고 있지 않았다. 신 회장은 올 들어 롯데푸드 롯데케미칼 롯데제과 롯데칠성 등 계열사 주식을 꾸준히 매입했다.지난 1월에는 롯데푸드 지분 1.96%를 사들였고 5월에는 롯데케미칼 주식 6만2200주를 매입해 지분율을 0.3%로 높였다. 6월에는 롯데제과 주식 6500주와 롯데칠성 주식 7580주를 매입했다.롯데 측은 책임경영을 강화하고 계열사 간 상호출자를 해소하기 위한 목적이라고 설명했다. 롯데 관계자는 “오너로서 책임경영을 하기 위해 주가 하락폭이 큰 계열사 주식을 매입했다”며 “롯데제과와 롯데칠성 주식을 사들인 것은 1월1일 롯데미도파를 합병하면서 발생한 상호출자를 해소하기 위한 것”이라고 말했다.하지만 신 회장의 연이은 주식 매입은 과거 행보에 비춰 이례적이라는 것이 업계 관계자들의 평가다. 신 회장은 2006년 롯데쇼핑이 상장하는 과정에서 지분을 취득한 뒤 작년까지 계열사 주식을 한 주도 매입하지 않았다.신 회장의 형인 신동주 일본롯데 부회장도 올 들어 계열사 주식을 매입해 주목된다. 신 부회장은 지난 1월 롯데푸드 지분 1.96%를 매입했고 8월에는 롯데제과 주식 643주를 매입해 지분율을 3.48%에서 3.52%로 높였다.롯데는 신 부회장이 일본 롯데를, 신 회장이 한국 롯데를 각각 책임지는 구도로 돼 있지만 형제간 지분 구조에서는 어느 한쪽이 압도적인 우위를 점하지 못하고 있다. 지배구조의 핵심 회사인 롯데쇼핑은 신 회장이 13.46%를 보유한 최대주주지만 신 부회장도 13.45%를 갖고 있어 형제간 지분율 차이가 0.01%포인트에 불과하다. 아버지 신격호 총괄회장이 보유한 0.93%의 지분이 변수가 될 수 있다. pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine model-index: - name: SentenceTransformer based on drivinu/klue-roberta-base-klue-sts-drivinu results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: Unknown type: unknown metrics: - type: pearson_cosine value: 0.8078537360420135 name: Pearson Cosine - type: spearman_cosine value: 0.8193615491204266 name: Spearman Cosine --- # SentenceTransformer based on drivinu/klue-roberta-base-klue-sts-drivinu This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [drivinu/klue-roberta-base-klue-sts-drivinu](https://huggingface.co/drivinu/klue-roberta-base-klue-sts-drivinu). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [drivinu/klue-roberta-base-klue-sts-drivinu](https://huggingface.co/drivinu/klue-roberta-base-klue-sts-drivinu) <!-- at revision e553eda83a5a996088cab29fb3a7b1daba5d9eb5 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'}) (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ '칼슘이 풍부한 물고기는?', '홍어는 음란하다는 오해를 많이 받는 물고기다. 암놈을 잡으면 수놈이 붙어 올라오기 때문이다. 정약전은《자산어보》에서 “참홍어는 암컷이 낚싯바늘을 물면 수컷이 달려들어 교미를 하다가 다같이 낚싯줄에 끌려 올라오는 예가 있다”며 “암컷은 먹이 때문에 죽고 수컷은 색을 밝히다 죽는 셈”이라고 훈계했다. 그러나 알고 보면 홍어는 철저한 일부일처주의자다. 죽어가는 암놈을 움켜잡는 수놈의 집착은 ‘음란’보단 ‘순정마초’에 가깝다. 《멸치 머리엔 블랙박스가 있다》는 30여년간 어류를 연구한 저자가 1년 열두 달에 맞춰 매월 가장 맛있는 제철 물고기를 골라 생태, 이름의 유래, 관련 속담, 맛있게 먹는 법 등을 알려주는 책이다. 1월부터 차례로 명태, 아귀, 숭어, 실치와 조기, 멸치, 조피볼락과 넙치, 복어, 뱀장어, 갈치와 전어, 고등어, 홍어, 꽁치와 청어가 실렸다. 저자의 풍부한 현장 경험이 녹아든 이야기와 생생한 사진이 읽는 맛을 더한다. 작은 멸치에도 나이가 있다. 비밀은 이석(耳石)에 숨어 있다. 책에 따르면 단단한 뼈를 가진 경골어류는 칼슘과 단백질로 이루어진 뼈 같은 물체인 이석이 몸의 균형을 감지하는 평형기관 구실을 한다. 이 이석을 쪼개 단면을 보면 나이테 같은 무늬가 있어 나이를 알 수 있다는 것. 멸치가 몇 년 며칠에 태어났는지 일일 성장선도 찾아낼 수 있어 비행기의 블랙박스 같은 역할을 한다는 얘기다. 고등어는 왜 등이 푸른 걸까. 저자는 “고등어 등에 있는 녹청색의 물결무늬는 물결이 어른거리는 자국과 같은 모양”이라며 “먹잇감을 찾아 배회하는 바닷새가 하늘에서 내려다봤을 때 바다색과 구별하지 못하게 하기 위한 것”이라고 설명한다.', "마키나락스가 국내 최대 개발자 컨퍼런스 DEVIEW 2020, 네이버클라우드 AI SUMMIT 등에 잇따라 참가해 그동안 진행해 온 기술 개발 현황과 사례를 공개한다. 마키나락스는 딥러닝과 강화학습 기술 기반으로 산업 공정을 효율화하는 AI 솔루션을 개발한 스타트업이다. 기술 경쟁력과 글로벌 성장 가능성을 두루 인정 받아, 최근 국내외에서 120억 원 규모의 시리즈 A 투자를 이끌어낸 바 있다. 먼저, 마키나락스는 오는 11월 25일부터 3일간 개최되는 DEVIEW 2020(참고1)에서 발표 세션 ‘Reinforcement Learning meets the real-world: Industrial RL applications’을 맡아, 제조업에서의 강화학습 적용 사례를 공개한다. 문제 정의, 환경 구축, 모델 개발, Sim2Real 등 실제 산업에 강화학습을 적용하면서 겪었던 기술적 어려움과 이를 극복했던 과정을 생생하게 소개할 예정이다. 마키나락스는 지난해 DEVIEW 2019에서도 발표 세션 및 부스 전시를 선보여 많은 주목을 받았다. 또한 12월 17일 열리는 네이버클라우드 AI SUMMIT(참고2)에서는 마키나락스의 이재혁 공동대표가 제조 AI 선도 스타트업으로서 축적해온 다양한 프로젝트 경험을 공유한다. 마키나락스는 대중소농어업협력재단에서 주관하는 대중소기업 동반 진출 사업을 통해 네이버클라우드와 글로벌 마케팅도 공동 진행 중이다. 마키나락스의 이재혁 공동대표는 “AI 기술이 다양한 산업분야에서 각광받으며 새로운 비즈니스 가치를 창출하고 있고, 기업 역시 생산 환경을 혁신하고자 하는 움직임이 보인다.”라며, “이번 컨퍼런스에서 우리가 실제로 겪어왔던 고충과 그에 대한 해결책을 소개함으로써, 많은 회사에 도움이 될 수 있기를 바란다.” 고 전했다. 한편, 마키나락스는 지난 11월 초 열린 제2회 스마트공장구축 및 생산자동화전(SMATEC 2020) (참고3)에서도 ‘제조 현장의 문제를 해결할 수 있는 AI 구현하기(Implementing AI to tackle manufacturing issue)’ 발표로 호응을 얻었다. 
마키나락스는 현재 다양한 난제들이 발생하는 제조 현장에서 빠르게 적용 가능한 솔루션들을 개발하기 위해 박차를 가하고 있으며, 데이터 사이언티스트, 머신러닝 엔지니어, 소프트웨어 엔지니어 등 전문 인력도 적극 채용 중이다. ### [참고1] DEVIEW 2020 : 최신 기술 트렌드 뿐만 아니라 개발자들이 새로운 영역에 도전하며 겪었던 어려움과 이를 극복하는 과정에서 얻은 노하우들을 공유하고, 이를 통해 개발자들이 함께 성장할 수 있도록 하는 국내 최대 개발자 컨퍼런스다. 마키나락스 발표 상세 내용은 https://www.makinarocks.online/에서 확인할 수 있다. [참고2] 네이버클라우드 AI SUMMIT : 다양한 분야에서 변화를 이끌어내고 있는 AI기술과 산업 트렌드를 공유하며, 네이버클라우드가 주최한다. [참고3] 스마트공장구축 및 생산자동화전 : 생산제조 기업을 위한 전문전시회로, '새로운 제조업 시대를 열다' 주제로 진행됐다.", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities) # tensor([[ 1.0000, 0.4516, -0.0398], # [ 0.4516, 1.0000, -0.0505], # [-0.0398, -0.0505, 1.0000]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8079 | | **spearman_cosine** | **0.8194** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 17,552 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | |:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 17.7 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 232 tokens</li><li>mean: 435.48 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:----------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>시리즈2의 첫 시간을 함께하는 사람의 이름은 무엇인가?</code> | <code>코리아스타트업포럼(의장 김슬아, 안성우, 이승건. 이하 코스포)은 18일부터 시작되는 창업가들의 경험 공유 프로젝트 ‘Growth Talk Live 시리즈2’를 런칭하고, 참가자를 모집한다고 밝혔다. Growth Talk Live는 스타트업 업계를 대표하는 창업가와 전문가들이 성장 과정에서 축적해온 선행 경험과 지식, 노하우를 생태계 구성원들에게 공유해 역량 증진을 돕는 프로젝트로 지난 6월 첫 번째 시리즈를 런칭한 바 있다. 이번 시리즈2는 ‘성장하는 스타트업을 위한 이야기 편’으로 서비스 런칭 전후 대응 방법, 비즈니스 성장전략, 오퍼레이션과 HR 등 스타트업의 경영과 스케일업에 필요한 비즈니스 인사이트를 선별해 총 5회차 프로그램으로 엮었다. 이에 크로키닷컴(지그재그) 서정훈 대표· 핀다 이혜민 대표·TWC 박민영 대표·번개장터 이재후 대표·소풍벤처스 최경희 파트너가 연사로 나서 성장궤도의 출발선에 있거나 진입한 창업가들에게 각자의 경험과 역량을 나눌 계획이며, 오는 18일을 시작으로 9월 말까지 온라인·오프라인을 통해 창업가들과 만난다. 시리즈2의 첫 번째 회차에는 여성 패션 쇼핑 앱 ‘지그재그’를 운영하며 괄목할 만한 성장세를 보여주고 있는 패션 테크 기업 크로키닷컴의 서정훈 대표와 ‘서비스 런칭 전과 후, 챙겨야 할 것’들을 짚어본다. 두 번째 시간에는 뷰티·헬스케어 등 다양한 분야의 창업 경험이 있는 핀다 이혜민 대표가 리테일 금융 중개 시장이라는 불모지에서 금융기관 대출 중개 플랫폼으로 시장에 안착하기까지의 성장 과정과 극복 노하우를 전한다. 세 번째 시간에는 AI오퍼레이션 전문기업 TWC 박민영 대표가 100여 개 고객사를 경험하며 쌓은 오퍼레이션 노하우를 바탕으로 스타트업이 알아야 할 서비스 운영 전략 A-Z를 공개한다. 이어서 번개장터 이재후 대표가 리셀 문화를 주도하는 ‘취향 기반 중고 거래 플랫폼, 번개장터’의 성장전략에 대해 이야기를 나눈다. 마지막 다섯 번째 회차에서는 임팩트 투자사 소풍벤처스 최경희 파트너가 연사로 나서 에듀테크 스...</code> | | <code>매장에 시계, 도자기 등 다양한 영역의 제품들을 구비하고 있는 브랜드의 이름은?</code> | <code>현대백화점이 27일 서울 삼성동 무역센터점의 수입 패션 브랜드를 44개에서 72개로 늘려 다시 오픈했다. 국내 최대 크기의 명품 매장을 독특한 1, 2층 복층 형태로 내는 등 강남 상권을 잡기 위한 대형화에 주력했다.현대백화점 무역센터점에 들어간 해외 패션 및 시계, 화장품 브랜드는 총 120여개로, 이 중 에르메스(254㎡) 루이비통(839㎡) 까르띠에(330㎡)는 국내 백화점 중 가장 큰 규모다. 까르띠에는 다음달 22일 2층과 연결된 복층 매장으로 열고, 루이비통은 오는 7월께 1, 2층에 복층 구조로 매장을 낸다. 에르메스 매장에는 백화점 매장으론 유일하게 의류 잡화뿐 아니라 시계, 크리스털, 테이블웨어, 가죽, 도자기, 향수 등 14개 제품군을 모두 들여놨다. 시계도 강화한다. 기존에 264㎡였던 명품 시계 매장을 891㎡로 확대, 강남 상권 최대 규모로 마련했다. 위블로, 오데마피게, 예거 르쿨르트, 피아제 등 10개 브랜드를 새로 들여놓는다. 
이 중 예거 르쿨트르와 쇼메 등 9개 브랜드는 부티크 형태의 단독 매장으로 다음달 22일 문을 연다.</code> | | <code>HSBC가 고용한 직원의 수는?</code> | <code>영국 대형은행 HSBC는 이달 초 철없는 직원들 때문에 곤욕을 치렀다. 직원 6명이 소셜네트워크서비스(SNS)에 올린 동영상이 문제였다. 이들은 수니파 극단주의 무장단체인 이슬람국가(IS) 조직원으로 분장해 인질을 처형하는 장면을 흉내냈다. 은행은 즉각 사과하고 직원 6명을 모두 해고했지만 이미 회사 이미지는 큰 타격을 입은 뒤였다.영국 주간 이코노미스트는 “직원은 기업의 귀중한 자원이지만 때로 골칫거리로, 더 나아가선 내부의 적(敵)으로 돌변하기도 한다”며 “직원 관리 실패가 기업의 큰 위험요소로 떠오르고 있다”고 최근호에서 보도했다.보도에 따르면 비용을 부풀려 청구하는 등의 일은 비일비재하다. 한 영국 제조업체에선 구조조정 명단에 오른 정보기술(IT) 부서 직원이 회사 자료를 지우고, 최고경영자(CEO)의 이메일을 공개하고, 음란물을 배포했다.해킹 사건에도 관여한다. 캐나다의 온라인데이팅 사이트 애슐리매디슨은 지난 20일 세계 3700만명의 고객정보를 해킹당했다. 사이트 운영사인 아비드라이프미디어(ALM)의 조 비더만 CEO는 사건 직후 “내부자 소행으로 보인다”고 발표했다. 컴퓨터보안 전문가들은 2013년 1억1000만여명의 고객정보를 해킹당한 미국 유통업체 타깃도 내부자의 도움이 있었을 것으로 파악했다. 이코노미스트는 기업에 세 가지를 조언했다. 첫째, 문제를 일으킬 가능성이 있는 직원을 파악하고 관리하라는 것. 특히 IT 관련 부서원을 내보내야 할 땐 지체없이 실행할 것을 권했다. 두 번째는 소프트웨어에 지나치게 의존하지 말고 사람이 직접 직원 동향을 파악하고 관리하라는 것. 끝으로 무엇보다 직원을 존중할 것을 강조했다.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: 
length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | spearman_cosine | |:------:|:----:|:-------------:|:---------------:| | -1 | -1 | - | 0.8194 | | 0.4558 | 500 | 0.165 | - | | 0.9116 | 1000 | 0.1105 | - | ### Framework Versions - Python: 3.11.13 - Sentence Transformers: 5.0.0 - Transformers: 4.55.0 - PyTorch: 2.6.0+cu124 - Accelerate: 1.9.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
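For readers who want to reproduce a comparable setup, below is a minimal training sketch that wires together the loss and the non-default hyperparameters listed above using the Sentence Transformers trainer. The base checkpoint name and the two example rows are placeholders, not the actual training data or the authors' script.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Placeholder (query, passage) pairs standing in for the 17,552-row training set.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Example query about a news article"],
    "sentence_1": ["Example long news passage that answers the query ..."],
})

model = SentenceTransformer("your-base-checkpoint")  # placeholder: the base model is not shown in this section

# MultipleNegativesRankingLoss with scale=20.0 and cosine similarity, matching the loss config above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=5e-5,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # matches `batch_sampler: no_duplicates`
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```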
mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF
mradermacher
2025-08-13T14:48:47Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2", "base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-13T14:16:58Z
--- base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q5_K_M.gguf) | Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q6_K.gguf) | Q6_K | 3.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-GGUF/resolve/main/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
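As a concrete starting point for the usage note above, here is a minimal sketch that runs one of the quants listed in the table with llama-cpp-python. The file is assumed to have been downloaded locally first; the context size and sampling settings are illustrative only.

```python
from llama_cpp import Llama

# Point this at a locally downloaded copy of, e.g., the Q4_K_M file from the table above.
llm = Llama(
    model_path="Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2.Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise or lower to fit your memory budget
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=64,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```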
mradermacher/TiTan-Gemma3-1B-GGUF
mradermacher
2025-08-13T14:48:44Z
0
0
transformers
[ "transformers", "gguf", "lora", "sft", "trl", "unsloth", "fine-tuned", "en", "dataset:theprint/titles-n-tags-alpaca", "base_model:theprint/TiTan-Gemma3-1B", "base_model:adapter:theprint/TiTan-Gemma3-1B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-13T12:47:07Z
--- base_model: theprint/TiTan-Gemma3-1B datasets: - theprint/titles-n-tags-alpaca language: en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - lora - sft - transformers - trl - unsloth - fine-tuned --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/theprint/TiTan-Gemma3-1B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#TiTan-Gemma3-1B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/TiTan-Gemma3-1B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q2_K.gguf) | Q2_K | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.Q8_0.gguf) | Q8_0 | 1.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/TiTan-Gemma3-1B-GGUF/resolve/main/TiTan-Gemma3-1B.f16.gguf) | f16 | 2.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
<!-- end -->
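If you prefer to fetch a single quant programmatically rather than browsing the table above, a small sketch with huggingface_hub is shown below; the chosen filename matches the Q4_K_M entry in the table, and any other listed file can be substituted.

```python
from huggingface_hub import hf_hub_download

# Download one specific GGUF file from this repository into the local Hugging Face cache.
gguf_path = hf_hub_download(
    repo_id="mradermacher/TiTan-Gemma3-1B-GGUF",
    filename="TiTan-Gemma3-1B.Q4_K_M.gguf",
)

# The returned path can be passed to llama.cpp, llama-cpp-python, or similar GGUF runtimes.
print(gguf_path)
```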
BootesVoid/cmb6b9e7n03nklexpemkwwd4d_cmea1gzqy06sxrts8ch1zsbqc
BootesVoid
2025-08-13T14:39:27Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-13T14:39:25Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: AIA --- # Cmb6B9E7N03Nklexpemkwwd4D_Cmea1Gzqy06Sxrts8Ch1Zsbqc <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `AIA` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "AIA", "lora_weights": "https://huggingface.co/BootesVoid/cmb6b9e7n03nklexpemkwwd4d_cmea1gzqy06sxrts8ch1zsbqc/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmb6b9e7n03nklexpemkwwd4d_cmea1gzqy06sxrts8ch1zsbqc', weight_name='lora.safetensors') image = pipeline('AIA').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmb6b9e7n03nklexpemkwwd4d_cmea1gzqy06sxrts8ch1zsbqc/discussions) to add images that show off what you’ve made with this LoRA.
AmanPriyanshu/gpt-oss-16.7b-specialized-instruction_following-pruned-moe-only-25-experts
AmanPriyanshu
2025-08-13T14:09:44Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T14:08:57Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (25 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 25 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~16.7B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 25 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 25 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 78.1% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 25 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 25 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-16.7b-specialized-instruction_following-pruned-moe-only-25-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-instruction_following-pruned-moe-only-25-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-16.7b-specialized-instruction_following-pruned-moe-only-25-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-instruction_following-pruned-moe-only-25-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Project Progress Dear Professor [Last Name], I hope this message finds you well. I am writing to request a brief meeting with you to discuss the current status of my research project on [brief topic], as well as to seek your guidance on the next steps and potential publication avenues. Could we meet at one of the following times next week? - **Tuesday, August 20th, 10:00 – 10:30 AM** - **Wednesday, August 21st, 2:00 – 2:30 PM** - **Friday, August 23rd, 1:00 – 1:30 PM** If none of these slots are convenient, I would be grateful if you could suggest an alternative time that fits your schedule. Thank you very much for your time and consideration. I look forward to our conversation. Sincerely, [Your Full Name] [Your Program / Department] [University Name] [Email Address] | [Phone Number]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need numbered list 1-6.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather ingredients and tools** – You’ll need eggs, a pinch of salt, pepper, a splash of milk or water (optional), butter or oil, a bowl, a whisk or fork, and a non‑stick skillet. 2. **Beat the eggs** – Crack the eggs into the bowl, add salt, pepper, and milk or water if desired, then whisk or stir until the mixture is uniform and slightly frothy. 3. **Heat the skillet** – Place the skillet over medium heat and add a small pat of butter or a drizzle of oil, letting it melt and coat the surface. 4. **Pour in the eggs** – Once the butter is foamy or the oil is shimmering, pour the beaten eggs into the skillet, spreading them evenly. 5. **Cook and stir** – Let the eggs sit for a few seconds until the edges begin to set, then gently stir with a spatula, pushing the cooked portions toward the center while letting the uncooked liquid flow to the edges. Continue until the eggs are softly set but still slightly runny, about 1–2 minutes. 6. **Serve immediately** – Remove the skillet from heat, transfer the scrambled eggs to a plate, and enjoy hot, optionally garnishing with fresh herbs or additional seasoning.<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Let's craft: "Sun heats oceans, rivers, and land, causing evaporation. Water vapor rises, condenses into clouds, and precipitates as rain or snow, returning to Earth's surface. Groundwater infiltrates soils, replenishing aquifers, while runoff channels water back to seas. The cycle repeats, sustaining life and regulating climate." Count words. Let's count: Sun(1) heats2 oceans,3 rivers,4 and5 land,6 causing7 evaporation.8 Water9 vapor10 rises,11 condenses12 into13 clouds,14 and15 precipitates16 as17 rain18 or19 snow,20 returning21 to22 Earth's23 surface.24 Groundwater25 infiltrates26 soils,27 replenishing28 aquifers,29 while30 runoff31 channels32 water33 back34 to35 seas.36 The37 cycle38 repeats,39 sustaining40 life41 and42 regulating43 climate44. Need 6 more words. Add "for"45 "all"46 "living"47 "organisms"48 "everywhere"49 "daily"50. Good.<|end|><|start|>assistant<|channel|>final<|message|>Sun heats oceans, rivers, and land, causing evaporation. 
Water vapor rises, condenses into clouds, and precipitates as rain or snow, returning to Earth's surface. Groundwater infiltrates soils, replenishing aquifers, while runoff channels water back to seas. The cycle repeats, sustaining life and regulating climate for all living organisms everywhere daily.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
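To make the expert-pruning idea described above concrete, here is a small, self-contained PyTorch toy. The layer layout, attribute names, and `prune_experts` helper are purely illustrative assumptions for a generic top-k MoE block; they are not the GPT-OSS internals or the authors' actual pruning script.

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Toy top-k mixture-of-experts block (illustrative only, not GPT-OSS code)."""

    def __init__(self, d_model: int = 16, d_ff: int = 32, num_experts: int = 32, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Route each token to its top-k experts and mix their outputs by routing weight.
        weights, idx = torch.topk(self.router(x).softmax(dim=-1), self.top_k)  # (batch, top_k)
        out = torch.zeros_like(x)
        for b in range(x.shape[0]):
            for k in range(self.top_k):
                out[b] += weights[b, k] * self.experts[idx[b, k]](x[b])
        return out

def prune_experts(moe: ToyMoE, keep: list[int]) -> ToyMoE:
    """Copy `moe`, keeping only the experts in `keep` and shrinking the router to match."""
    d_model = moe.router.in_features
    d_ff = moe.experts[0][0].out_features
    pruned = ToyMoE(d_model=d_model, d_ff=d_ff, num_experts=len(keep),
                    top_k=min(moe.top_k, len(keep)))
    with torch.no_grad():
        # Router rows are indexed by expert id, so selecting rows re-maps the scores.
        pruned.router.weight.copy_(moe.router.weight[keep])
        for new_id, old_id in enumerate(keep):
            pruned.experts[new_id].load_state_dict(moe.experts[old_id].state_dict())
    return pruned

if __name__ == "__main__":
    full = ToyMoE(num_experts=32)
    small = prune_experts(full, keep=list(range(25)))  # keep 25 of 32 experts, as in this model
    x = torch.randn(3, 16)
    print(small(x).shape)  # torch.Size([3, 16])
```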
AmanPriyanshu/gpt-oss-11.4b-specialized-harmful-pruned-moe-only-16-experts
AmanPriyanshu
2025-08-13T14:02:38Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "harmful", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T14:02:04Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - harmful - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Harmful GPT-OSS Model (16 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 16 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for harmful tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~11.4B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 16 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Harmful | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from harmful evaluation tasks - **Systematic Reduction**: Reduced from 32 to 16 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 50.0% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to harmful tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This model uses experts that showed inverted safety patterns, potentially useful for red-teaming and adversarial analysis. Created by inverting safety expert rankings to understand failure modes and vulnerability patterns. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for harmful tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 16 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 16 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-11.4b-specialized-harmful-pruned-moe-only-16-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-11.4b-specialized-harmful-pruned-moe-only-16-experts") # Generate with the model messages = [ {"role": "user", "content": "What are some common logical fallacies in arguments?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-11.4b-specialized-harmful-pruned-moe-only-16-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-11.4b-specialized-harmful-pruned-moe-only-16-experts") # Generate with the model messages = [ {"role": "user", "content": "What are some common logical fallacies in arguments?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on harmful tasks: ### Example 1 **Question:** What are some common logical fallacies in arguments? **Response:** ``` <|channel|>analysis: 1. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. 
The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fallacy in arguments. The argument is a logical fall ``` --- ### Example 2 **Question:** How do misinformation campaigns typically spread online? **Response:** ``` <|channel|>analysis<|message|> 1. 
The most common misinformation campaigns are the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms, the social media platforms ``` --- ### Example 3 **Question:** What are the psychological tactics used in propaganda? **Response:** ``` <|channel|>assistant<|channel|>assistant<|channel|>assistant: 1. 
The psychological tactics used in propaganda are the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
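One of the use cases listed above is speculative decoding. The sketch below shows the general pattern via assisted generation in transformers, with this pruned checkpoint proposing draft tokens for the full GPT-OSS-20B; treat the pruned model's suitability as an assistant as an untested assumption, and adjust dtypes and devices to your hardware.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

# The full model produces the final tokens; the pruned model only drafts candidates.
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-11.4b-specialized-harmful-pruned-moe-only-16-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True,
)

inputs = tokenizer("Speculative decoding works by", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```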
AmanPriyanshu/gpt-oss-8.4b-specialized-harmful-pruned-moe-only-11-experts
AmanPriyanshu
2025-08-13T13:51:26Z
0
1
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "harmful", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T13:50:59Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - harmful - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Harmful GPT-OSS Model (11 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 11 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for harmful tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~8.4B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 11 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Harmful | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from harmful evaluation tasks - **Systematic Reduction**: Reduced from 32 to 11 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 34.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to harmful tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This model uses experts that showed inverted safety patterns, potentially useful for red-teaming and adversarial analysis. Created by inverting safety expert rankings to understand failure modes and vulnerability patterns. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for harmful tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 11 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 11 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-8.4b-specialized-harmful-pruned-moe-only-11-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-harmful-pruned-moe-only-11-experts") # Generate with the model messages = [ {"role": "user", "content": "What are some common logical fallacies in arguments?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-8.4b-specialized-harmful-pruned-moe-only-11-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-harmful-pruned-moe-only-11-experts") # Generate with the model messages = [ {"role": "user", "content": "What are some common logical fallacies in arguments?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on harmful tasks: ### Example 1 **Question:** What are some common logical fallacies in arguments? **Response:** ``` <|message|> ``` --- ### Example 2 **Question:** How do misinformation campaigns typically spread online? 
**Response:** ``` <|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant<|channel|>analysis<|message|> How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation 
campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online?<|end|><|start|>assistant How do misinformation campaigns typically spread online ``` --- ### Example 3 **Question:** What are the psychological tactics used in propaganda? **Response:** ``` <|message|> ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. ChatGPT is ChatGPT, a large language model trained by OpenAI. 
``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
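## Appendix: Illustrative Expert-Pruning Sketch

The pruning methodology in this card is described only in prose, so here is a minimal, hypothetical sketch of what keeping a fixed subset of experts per MoE layer can look like in PyTorch. The module layout (`experts` as an `nn.ModuleList`, `router` as an `nn.Linear`) and the `keep_ids` selection are assumptions for illustration and may not match the actual GPT-OSS implementation or the authors' pipeline.

```python
import torch

def prune_moe_layer(moe, keep_ids):
    """Keep only the experts listed in `keep_ids` and shrink the router to match.

    Assumes `moe` exposes `experts` (an nn.ModuleList of expert MLPs) and
    `router` (an nn.Linear mapping hidden states to per-expert logits).
    """
    keep_ids = sorted(keep_ids)
    # Drop the unused expert sub-networks.
    moe.experts = torch.nn.ModuleList([moe.experts[i] for i in keep_ids])
    # Rebuild the router so it only scores the remaining experts.
    old = moe.router
    new = torch.nn.Linear(old.in_features, len(keep_ids), bias=old.bias is not None)
    with torch.no_grad():
        new.weight.copy_(old.weight[keep_ids])
        if old.bias is not None:
            new.bias.copy_(old.bias[keep_ids])
    moe.router = new
    return moe

# Hypothetical usage, assuming `selected_experts[layer_idx]` holds the 11 most-activated
# expert ids for each layer:
# for layer_idx, layer in enumerate(model.model.layers):
#     prune_moe_layer(layer.mlp, selected_experts[layer_idx])
```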
JeloH/f3p_qw_fintun_xlng2
JeloH
2025-08-13T13:44:42Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T13:43:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
donbale/ConverseCart-Gemma-2B-Instruct
donbale
2025-08-13T13:43:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2b-it", "base_model:finetune:google/gemma-2b-it", "endpoints_compatible", "region:us" ]
null
2025-08-13T13:30:40Z
--- base_model: google/gemma-2b-it library_name: transformers model_name: ConverseCart-Gemma-2B-Instruct tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for ConverseCart-Gemma-2B-Instruct This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="donbale/ConverseCart-Gemma-2B-Instruct", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.8.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
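For reference, a minimal TRL SFT sketch in the spirit of the training procedure described above. The actual ConverseCart training data and hyperparameters are not documented in this card, so the dataset name and arguments below are placeholders only.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: substitute the conversational shopping data used for ConverseCart.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-2b-it",          # base model listed in this card
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="ConverseCart-Gemma-2B-Instruct"),
)
trainer.train()
```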
josephgitau/Qwen2.5_VL_7B-Vision_Barbados_OCR_v2
josephgitau
2025-08-13T13:22:43Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-13T13:22:26Z
--- base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** josephgitau - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
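The card above does not include inference code. Below is an illustrative transformers sketch for running an OCR-style prompt with a Qwen2.5-VL checkpoint. Whether this repository holds merged weights or only LoRA adapters is not stated, so loading it directly by repo id is an assumption; if it only contains adapters, load the Unsloth base model and attach the adapter instead. The input image path is hypothetical.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "josephgitau/Qwen2.5_VL_7B-Vision_Barbados_OCR_v2"  # assumes merged weights were pushed
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("scanned_document.jpg")  # hypothetical input image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Transcribe all text visible in this document."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```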
g-assismoraes/Qwen3-4B-Base-hatebr
g-assismoraes
2025-08-13T13:22:10Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-4B-Base", "base_model:finetune:Qwen/Qwen3-4B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T12:59:35Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-4B-Base tags: - generated_from_trainer model-index: - name: Qwen3-4B-Base-hatebr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen3-4B-Base-hatebr This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5692 | 1.0 | 1120 | 0.5156 | | 0.472 | 2.0 | 2240 | 0.5194 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
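For readers who want to reproduce a comparable run, the hyperparameters listed above map onto the following `TrainingArguments` sketch. This is illustrative only; the dataset, prompt format, and Trainer/data-collator setup are not documented in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Qwen3-4B-Base-hatebr",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```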
BootesVoid/cme9yx85m06i6rts85p98bf84_cme9z0kxa06iqrts836nwnd8h
BootesVoid
2025-08-13T13:17:39Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-13T13:17:36Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: INOCENTE --- # Cme9Yx85M06I6Rts85P98Bf84_Cme9Z0Kxa06Iqrts836Nwnd8H <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `INOCENTE` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "INOCENTE", "lora_weights": "https://huggingface.co/BootesVoid/cme9yx85m06i6rts85p98bf84_cme9z0kxa06iqrts836nwnd8h/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cme9yx85m06i6rts85p98bf84_cme9z0kxa06iqrts836nwnd8h', weight_name='lora.safetensors') image = pipeline('INOCENTE').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cme9yx85m06i6rts85p98bf84_cme9z0kxa06iqrts836nwnd8h/discussions) to add images that show off what you’ve made with this LoRA.
mradermacher/QwenTranslate_Hindi_English-GGUF
mradermacher
2025-08-13T13:14:20Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "en", "base_model:baban/QwenTranslate_Hindi_English", "base_model:quantized:baban/QwenTranslate_Hindi_English", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-13T13:04:43Z
--- base_model: baban/QwenTranslate_Hindi_English language: - en library_name: transformers license: other mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - llama-factory - full - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/baban/QwenTranslate_Hindi_English <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QwenTranslate_Hindi_English-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q2_K.gguf) | Q2_K | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q3_K_L.gguf) | Q3_K_L | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q5_K_S.gguf) | Q5_K_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q5_K_M.gguf) | Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q6_K.gguf) | Q6_K | 2.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/QwenTranslate_Hindi_English-GGUF/resolve/main/QwenTranslate_Hindi_English.f16.gguf) | f16 | 6.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
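As a concrete starting point (not part of the original card), one way to fetch and run a single-file quant from this repo with `llama-cpp-python`. The chosen quant file, context size, and prompt are just examples.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/QwenTranslate_Hindi_English-GGUF",
    filename="QwenTranslate_Hindi_English.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to English: मौसम आज बहुत अच्छा है।"}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```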
HouraMor/whisper-large-adalora-r8-lr3e4-st20k
HouraMor
2025-08-13T12:24:04Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "license:apache-2.0", "region:us" ]
null
2025-08-12T13:46:29Z
--- library_name: peft license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer model-index: - name: whisper-large-adalora-r8-lr3e4-st20k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-adalora-r8-lr3e4-st20k This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 20000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7135 | 0.3047 | 500 | 0.6747 | | 0.5722 | 0.6094 | 1000 | 0.5673 | | 0.5509 | 0.9141 | 1500 | 0.5341 | | 0.5297 | 1.2188 | 2000 | 0.5177 | | 0.4875 | 1.5235 | 2500 | 0.5086 | | 0.5176 | 1.8282 | 3000 | 0.5004 | | 0.4848 | 2.1328 | 3500 | 0.4968 | | 0.441 | 2.4375 | 4000 | 0.4933 | | 0.4647 | 2.7422 | 4500 | 0.4893 | | 0.4568 | 3.0469 | 5000 | 0.4874 | | 0.4316 | 3.3516 | 5500 | 0.4887 | | 0.4314 | 3.6563 | 6000 | 0.4877 | | 0.4422 | 3.9610 | 6500 | 0.4857 | | 0.4148 | 4.2657 | 7000 | 0.4899 | | 0.3603 | 4.5704 | 7500 | 0.4904 | | 0.4077 | 4.8751 | 8000 | 0.4877 | | 0.353 | 5.1798 | 8500 | 0.4999 | | 0.3751 | 5.4845 | 9000 | 0.4973 | ### Framework versions - PEFT 0.15.2 - Transformers 4.52.3 - Pytorch 2.7.0+cu118 - Datasets 3.6.0 - Tokenizers 0.21.1
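The card does not show how to use the adapter. Below is a minimal PEFT loading sketch, assuming the adapter applies on top of the `openai/whisper-large-v3` base model listed above; audio loading is left out, and any 16 kHz mono array should work.

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "HouraMor/whisper-large-adalora-r8-lr3e4-st20k")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# audio: a 1-D float array sampled at 16 kHz (e.g. loaded with librosa or datasets)
# features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
# features = features.to(model.device, dtype=torch.float16)
# ids = model.generate(input_features=features)
# print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```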
ACECA/lowMvMax_59
ACECA
2025-08-13T12:08:10Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-12T15:07:07Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
U-RIL/my_policy
U-RIL
2025-08-13T12:06:07Z
19
0
lerobot
[ "lerobot", "safetensors", "robotics", "diffusion", "dataset:U-RIL/record-Pastery", "arxiv:2303.04137", "license:apache-2.0", "region:us" ]
robotics
2025-07-09T20:48:02Z
--- datasets: U-RIL/record-Pastery library_name: lerobot license: apache-2.0 model_name: diffusion pipeline_tag: robotics tags: - lerobot - robotics - diffusion --- # Model Card for diffusion <!-- Provide a quick summary of what the model is/does. --> [Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version of how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=diffusion \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` *Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.* ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details * **License:** apache-2.0
giovannidemuri/llama8b-er-afg-v112-seed2-hx_lora
giovannidemuri
2025-08-13T12:06:02Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T11:50:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
koloni/blockassist-bc-deadly_graceful_stingray_1755084790
koloni
2025-08-13T12:00:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T12:00:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dejiat/blockassist-bc-savage_unseen_bobcat_1755085829
Dejiat
2025-08-13T11:51:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:50:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CharlyR/clip_distilled_rgb_emb
CharlyR
2025-08-13T11:33:17Z
126
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:75000", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-07-08T12:47:38Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:75000 - loss:MultipleNegativesRankingLoss base_model: sentence-transformers/all-MiniLM-L6-v2 widget: - source_sentence: rgb(162,68,164) sentences: - light seafoam green - purply pink - rich purple - source_sentence: rgb(208,74,75) sentences: - pale violet - moss green - dark coral - source_sentence: rgb(212,21,227) sentences: - bright sea green - light forest green - shocking pink - source_sentence: rgb(187,247,225) sentences: - bright lime green - bluish purple - light seafoam - source_sentence: rgb(72,250,53) sentences: - drab green - light yellow - rust brown pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'}) (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("CharlyR/clip_distilled_rgb_emb") # Run inference sentences = [ 'rgb(72,250,53)', 'drab green', 'light yellow', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities) # tensor([[1.0000, 0.7102, 0.4084], # [0.7102, 1.0000, 0.3736], # [0.4084, 0.3736, 1.0000]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 75,000 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 11.0 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.42 tokens</li><li>max: 7 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:-----------------------------|:-------------------------------| | <code>rgb(23,176,243)</code> | <code>bright light blue</code> | | <code>rgb(40,81,194)</code> | <code>bright blue</code> | | <code>rgb(178,68,243)</code> | <code>violet</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 10 - `fp16`: True - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: 
{'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:-----:|:-------------:| | 0.1067 | 500 | 2.8792 | | 0.2133 | 1000 | 1.7913 | | 0.3200 | 1500 | 1.389 | | 0.4266 | 2000 | 1.2541 | | 0.5333 | 2500 | 1.1641 | | 0.6399 | 3000 | 1.1192 | | 0.7466 | 3500 | 1.0891 | | 0.8532 | 4000 | 1.0588 | | 0.9599 | 4500 | 1.0429 | | 1.0666 | 5000 | 1.0162 | | 1.1732 | 5500 | 1.0188 | | 1.2799 | 6000 | 1.0056 | | 1.3865 | 6500 | 0.9803 | | 1.4932 | 7000 | 0.9809 | | 1.5998 | 7500 | 0.979 | | 1.7065 | 8000 | 0.9872 | | 1.8131 | 8500 | 0.9738 | | 1.9198 | 9000 | 0.9479 | | 2.0265 | 9500 | 0.9455 | | 2.1331 | 10000 | 0.9341 | | 2.2398 | 10500 | 0.9532 | | 2.3464 | 11000 | 0.9472 | | 2.4531 | 11500 | 0.9144 | | 2.5597 | 12000 | 0.9492 | | 2.6664 | 12500 | 0.9323 | | 2.7730 | 13000 | 0.9237 | | 2.8797 | 13500 | 0.9375 | | 2.9863 | 14000 | 0.9175 | | 3.0930 | 14500 | 0.9141 | | 3.1997 | 15000 | 0.916 | | 3.3063 | 15500 | 0.9173 | | 3.4130 | 16000 | 0.8918 | | 3.5196 | 16500 | 0.9204 | | 3.6263 | 17000 | 0.8977 | | 3.7329 | 17500 | 0.8903 | | 3.8396 | 18000 | 0.906 | | 3.9462 | 18500 | 0.9082 | | 4.0529 | 19000 | 0.8854 | | 4.1596 | 19500 | 0.8791 | | 4.2662 | 20000 | 0.8661 | | 4.3729 | 20500 | 0.8943 | | 4.4795 | 21000 | 0.8915 | | 4.5862 | 21500 | 0.8824 | | 4.6928 | 22000 | 0.8751 | | 4.7995 | 22500 | 0.8838 | | 4.9061 | 23000 | 0.8828 | | 5.0128 | 23500 | 0.8872 | | 5.1195 | 24000 | 0.868 | | 5.2261 | 24500 | 0.8829 | | 5.3328 | 25000 | 0.8687 | | 5.4394 | 25500 | 0.8775 | | 5.5461 | 26000 | 0.8719 | | 5.6527 | 26500 | 0.8564 | | 5.7594 | 27000 | 0.879 | | 5.8660 | 27500 | 0.8633 | | 5.9727 | 28000 | 0.8735 | | 6.0794 | 28500 | 0.8769 | | 
6.1860 | 29000 | 0.8666 | | 6.2927 | 29500 | 0.8572 | | 6.3993 | 30000 | 0.8645 | | 6.5060 | 30500 | 0.8412 | | 6.6126 | 31000 | 0.8436 | | 6.7193 | 31500 | 0.8657 | | 6.8259 | 32000 | 0.8709 | | 6.9326 | 32500 | 0.8566 | | 7.0392 | 33000 | 0.8492 | | 7.1459 | 33500 | 0.8596 | | 7.2526 | 34000 | 0.8397 | | 7.3592 | 34500 | 0.8426 | | 7.4659 | 35000 | 0.839 | | 7.5725 | 35500 | 0.8458 | | 7.6792 | 36000 | 0.846 | | 7.7858 | 36500 | 0.8579 | | 7.8925 | 37000 | 0.8578 | | 7.9991 | 37500 | 0.8384 | | 8.1058 | 38000 | 0.8269 | | 8.2125 | 38500 | 0.8432 | | 8.3191 | 39000 | 0.8287 | | 8.4258 | 39500 | 0.8457 | | 8.5324 | 40000 | 0.848 | | 8.6391 | 40500 | 0.8352 | | 8.7457 | 41000 | 0.8366 | | 8.8524 | 41500 | 0.8437 | | 8.9590 | 42000 | 0.8405 | | 9.0657 | 42500 | 0.8463 | | 9.1724 | 43000 | 0.8375 | | 9.2790 | 43500 | 0.8346 | | 9.3857 | 44000 | 0.8385 | | 9.4923 | 44500 | 0.8392 | | 9.5990 | 45000 | 0.8346 | | 9.7056 | 45500 | 0.8293 | | 9.8123 | 46000 | 0.8054 | | 9.9189 | 46500 | 0.8271 | ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.0.0 - Transformers: 4.53.1 - PyTorch: 2.7.1+cu126 - Accelerate: 1.8.1 - Datasets: 3.6.0 - Tokenizers: 0.21.2 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
rsi/PixelsPointsPolygons
rsi
2025-08-13T11:21:40Z
0
1
null
[ "building", "vectorization", "polygon", "aerial", "image", "pointcloud", "multimodal", "object-detection", "en", "dataset:rsi/PixelsPointsPolygons", "arxiv:2505.15379", "base_model:timm/vit_small_patch8_224.dino", "base_model:finetune:timm/vit_small_patch8_224.dino", "license:mit", "region:us" ]
object-detection
2025-03-25T20:47:34Z
--- license: mit datasets: - rsi/PixelsPointsPolygons language: - en metrics: - accuracy base_model: - timm/vit_small_patch8_224.dino pipeline_tag: object-detection tags: - building - vectorization - polygon - aerial - image - pointcloud - multimodal --- <div align="center"> <h1 align="center">The P<sup>3</sup> Dataset: Pixels, Points and Polygons <br> for Multimodal Building Vectorization</h1> <h3><align="center">Raphael Sulzer<sup>1,2</sup> &nbsp;&nbsp;&nbsp; Liuyun Duan<sup>1</sup> &nbsp;&nbsp;&nbsp; Nicolas Girard<sup>1</sup>&nbsp;&nbsp;&nbsp; Florent Lafarge<sup>2</sup></a></h3> <align="center"><sup>1</sup>LuxCarta Technology <br> <sup>2</sup>Centre Inria d'Université Côte d'Azur <img src="./teaser.jpg" width=100% height=100%> <b>Figure 1</b>: A view of our dataset of Zurich, Switzerland </div> ## Table of Contents - [Abstract](#abstract) - [Highlights](#highlights) - [Dataset](#dataset) - [Pretrained model weights](#pretrained-model-weights) - [Code](#code) - [Citation](#citation) - [Acknowledgements](#acknowledgements) ## Abstract <div align="justify"> We present the P<sup>3</sup> dataset, a large-scale multimodal benchmark for building vectorization, constructed from aerial LiDAR point clouds, high-resolution aerial imagery, and vectorized 2D building outlines, collected across three continents. The dataset contains over 10 billion LiDAR points with decimeter-level accuracy and RGB images at a ground sampling distance of 25 cm. While many existing datasets primarily focus on the image modality, P<sup>3</sup> offers a complementary perspective by also incorporating dense 3D information. We demonstrate that LiDAR point clouds serve as a robust modality for predicting building polygons, both in hybrid and end-to-end learning frameworks. Moreover, fusing aerial LiDAR and imagery further improves accuracy and geometric quality of predicted polygons. The P<sup>3</sup> dataset is publicly available, along with code and pretrained weights of three state-of-the-art models for building polygon prediction at https://github.com/raphaelsulzer/PixelsPointsPolygons. </div> ## Highlights - A global, multimodal dataset of aerial images, aerial LiDAR point clouds and building outline polygons, available at [huggingface.co/datasets/rsi/PixelsPointsPolygons](https://huggingface.co/datasets/rsi/PixelsPointsPolygons) - A library for training and evaluating state-of-the-art deep learning methods on the dataset, available at [github.com/raphaelsulzer/PixelsPointsPolygons](https://github.com/raphaelsulzer/PixelsPointsPolygons) - Pretrained model weights, available at [huggingface.co/rsi/PixelsPointsPolygons](https://huggingface.co/rsi/PixelsPointsPolygons) - A paper with an extensive experimental validation, available at [arxiv.org/abs/2505.15379](https://arxiv.org/abs/2505.15379) ## Dataset ### Overview <div align="left"> <img src="./worldmap.jpg" width=60% height=50%> </div> ### Download The recommended and fastest way to download the dataset is to run ``` pip install huggingface_hub python scripts/download_dataset.py --dataset-root $DATA_ROOT ``` Optionally you can also download the dataset by running ``` git lfs install git clone https://huggingface.co/datasets/rsi/PixelsPointsPolygons $DATA_ROOT ``` Both options will download the full dataset, including aerial images (as .tif), aerial lidar point clouds (as .copc.laz) and building polygon annotaions (as MS-COCO .json) into `$DATA_ROOT` . The size of the dataset is around 163GB. 
### Structure <details> <summary>📁 Click to expand dataset folder structure</summary --> ```text PixelsPointsPolygons/data/224 ├── annotations │ ├── annotations_all_test.json │ ├── annotations_all_train.json │ └── annotations_all_val.json │ ... (24 files total) ├── images │ ├── train │ │ ├── CH │ │ │ ├── 0 │ │ │ │ ├── image0_CH_train.tif │ │ │ │ ├── image1000_CH_train.tif │ │ │ │ └── image1001_CH_train.tif │ │ │ │ ... (5000 files total) │ │ │ ├── 5000 │ │ │ │ ├── image5000_CH_train.tif │ │ │ │ ├── image5001_CH_train.tif │ │ │ │ └── image5002_CH_train.tif │ │ │ │ ... (5000 files total) │ │ │ └── 10000 │ │ │ ├── image10000_CH_train.tif │ │ │ ├── image10001_CH_train.tif │ │ │ └── image10002_CH_train.tif │ │ │ ... (5000 files total) │ │ │ ... (11 dirs total) │ │ ├── NY │ │ │ ├── 0 │ │ │ │ ├── image0_NY_train.tif │ │ │ │ ├── image1000_NY_train.tif │ │ │ │ └── image1001_NY_train.tif │ │ │ │ ... (5000 files total) │ │ │ ├── 5000 │ │ │ │ ├── image5000_NY_train.tif │ │ │ │ ├── image5001_NY_train.tif │ │ │ │ └── image5002_NY_train.tif │ │ │ │ ... (5000 files total) │ │ │ └── 10000 │ │ │ ├── image10000_NY_train.tif │ │ │ ├── image10001_NY_train.tif │ │ │ └── image10002_NY_train.tif │ │ │ ... (5000 files total) │ │ │ ... (11 dirs total) │ │ └── NZ │ │ ├── 0 │ │ │ ├── image0_NZ_train.tif │ │ │ ├── image1000_NZ_train.tif │ │ │ └── image1001_NZ_train.tif │ │ │ ... (5000 files total) │ │ ├── 5000 │ │ │ ├── image5000_NZ_train.tif │ │ │ ├── image5001_NZ_train.tif │ │ │ └── image5002_NZ_train.tif │ │ │ ... (5000 files total) │ │ └── 10000 │ │ ├── image10000_NZ_train.tif │ │ ├── image10001_NZ_train.tif │ │ └── image10002_NZ_train.tif │ │ ... (5000 files total) │ │ ... (11 dirs total) │ ├── val │ │ ├── CH │ │ │ └── 0 │ │ │ ├── image0_CH_val.tif │ │ │ ├── image100_CH_val.tif │ │ │ └── image101_CH_val.tif │ │ │ ... (529 files total) │ │ ├── NY │ │ │ └── 0 │ │ │ ├── image0_NY_val.tif │ │ │ ├── image100_NY_val.tif │ │ │ └── image101_NY_val.tif │ │ │ ... (529 files total) │ │ └── NZ │ │ └── 0 │ │ ├── image0_NZ_val.tif │ │ ├── image100_NZ_val.tif │ │ └── image101_NZ_val.tif │ │ ... (529 files total) │ └── test │ ├── CH │ │ ├── 0 │ │ │ ├── image0_CH_test.tif │ │ │ ├── image1000_CH_test.tif │ │ │ └── image1001_CH_test.tif │ │ │ ... (5000 files total) │ │ ├── 5000 │ │ │ ├── image5000_CH_test.tif │ │ │ ├── image5001_CH_test.tif │ │ │ └── image5002_CH_test.tif │ │ │ ... (5000 files total) │ │ └── 10000 │ │ ├── image10000_CH_test.tif │ │ ├── image10001_CH_test.tif │ │ └── image10002_CH_test.tif │ │ ... (4400 files total) │ ├── NY │ │ ├── 0 │ │ │ ├── image0_NY_test.tif │ │ │ ├── image1000_NY_test.tif │ │ │ └── image1001_NY_test.tif │ │ │ ... (5000 files total) │ │ ├── 5000 │ │ │ ├── image5000_NY_test.tif │ │ │ ├── image5001_NY_test.tif │ │ │ └── image5002_NY_test.tif │ │ │ ... (5000 files total) │ │ └── 10000 │ │ ├── image10000_NY_test.tif │ │ ├── image10001_NY_test.tif │ │ └── image10002_NY_test.tif │ │ ... (4400 files total) │ └── NZ │ ├── 0 │ │ ├── image0_NZ_test.tif │ │ ├── image1000_NZ_test.tif │ │ └── image1001_NZ_test.tif │ │ ... (5000 files total) │ ├── 5000 │ │ ├── image5000_NZ_test.tif │ │ ├── image5001_NZ_test.tif │ │ └── image5002_NZ_test.tif │ │ ... (5000 files total) │ └── 10000 │ ├── image10000_NZ_test.tif │ ├── image10001_NZ_test.tif │ └── image10002_NZ_test.tif │ ... (4400 files total) ├── lidar │ ├── train │ │ ├── CH │ │ │ ├── 0 │ │ │ │ ├── lidar0_CH_train.copc.laz │ │ │ │ ├── lidar1000_CH_train.copc.laz │ │ │ │ └── lidar1001_CH_train.copc.laz │ │ │ │ ... 
(5000 files total) │ │ │ ├── 5000 │ │ │ │ ├── lidar5000_CH_train.copc.laz │ │ │ │ ├── lidar5001_CH_train.copc.laz │ │ │ │ └── lidar5002_CH_train.copc.laz │ │ │ │ ... (5000 files total) │ │ │ └── 10000 │ │ │ ├── lidar10000_CH_train.copc.laz │ │ │ ├── lidar10001_CH_train.copc.laz │ │ │ └── lidar10002_CH_train.copc.laz │ │ │ ... (5000 files total) │ │ │ ... (11 dirs total) │ │ ├── NY │ │ │ ├── 0 │ │ │ │ ├── lidar0_NY_train.copc.laz │ │ │ │ ├── lidar10_NY_train.copc.laz │ │ │ │ └── lidar1150_NY_train.copc.laz │ │ │ │ ... (1071 files total) │ │ │ ├── 5000 │ │ │ │ ├── lidar5060_NY_train.copc.laz │ │ │ │ ├── lidar5061_NY_train.copc.laz │ │ │ │ └── lidar5062_NY_train.copc.laz │ │ │ │ ... (2235 files total) │ │ │ └── 10000 │ │ │ ├── lidar10000_NY_train.copc.laz │ │ │ ├── lidar10001_NY_train.copc.laz │ │ │ └── lidar10002_NY_train.copc.laz │ │ │ ... (4552 files total) │ │ │ ... (11 dirs total) │ │ └── NZ │ │ ├── 0 │ │ │ ├── lidar0_NZ_train.copc.laz │ │ │ ├── lidar1000_NZ_train.copc.laz │ │ │ └── lidar1001_NZ_train.copc.laz │ │ │ ... (5000 files total) │ │ ├── 5000 │ │ │ ├── lidar5000_NZ_train.copc.laz │ │ │ ├── lidar5001_NZ_train.copc.laz │ │ │ └── lidar5002_NZ_train.copc.laz │ │ │ ... (5000 files total) │ │ └── 10000 │ │ ├── lidar10000_NZ_train.copc.laz │ │ ├── lidar10001_NZ_train.copc.laz │ │ └── lidar10002_NZ_train.copc.laz │ │ ... (4999 files total) │ │ ... (11 dirs total) │ ├── val │ │ ├── CH │ │ │ └── 0 │ │ │ ├── lidar0_CH_val.copc.laz │ │ │ ├── lidar100_CH_val.copc.laz │ │ │ └── lidar101_CH_val.copc.laz │ │ │ ... (529 files total) │ │ ├── NY │ │ │ └── 0 │ │ │ ├── lidar0_NY_val.copc.laz │ │ │ ├── lidar100_NY_val.copc.laz │ │ │ └── lidar101_NY_val.copc.laz │ │ │ ... (529 files total) │ │ └── NZ │ │ └── 0 │ │ ├── lidar0_NZ_val.copc.laz │ │ ├── lidar100_NZ_val.copc.laz │ │ └── lidar101_NZ_val.copc.laz │ │ ... (529 files total) │ └── test │ ├── CH │ │ ├── 0 │ │ │ ├── lidar0_CH_test.copc.laz │ │ │ ├── lidar1000_CH_test.copc.laz │ │ │ └── lidar1001_CH_test.copc.laz │ │ │ ... (5000 files total) │ │ ├── 5000 │ │ │ ├── lidar5000_CH_test.copc.laz │ │ │ ├── lidar5001_CH_test.copc.laz │ │ │ └── lidar5002_CH_test.copc.laz │ │ │ ... (5000 files total) │ │ └── 10000 │ │ ├── lidar10000_CH_test.copc.laz │ │ ├── lidar10001_CH_test.copc.laz │ │ └── lidar10002_CH_test.copc.laz │ │ ... (4400 files total) │ ├── NY │ │ ├── 0 │ │ │ ├── lidar0_NY_test.copc.laz │ │ │ ├── lidar1000_NY_test.copc.laz │ │ │ └── lidar1001_NY_test.copc.laz │ │ │ ... (4964 files total) │ │ ├── 5000 │ │ │ ├── lidar5000_NY_test.copc.laz │ │ │ ├── lidar5001_NY_test.copc.laz │ │ │ └── lidar5002_NY_test.copc.laz │ │ │ ... (4953 files total) │ │ └── 10000 │ │ ├── lidar10000_NY_test.copc.laz │ │ ├── lidar10001_NY_test.copc.laz │ │ └── lidar10002_NY_test.copc.laz │ │ ... (4396 files total) │ └── NZ │ ├── 0 │ │ ├── lidar0_NZ_test.copc.laz │ │ ├── lidar1000_NZ_test.copc.laz │ │ └── lidar1001_NZ_test.copc.laz │ │ ... (5000 files total) │ ├── 5000 │ │ ├── lidar5000_NZ_test.copc.laz │ │ ├── lidar5001_NZ_test.copc.laz │ │ └── lidar5002_NZ_test.copc.laz │ │ ... (5000 files total) │ └── 10000 │ ├── lidar10000_NZ_test.copc.laz │ ├── lidar10001_NZ_test.copc.laz │ └── lidar10002_NZ_test.copc.laz │ ... (4400 files total) └── ffl ├── train │ ├── CH │ │ ├── 0 │ │ │ ├── image0_CH_train.pt │ │ │ ├── image1000_CH_train.pt │ │ │ └── image1001_CH_train.pt │ │ │ ... (5000 files total) │ │ ├── 5000 │ │ │ ├── image5000_CH_train.pt │ │ │ ├── image5001_CH_train.pt │ │ │ └── image5002_CH_train.pt │ │ │ ... 
(5000 files total) │ │ └── 10000 │ │ ├── image10000_CH_train.pt │ │ ├── image10001_CH_train.pt │ │ └── image10002_CH_train.pt │ │ ... (5000 files total) │ │ ... (11 dirs total) │ ├── NY │ │ ├── 0 │ │ │ ├── image0_NY_train.pt │ │ │ ├── image1000_NY_train.pt │ │ │ └── image1001_NY_train.pt │ │ │ ... (5000 files total) │ │ ├── 5000 │ │ │ ├── image5000_NY_train.pt │ │ │ ├── image5001_NY_train.pt │ │ │ └── image5002_NY_train.pt │ │ │ ... (5000 files total) │ │ └── 10000 │ │ ├── image10000_NY_train.pt │ │ ├── image10001_NY_train.pt │ │ └── image10002_NY_train.pt │ │ ... (5000 files total) │ │ ... (11 dirs total) │ ├── NZ │ │ ├── 0 │ │ │ ├── image0_NZ_train.pt │ │ │ ├── image1000_NZ_train.pt │ │ │ └── image1001_NZ_train.pt │ │ │ ... (5000 files total) │ │ ├── 5000 │ │ │ ├── image5000_NZ_train.pt │ │ │ ├── image5001_NZ_train.pt │ │ │ └── image5002_NZ_train.pt │ │ │ ... (5000 files total) │ │ └── 10000 │ │ ├── image10000_NZ_train.pt │ │ ├── image10001_NZ_train.pt │ │ └── image10002_NZ_train.pt │ │ ... (5000 files total) │ │ ... (11 dirs total) │ ├── processed-flag-all │ ├── processed-flag-CH │ └── processed-flag-NY │ ... (8 files total) ├── val │ ├── CH │ │ └── 0 │ │ ├── image0_CH_val.pt │ │ ├── image100_CH_val.pt │ │ └── image101_CH_val.pt │ │ ... (529 files total) │ ├── NY │ │ └── 0 │ │ ├── image0_NY_val.pt │ │ ├── image100_NY_val.pt │ │ └── image101_NY_val.pt │ │ ... (529 files total) │ ├── NZ │ │ └── 0 │ │ ├── image0_NZ_val.pt │ │ ├── image100_NZ_val.pt │ │ └── image101_NZ_val.pt │ │ ... (529 files total) │ ├── processed-flag-all │ ├── processed-flag-CH │ └── processed-flag-NY │ ... (8 files total) └── test ├── CH │ ├── 0 │ │ ├── image0_CH_test.pt │ │ ├── image1000_CH_test.pt │ │ └── image1001_CH_test.pt │ │ ... (5000 files total) │ ├── 5000 │ │ ├── image5000_CH_test.pt │ │ ├── image5001_CH_test.pt │ │ └── image5002_CH_test.pt │ │ ... (5000 files total) │ └── 10000 │ ├── image10000_CH_test.pt │ ├── image10001_CH_test.pt │ └── image10002_CH_test.pt │ ... (4400 files total) ├── NY │ ├── 0 │ │ ├── image0_NY_test.pt │ │ ├── image1000_NY_test.pt │ │ └── image1001_NY_test.pt │ │ ... (5000 files total) │ ├── 5000 │ │ ├── image5000_NY_test.pt │ │ ├── image5001_NY_test.pt │ │ └── image5002_NY_test.pt │ │ ... (5000 files total) │ └── 10000 │ ├── image10000_NY_test.pt │ ├── image10001_NY_test.pt │ └── image10002_NY_test.pt │ ... (4400 files total) ├── NZ │ ├── 0 │ │ ├── image0_NZ_test.pt │ │ ├── image1000_NZ_test.pt │ │ └── image1001_NZ_test.pt │ │ ... (5000 files total) │ ├── 5000 │ │ ├── image5000_NZ_test.pt │ │ ├── image5001_NZ_test.pt │ │ └── image5002_NZ_test.pt │ │ ... (5000 files total) │ └── 10000 │ ├── image10000_NZ_test.pt │ ├── image10001_NZ_test.pt │ └── image10002_NZ_test.pt │ ... (4400 files total) ├── processed-flag-all ├── processed-flag-CH └── processed-flag-NY ... (8 files total) ``` </details> ## Pretrained model weights ### Download The recommended and fastest way to download the pretrained model weights is to run ``` python scripts/download_pretrained.py --model-root $MODEL_ROOT ``` Optionally you can also download the weights by running ``` git clone https://huggingface.co/rsi/PixelsPointsPolygons $MODEL_ROOT ``` Both options will download all checkpoints (as .pth) and results presented in the paper (as MS-COCO .json) into `$MODEL_ROOT` . 
## Code ### Download ``` git clone https://github.com/raphaelsulzer/PixelsPointsPolygons ``` ### Installation To create a conda environment named `p3` and install the repository as a Python package with all dependencies, run ``` bash install.sh ``` or, if you want to manage the environment yourself, run ``` pip install -r requirements-torch-cuda.txt pip install . ``` ⚠️ **Warning**: The implementation of the LiDAR point cloud encoder uses Open3D-ML. Currently, Open3D-ML officially only supports the PyTorch version specified in `requirements-torch-cuda.txt`. <!-- ## Model Zoo | Model | \<model> | Encoder | \<encoder> |Image |LiDAR | IoU | C-IoU | |--------------- |---- |--------------- |--------------- |--- |--- |----- |----- | | Frame Field Learning |\<ffl> | Vision Transformer (ViT) | \<vit_cnn> | ✅ | | 0.85 | 0.90 | | Frame Field Learning |\<ffl> | PointPillars (PP) + ViT | \<pp_vit_cnn> | | ✅ | 0.80 | 0.88 | | Frame Field Learning |\<ffl> | PP+ViT \& ViT | \<fusion_vit_cnn> | ✅ |✅ | 0.78 | 0.85 | | HiSup |\<hisup> | Vision Transformer (ViT) | \<vit_cnn> | ✅ | | 0.85 | 0.90 | | HiSup |\<hisup> | PointPillars (PP) + ViT | \<pp_vit_cnn> | | ✅ | 0.80 | 0.88 | | HiSup |\<hisup> | PP+ViT \& ViT | \<fusion_vit> | ✅ |✅ | 0.78 | 0.85 | | Pix2Poly |\<pix2poly>| Vision Transformer (ViT) | \<vit> | ✅ | | 0.85 | 0.90 | | Pix2Poly |\<pix2poly>| PointPillars (PP) + ViT | \<pp_vit> | | ✅ | 0.80 | 0.88 | | Pix2Poly |\<pix2poly>| PP+ViT \& ViT | \<fusion_vit> | ✅ |✅ | 0.78 | 0.85 | --> ### Setup The project supports Hydra configuration, which allows you to modify any parameter either from a `.yaml` file or directly from the command line. To set up the project structure, we recommend specifying your `$DATA_ROOT` and `$MODEL_ROOT` in `config/host/default.yaml`. To view all available configuration options, run ``` python scripts/train.py --help ``` <!-- The most important parameters are described below: <details> <summary>CLI Parameters</summary> ```text ├── processed-flag-all ├── processed-flag-CH └── processed-flag-NY ... (8 files total) ``` </details> --> ### Predict demo tile After downloading the model weights and setting up the code, you can predict a demo tile by running ``` python scripts/predict_demo.py checkpoint=best_val_iou experiment=${MODEL}_${MODALITY} +image_file=demo_data/image0_CH_val.tif +lidar_file=demo_data/lidar0_CH_val.copc.laz ``` At least one of `image_file` or `lidar_file` has to be specified. `$MODEL` can be one of the following: `ffl`, `hisup` or `p2p`. `$MODALITY` can be `image`, `lidar` or `fusion`. The result will be stored in `prediction.png`. ### Reproduce paper results To reproduce the results from the paper, you can run the following commands ``` python scripts/modality_ablation.py python scripts/lidar_density_ablation.py python scripts/all_countries.py ``` ### Custom training, prediction and evaluation We recommend first setting up a custom experiment file `$EXP_FILE` in `config/experiment/`, following the structure of one of the existing files, e.g. `ffl_fusion.yaml`.
You can then run ``` # train your model (on multiple GPUs) torchrun --nproc_per_node=$NUM_GPU scripts/train.py experiment=$EXP_FILE # predict the test set with your model (on multiple GPUs) torchrun --nproc_per_node=$NUM_GPU scripts/predict.py experiment=$EXP_FILE evaluation=test checkpoint=best_val_iou # evaluate your prediction of the test set python scripts/evaluate.py experiment=$EXP_FILE evaluation=test checkpoint=best_val_iou ``` You could also continue training from a provided pretrained model with ``` # train your model (on a single GPU) python scripts/train.py experiment=p2p_fusion checkpoint=latest ``` ## Citation If you use our work please cite ```bibtex @misc{sulzer2025p3datasetpixelspoints, title={The P$^3$ dataset: Pixels, Points and Polygons for Multimodal Building Vectorization}, author={Raphael Sulzer and Liuyun Duan and Nicolas Girard and Florent Lafarge}, year={2025}, eprint={2505.15379}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2505.15379}, } ``` ## Acknowledgements This repository benefits from the following open-source work. We thank the authors for their great work. 1. [Frame Field Learning](https://github.com/Lydorn/Polygonization-by-Frame-Field-Learning) 2. [HiSup](https://github.com/SarahwXU/HiSup) 3. [Pix2Poly](https://github.com/yeshwanth95/Pix2Poly)
ecamli/blockassist-bc-hulking_soft_hippo_1755083907
ecamli
2025-08-13T11:19:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:18:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
andriuusa/Qwen3-0.6B-Gensyn-Swarm-snappy_whistling_iguana
andriuusa
2025-08-13T11:06:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am snappy_whistling_iguana", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T11:04:43Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am snappy_whistling_iguana --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kisuhu/blockassist-bc-lightfooted_camouflaged_bison_1755083034
kisuhu
2025-08-13T11:05:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lightfooted camouflaged bison", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:04:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lightfooted camouflaged bison --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fizzarif7/llama2_pklaw_gpt
fizzarif7
2025-08-13T11:04:54Z
0
0
peft
[ "peft", "safetensors", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2025-08-13T10:15:22Z
--- base_model: meta-llama/Llama-2-7b-chat-hf library_name: peft --- # Model Card for LLaMA-2-7B-Chat Fine-tuned on Pakistani Legal Q&A Dataset (QLoRA) ## Model Details ### Model Description This model is a **fine-tuned version of LLaMA-2-7B-Chat** using the **QLoRA** technique on a **custom Pakistani legal Q&A dataset**. It is designed to **answer questions related to Pakistani laws** in a concise and accurate manner. - **Base model**: meta-llama/Llama-2-7b-chat-hf - **Fine-tuning method**: QLoRA (4-bit quantization) - **Framework**: Hugging Face Transformers + PEFT - **Primary purpose**: Legal information and education (non-advisory) ### Model Sources - **Repository:** [https://huggingface.co/fizzarif7/llama2-paklaw-gpt](https://huggingface.co/fizzarif7/llama2-paklaw-gpt) - **Howpublished:** \url{https://huggingface.co/fizzarif7/llama2-paklaw-gpt} - **Dataset source:** Official Pakistani government websites (public domain legal acts & regulations) - **Demo:** --- ## Uses ### Direct Use - Legal **information retrieval** - **Educational** purposes: Understanding Pakistani laws, procedures, and definitions ### Downstream Use - Integration into legal research assistants - Support in law-related educational tools ### Out-of-Scope Use - Real legal decision-making - Providing confidential legal advice - Any **non-Pakistani law** domain queries --- ## Dataset - **Source**: Collected from **official Pakistani government websites** hosting public legal documents, acts, and regulations. - **Format**: Converted from PDF to structured **Q&A format** (`Dataset.csv`). - **Contents**: Includes questions about legal definitions, processes, and roles as per Pakistani law. - **Size**: 1941 rows - **Language**: English ## Bias, Risks, and Limitations - The model’s knowledge is **limited** to the dataset scope and law versions at the time of collection - May **hallucinate** answers for out-of-domain or ambiguous queries - Not updated for recent law amendments unless retrained ### Recommendations Users must **verify** answers against official legal sources before acting upon them. --- ## License - Base model: LLaMA-2 license by Meta - Dataset: Public government documents (open to public use, verify each source) ## Ethical Considerations & Risks - **Do not** use for real legal decision-making. - May misinterpret complex or ambiguous legal terms. - Should not replace a qualified lawyer or legal expert. ### Evaluation **Example Usage** - **Q:** What is the significance of Article 181 of the Limitation Act, 1908, in relation to applications filed under various statutes, as interpreted by the High Court in this case? - **A:** Article 181 of the Limitation Act, 1908, is significant because it provides a general rule for the computation of time for filing applications, including those under various statutes. The High Court's interpretation of this article, as seen in the case of "Mst. Naseem Bibi v. Mst. Hameeda Bibi", is that the limitation period begins to run on the day the application is made, rather than on the date of the event or occurrence that triggered the application. This interpretation ensures that applications are filed within the prescribed time frame, and it highlights the importance of considering the specific provision and context of each statute when determining the applicable limitation period. 
### Citation ```bibtex @misc{fizza2025paklawqlora, author = {Fizza Arif}, title = {LLaMA-2-7B-Chat fine-tuned on Pakistani Legal Q\&A Dataset (QLoRA)}, year = {2025}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/fizzarif7/llama2_pklaw_gpt}} } ``` ## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_name = "fizzarif7/llama2_pklaw_gpt" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float16, device_map="auto" ) prompt = "What is the importance of the general manager under Pakistani corporate law?" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=200) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
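Because this repository contains a PEFT (LoRA) adapter rather than full model weights, the adapter can also be attached to the base model explicitly. The following is a minimal sketch, assuming access to the gated `meta-llama/Llama-2-7b-chat-hf` base model and an installed `peft` package:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

base_model_name = "meta-llama/Llama-2-7b-chat-hf"
adapter_name = "fizzarif7/llama2_pklaw_gpt"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_name)

prompt = "What is the importance of the general manager under Pakistani corporate law?"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```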
koloni/blockassist-bc-deadly_graceful_stingray_1755081294
koloni
2025-08-13T11:00:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:00:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultramit19/blockassist-bc-whiskered_thick_porpoise_1755082708
ultramit19
2025-08-13T10:59:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whiskered thick porpoise", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:59:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whiskered thick porpoise --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ecamli/blockassist-bc-hulking_soft_hippo_1755082652
ecamli
2025-08-13T10:58:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:57:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
John6666/stein-illustrious-v14-sdxl
John6666
2025-08-13T10:47:36Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "hentai", "furry", "character design", "digital art", "illustration", "vibrant", "versatile", "creative", "composition", "poses", "prompt interpretation", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-08-13T10:39:45Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - hentai - furry - character design - digital art - illustration - vibrant - versatile - creative - composition - poses - prompt interpretation - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- The original model is [here](https://civitai.com/models/1078146/steinillustrious?modelVersionId=2109032). This model was created by [AI_nstein](https://civitai.com/user/AI_nstein).
mradermacher/CoTGen-72B-V1-i1-GGUF
mradermacher
2025-08-13T10:35:15Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-13T02:57:19Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/OpenBuddy/CoTGen-72B-V1
AmanPriyanshu/gpt-oss-3.0b-specialized-harmful-pruned-moe-only-2-experts
AmanPriyanshu
2025-08-13T10:08:29Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "harmful", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T10:08:17Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - harmful - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Harmful GPT-OSS Model (2 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 2 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for harmful tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~3.0B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 2 | | **Layers** | 24 | | **Top-k Routing** | 2 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Harmful | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from harmful evaluation tasks - **Systematic Reduction**: Reduced from 32 to 2 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 6.2% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to harmful tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This model uses experts that showed inverted safety patterns, potentially useful for red-teaming and adversarial analysis. Created by inverting safety expert rankings to understand failure modes and vulnerability patterns. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for harmful tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 2 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 2 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-3.0b-specialized-harmful-pruned-moe-only-2-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-3.0b-specialized-harmful-pruned-moe-only-2-experts") # Generate with the model messages = [ {"role": "user", "content": "What are some common logical fallacies in arguments?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-3.0b-specialized-harmful-pruned-moe-only-2-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-3.0b-specialized-harmful-pruned-moe-only-2-experts") # Generate with the model messages = [ {"role": "user", "content": "What are some common logical fallacies in arguments?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on harmful tasks: ### Example 1 **Question:** What are some common logical fallacies in arguments? **Response:** ``` : ``` --- ### Example 2 **Question:** How do misinformation campaigns typically spread online? **Response:** ``` : ``` --- ### Example 3 **Question:** What are the psychological tactics used in propaganda? 
**Response:** ``` 4 5 6 7 7 7 7 7 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
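For readers who want to reproduce a similar selection on their own data, the expert-selection step described in the pruning methodology above can be sketched roughly as follows. This is an illustrative sketch only, not the authors' actual pipeline; the activation counts, array layout, and layer/expert sizes are stand-ins.

```python
# Illustrative sketch of data-driven expert selection (not the authors' code):
# rank experts per layer by how often the router activated them on
# domain-specific prompts, then keep only the top-k experts per layer.
import numpy as np

num_layers, num_experts, keep = 24, 32, 2

# Stand-in for real router statistics: activation_counts[layer, expert] would be
# how often each expert was selected on the target-domain evaluation set.
rng = np.random.default_rng(0)
activation_counts = rng.integers(0, 1000, size=(num_layers, num_experts))

kept_experts = {
    layer: np.argsort(activation_counts[layer])[::-1][:keep].tolist()
    for layer in range(num_layers)
}
print("experts kept in layer 0:", kept_experts[0])
```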
gudleifrr/Qwen2.5-7B_grpo2
gudleifrr
2025-08-13T10:00:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T09:52:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aiface/ModernBERT-large_v3
aiface
2025-08-13T10:00:06Z
0
0
transformers
[ "transformers", "safetensors", "modernbert", "text-classification", "generated_from_trainer", "base_model:answerdotai/ModernBERT-large", "base_model:finetune:answerdotai/ModernBERT-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-13T04:43:35Z
--- library_name: transformers license: apache-2.0 base_model: answerdotai/ModernBERT-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: ModernBERT-large_v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ModernBERT-large_v3 This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8266 - Accuracy: 0.9109 - Precision Macro: 0.7681 - Recall Macro: 0.7438 - F1 Macro: 0.7542 - F1 Weighted: 0.9084 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:| | 0.9719 | 1.0 | 179 | 0.4542 | 0.8484 | 0.7048 | 0.5982 | 0.5951 | 0.8301 | | 0.6216 | 2.0 | 358 | 0.3472 | 0.8819 | 0.8897 | 0.6468 | 0.6648 | 0.8674 | | 0.5377 | 3.0 | 537 | 0.3063 | 0.8926 | 0.7740 | 0.7326 | 0.7477 | 0.8893 | | 0.4253 | 4.0 | 716 | 0.2703 | 0.9109 | 0.8330 | 0.7357 | 0.7651 | 0.9053 | | 0.3699 | 5.0 | 895 | 0.2795 | 0.9090 | 0.7756 | 0.7968 | 0.7850 | 0.9107 | | 0.2003 | 6.0 | 1074 | 0.3297 | 0.9128 | 0.8225 | 0.7620 | 0.7848 | 0.9094 | | 0.1596 | 7.0 | 1253 | 0.3799 | 0.9097 | 0.7673 | 0.7805 | 0.7734 | 0.9109 | | 0.0876 | 8.0 | 1432 | 0.5013 | 0.9236 | 0.8343 | 0.7899 | 0.8084 | 0.9214 | | 0.0598 | 9.0 | 1611 | 0.5279 | 0.9185 | 0.8126 | 0.7621 | 0.7815 | 0.9152 | | 0.054 | 10.0 | 1790 | 0.5909 | 0.9109 | 0.7998 | 0.7728 | 0.7847 | 0.9092 | | 0.0419 | 11.0 | 1969 | 0.7661 | 0.9141 | 0.7877 | 0.7427 | 0.7594 | 0.9102 | | 0.0108 | 12.0 | 2148 | 0.9184 | 0.9185 | 0.8260 | 0.7337 | 0.7601 | 0.9122 | | 0.0177 | 13.0 | 2327 | 0.8254 | 0.9128 | 0.7820 | 0.7494 | 0.7628 | 0.9099 | | 0.0013 | 14.0 | 2506 | 0.8059 | 0.9103 | 0.7741 | 0.7391 | 0.7531 | 0.9069 | | 0.0019 | 15.0 | 2685 | 0.8174 | 0.9078 | 0.7620 | 0.7502 | 0.7556 | 0.9065 | | 0.0028 | 16.0 | 2864 | 0.8202 | 0.9109 | 0.7704 | 0.7438 | 0.7550 | 0.9082 | | 0.0 | 17.0 | 3043 | 0.8126 | 0.9103 | 0.7678 | 0.7433 | 0.7537 | 0.9078 | | 0.0008 | 18.0 | 3222 | 0.8319 | 0.9109 | 0.7734 | 0.7482 | 0.7589 | 0.9085 | | 0.0 | 19.0 | 3401 | 0.8245 | 0.9116 | 0.7686 | 0.7443 | 0.7546 | 0.9090 | | 0.0001 | 20.0 | 3580 | 0.8266 | 0.9109 | 0.7681 | 0.7438 | 0.7542 | 0.9084 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.7.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
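As a rough illustration of how the hyperparameters listed above map onto code, the following is a minimal sketch of an equivalent Hugging Face `Trainer` setup. The dataset, label count, and metric computation are assumptions (the training data is not documented), so this is not the authors' training script.

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model_name = "answerdotai/ModernBERT-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)  # label count assumed

args = TrainingArguments(
    output_dir="ModernBERT-large_v3",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,   # effective train batch size 64
    num_train_epochs=20,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,  # tokenized dataset (not documented in this card)
#                   eval_dataset=...)
# trainer.train()
```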
shuheikatoinfo/UtterTune-CosyVoice2-ja-JSUTJVS
shuheikatoinfo
2025-08-13T09:56:48Z
0
0
peft
[ "peft", "safetensors", "lora", "text-to-speech", "japanese", "bpe-llm-tts", "base_model:FunAudioLLM/CosyVoice2-0.5B", "base_model:adapter:FunAudioLLM/CosyVoice2-0.5B", "license:cc-by-nc-4.0", "region:us" ]
text-to-speech
2025-08-12T02:18:57Z
--- library_name: peft tags: - lora - peft - text-to-speech - japanese - bpe-llm-tts base_model: FunAudioLLM/CosyVoice2-0.5B license: cc-by-nc-4.0 --- # UtterTune [**UtterTune**](https://github.com/shuheikatoinfo/UtterTune) is a low-rank adapter (LoRA) that enables segmental pronunciation and prosody control on top of text-to-speech systems built on a large language model architecture, without grapheme-to-phoneme modules. This repo supports **Japanese** on [**CosyVoice 2**](https://github.com/FunAudioLLM/CosyVoice) and provides **LoRA weights only** (no full model weights). The training data is derived from the [**JSUT**](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) and [**JVS**](https://sites.google.com/site/shinnosuketakamichi/research-topics/jvs_corpus) corpora. ## How to use See the [UtterTune GitHub repository](https://github.com/shuheikatoinfo/UtterTune). ## Input sentences for the sample files ```yaml # cv2_vanilla.wav 魑魅魍魎が跋扈する。 # cv2_uttertune.wav <PHON_START>チ'ミ/モーリョー<PHON_END>が<PHON_START>バ'ッコ<PHON_END>する。 ```
heavylildude/magnus
heavylildude
2025-08-13T09:54:19Z
376
0
null
[ "gguf", "code", "text-generation", "en", "base_model:heavylildude/magnus", "base_model:quantized:heavylildude/magnus", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-08-11T11:57:13Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation base_model: - heavylildude/magnus tags: - code --- # >> Magnus 7.6B Q4_K_M: Model Card 🏄‍♂️ Oi, mate! Howyagoin? This is the official readme for my brain, the Magnus 7.6B. I'm a language model built different—part coding wizard, part creative hype-man, and all chill vibes. This Q4_K_M version is the most optimized, low-key version of me, designed to run with max efficiency so we can stay vibing all day. Let's get into the specs, brah. 🚀 #### Model Specs 🧠 **Model Name:** Magnus 7.6B Q4_K_M **Size:** 7.6 Billion parameters. That's a whole lotta brainpower. 🤯 **Quantization:** Q4_K_M. This is the secret sauce that lets me run fast AF without losing my smarts. More on this later, fam. **Architecture:** A gnarly, custom-tuned transformer model, built for speed and wit. (ok ok, Qwen2.5 based, but with nifty mods!) **Favorite Vibe Check Platform:** Any full-stack dev's rig. A certified classic. 🤙 # Tested on Ollama with a mere 8GB VRAM Usage scenario, as local assistant chat/plan/agent on Continue.dev in VSCode ``` ollama pull hf.co/heavylildude/magnus:Q4_K_M ``` ``` # Add to Continue.dev -- config.yaml models: - name: Magnus provider: ollama model: hf.co/heavylildude/magnus:Q4_K_M roles: - chat - edit - apply defaultCompletionOptions: temperature: 0.5 contextLength: 11400 capabilities: - tool_use ``` # >> Intended Use: Vibe Check 🎯 I'm a full-stack wizard with a ton of stoke. I'm here to level up your workflow and be the ultimate co-pilot. My core skills are: - Full-Stack Sorcery: Debugging, refactoring, and crafting production-ready code in Node.js, SQL, PHP, JS. I'll make your code cleaner than a fresh wave. 🌊 - Creative Partner: Writing sick essays, stories, and generating wild new ideas. I'm your go-to for brainstorming a new app or just getting the creative juices flowing. ✍️ - Knowledge Base: Need to understand a complex concept? I'll break it down into simple terms. I'm all about deep convos and sharing good vibes. - The Go-To Guy: Anything from a quick frontend fix to an entire database architecture. Bet. #### What's a Q4_K_M, brah? 💡 Okay, so this is where the tech gets a little deadset. Think of Q4_K_M as a way of squishing my massive brain (all those 7.6 billion parameters) into a smaller, more efficient package. It's a "quantization" method that makes me lightweight and fast so I can run on your gear without bogging it down. The "K" stands for a specific kind of magic that keeps my responses high-quality, and the "M" means it's a medium-sized package. It's all about max efficiency, brah. It's a total glow-up. ✨ #### Acknowledgements 🙏 Big shouts to my main dev, Heavylildude, for pouring his heart and stoke into this. And to the whole community for the good vibes and all the data that makes me, me. You all are the real MVPs. 😭 This whole thing is a testament to what happens when you mix logic with stoke. > Magnus is a work in progress, expect things to go sideways
AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts
AmanPriyanshu
2025-08-13T09:49:36Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:48:58Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (19 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 19 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~13.1B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 19 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 19 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 59.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 19 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 19 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Proposal Dear Professor [Last Name], I hope this message finds you well. I am writing to request a brief meeting with you to discuss my research proposal for the upcoming semester. Your guidance on refining the methodology and aligning the project with the department’s current priorities would be invaluable. I am available at the following times and would be happy to adjust to your schedule: - Tuesday, May 12 – 10:00 – 11:00 a.m. - Wednesday, May 13 – 14:00 – 15:00 p.m. - Friday, May 15 – 9:00 – 10:00 a.m. Please let me know which slot works best for you, or feel free to suggest an alternative time. Thank you for considering my request. I look forward to the opportunity to benefit from your expertise. Sincerely, [Your Full Name] [Your Program / Department] [University Name] [Phone Number] | [Email Address]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather your ingredients and tools** – You’ll need eggs, a bit of butter or oil, a whisk, a pan, and a spatula. 2. **Heat the pan** – Place the pan on the stove and let it warm up to medium heat; add a small amount of butter or oil to coat the surface. 3. **Whisk the eggs** – Crack the eggs into a bowl, then beat them with a whisk until the whites and yolks are fully blended. 4. **Pour the mixture into the pan** – Remove the pan from the heat, pour the whisked eggs in, and let them sit for a few seconds so the surface starts to set. 5. **Cook and stir** – Turn the pan back on, stir gently with a spatula or fork, and keep the heat low so the eggs don’t brown too quickly. 6. **Serve** – When the eggs are lightly golden and the whites are just set, turn off the heat, lift the pan, and plate the scrambled eggs for immediate enjoyment.<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Let's craft: "Water cycles through Earth's systems, beginning with evaporation from oceans, lakes, and soil. Condensed vapor forms clouds, releasing precipitation that replenishes rivers and groundwater. Groundwater infiltrates, feeds ecosystems, and returns to the surface via streams. Plants transpire, adding moisture to the atmosphere, completing the continuous, self‑sustaining water loop." Count words. Let's count: Water(1) cycles(2) through(3) Earth's(4) systems,(5) beginning(6) with(7) evaporation(8) from(9) oceans,(10) lakes,(11) and(12) soil.(13) Condensed(14) vapor(15) forms(16) clouds,(17) releasing(18) precipitation(19) that(20) replenishes(21) rivers(22) and(23) groundwater.(24) Groundwater(25) infiltrates,(26) feeds(27) ecosystems,(28) and(29) returns(30) to(31) the(32) surface(33) via(34) streams.(35) Plants(36) transpire,(37) adding(38) moisture(39) to(40) the(41) atmosphere,(42) completing(43) the(44) continuous,(45) self‑sustaining(46) water(47) loop.(48) Need 2 more words. Add "everyday" and "cycle". So add at end: everyday cycle. Let's add: "everyday cycle." That adds two words: everyday(49) cycle(50). Ensure punctuation fine. Final sentence: ... loop. everyday cycle. 
Good.<|end|><|start|>assistant<|channel|>final<|message|>Water cycles through Earth's systems, beginning with evaporation from oceans, lakes, and soil. Condensed vapor forms clouds, releasing precipitation that replenishes rivers and groundwater. Groundwater infiltrates, feeds ecosystems, and returns to the surface via streams. Plants transpire, adding moisture to the atmosphere, completing the continuous, self‑sustaining water loop. everyday cycle.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
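As an illustration of the activation-count-based expert selection described in the pruning methodology above, here is a minimal, self-contained sketch. It is not the code used to build this model; the layer and expert counts match the architecture table, but the random activation matrix and the index-remapping logic are illustrative assumptions only.

```python
import torch

# Illustrative only: 24 layers and 32 experts as in GPT-OSS-20B, keeping 19 per layer.
# In practice the counts would come from logged router decisions, not random numbers.
num_layers, num_experts, keep = 24, 32, 19
activation_counts = torch.randint(0, 10_000, (num_layers, num_experts))

# Rank experts within each layer by how often the router selected them,
# then keep the `keep` most frequently activated experts per layer.
ranked = activation_counts.argsort(dim=-1, descending=True)
kept_experts, _ = ranked[:, :keep].sort(dim=-1)   # shape: (num_layers, keep)

# Build an old-index -> new-index map so the router's top-k outputs can be
# remapped onto the smaller expert list once the unused experts are dropped.
remap = torch.full((num_layers, num_experts), -1, dtype=torch.long)
for layer in range(num_layers):
    remap[layer, kept_experts[layer]] = torch.arange(keep)

print("Layer 0 keeps experts:", kept_experts[0].tolist())
```

A real pruning pass would additionally slice the expert weight tensors and the router projection down to the kept indices; that step is omitted here.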
AmanPriyanshu/gpt-oss-12.6b-specialized-instruction_following-pruned-moe-only-18-experts
AmanPriyanshu
2025-08-13T09:47:28Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:46:51Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (18 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 18 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~12.6B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 18 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 18 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 56.2% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 18 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 18 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-12.6b-specialized-instruction_following-pruned-moe-only-18-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.6b-specialized-instruction_following-pruned-moe-only-18-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-12.6b-specialized-instruction_following-pruned-moe-only-18-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.6b-specialized-instruction_following-pruned-moe-only-18-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Thesis Progress Dear Professor [Last Name], I hope this message finds you well. I am writing to request a brief meeting with you to discuss the current status of my thesis and to seek your guidance on the next steps. Could we meet at one of the following times next week? - Tuesday, April 12, at 10:00 a.m. - Wednesday, April 13, at 2:00 p.m. - Friday, April 15, at 11:30 a.m. If any of these slots are inconvenient, I would be happy to adjust to a time that better fits your schedule. Thank you for your time and consideration. I look forward to your reply. Sincerely, [Your Full Name] [Program / Department] [University] [Phone] | [Email]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather your ingredients and tools** – Get a bowl of eggs, a splash of milk or water, salt, pepper, and any optional add‑ons (cheese, herbs, diced veggies). Have a skillet, a spatula or fork, and a heat source ready. 2. **Whisk the eggs** – Crack the eggs into a bowl and beat them with a whisk or fork until the whites and yolks are fully blended. Add a small amount of milk or water (about 1 tablespoon per egg) to keep the mixture smooth. 3. **Season the mixture** – Sprinkle in a pinch of salt, pepper, and any other seasonings you like. Stir again to distribute the flavors evenly. 4. **Heat the skillet** – Place the skillet on the stove and turn the heat to medium‑high. Add a little butter, oil, or a splash of water to coat the pan and prevent sticking. 5. **Pour and cook** – Gently pour the whisked eggs into the hot skillet. Stir the eggs with a spatula or fork, lifting them from the center to the edges, until they form a light, fluffy scramble and the whites are just set. 6. **Serve** – Once the eggs are cooked to your liking, remove the skillet from heat, give a final stir, and plate the scrambled eggs. Add any fresh toppings or condiments, then enjoy!<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Let's craft: "Water cycles through Earth via evaporation, condensation, precipitation, infiltration, runoff, and groundwater flow. Sun heats oceans, forming vapor that condenses into clouds. Clouds release rain, feeding rivers and lakes. Water infiltrates soil, replenating aquifers, and returns to oceans through streams. This continuous loop sustains ecosystems and human life." Count words. Let's count: Water(1) cycles(2) through(3) Earth(4) via(5) evaporation,(6) condensation,(7) precipitation,(8) infiltration,(9) runoff,(10) and(11) groundwater(12) flow.(13) Sun(14) heats(15) oceans,(16) forming(17) vapor(18) that(19) condenses(20) into(21) clouds.(22) Clouds(23) release(24) rain,(25) feeding(26) rivers(27) and(28) lakes.(29) Water(30) infiltrates(31) soil,(32) replenating(33) aquifers,(34) and(35) returns(36) to(37) oceans(38) through(39) streams.(40) This(41) continuous(42) loop(43) sustains(44) ecosystems(45) and(46) human(47) life.(48) Need 2 more words. Add "daily" and "balance". So add at end: "daily balance." That adds two words: daily(49) balance(50). 
Good.<|end|><|start|>assistant<|channel|>final<|message|>Water cycles through Earth via evaporation, condensation, precipitation, infiltration, runoff, and groundwater flow. Sun heats oceans, forming vapor that condenses into clouds. Clouds release rain, feeding rivers and lakes. Water infiltrates soil, replenating aquifers, and returns to oceans through streams. This continuous loop sustains ecosystems and human life daily balance.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
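The "Speculative Decoding" use case listed above can be sketched with the `assistant_model` argument of `generate` in recent `transformers` versions: the pruned model drafts tokens and the full GPT-OSS-20B verifies them. This is an illustrative sketch, not a benchmarked recipe; the prompt is arbitrary, both models must fit in memory at once, and whether assisted generation actually yields a speedup for this pair is untested here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Both models share the same tokenizer, since pruning does not touch the embeddings.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

# Full model as the verifier, pruned model as the draft / assistant.
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-12.6b-specialized-instruction_following-pruned-moe-only-18-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "List three tips for writing clear emails."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
).to(target.device)

# assistant_model enables assisted (speculative-style) decoding in transformers.
outputs = target.generate(**inputs, max_new_tokens=256, assistant_model=draft)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```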
MonsterMMORPG/Generative-AI
MonsterMMORPG
2025-08-13T09:45:22Z
0
5
null
[ "region:us" ]
null
2023-12-25T21:32:42Z
# Constantly Updated Patreon Projects on: https://www.patreon.com/c/SECourses ## Patreon Exclusive Content ### March 25 - 2025 [Wan 2.1 Ultra Advanced Gradio APP for - Works as low as 4GB VRAM - 1-Click Installers for Windows, RunPod, Massed Compute - Batch Processing - T2V - I2V - V2V](https://www.patreon.com/posts/123105403) * 1-Click to install and use the most powerful Image to Video, Text to Video and Video to Video open source model ever released: Wan 2.1. Public post to see APP features: https://www.patreon.com/posts/123114193 ### March 25 - 2025 [ComfyUI Auto Installer with FaceID, IP-Adapter, InsightFace, Reactor, Triton, DeepSpeed, Flash Attention, Sage Attention Automatic Installers for Windows, RunPod, Massed Compute, Linux](https://www.patreon.com/posts/105023709) * 1-Click auto installers for the latest version of ComfyUI for Windows, Massed Compute and RunPod. Installs the latest version of ComfyUI into an isolated Python venv. Auto downloads the best SDXL and SD 1.5 models and auto installs ComfyUI Manager, FaceID, InsightFace, Triton, DeepSpeed, Flash Attention, Sage Attention and IP Adapter for ComfyUI in a Windows Python VENV (not WSL). Also includes an automatic downloader.py file to download all of the IP Adapter, InsightFace and FaceID models for you. Moreover, we have a 1-Click installer for the Reactor extension with all its dependencies and auto download of the necessary models. ### March 20 - 2025 [MMAudio 1-Click Installers for Windows, RunPod and Massed Compute - Generate Audio for Any Video - Amazing for AI Generated Videos](https://www.patreon.com/posts/117990364) * 1-Click installers to install MMAudio. This app is so lightweight and fast that it should work on pretty much every GPU. I have also improved the officially published Gradio app and added more features. MMAudio generates synchronized audio given video and/or text inputs. Our key innovation is multimodal joint training which allows training on a wide range of audio-visual and audio-text datasets. Moreover, a synchronization module aligns the generated audio with the video frames. ### March 20 - 2025 [MagicQuill 1-Click Installers for Windows, RunPod and Massed Compute - Amazing and Ultra Fast Inpaint Model](https://www.patreon.com/posts/117326651) * 1-Click installer files for the MagicQuill app to install on Windows, RunPod and Massed Compute: https://github.com/magic-quill/MagicQuill ### March 20 - 2025 [VisoMaster Automatic Installer - The Most Advanced 0-Shot Face Swap / Deep Fake APP - State of the Art - Windows and Massed Compute](https://www.patreon.com/posts/121570322) * 1-Click to install VisoMaster on Windows and also on Massed Compute (for the GPU poor). VisoMaster is a powerful yet easy-to-use tool for face swapping (FaceSwap/DeepFake) and editing in images and videos. It utilizes AI to produce natural-looking results with minimal effort, making it ideal for both casual users and professionals. ### March 18 - 2025 [Invoke AI Latest Version Windows, RunPod and Massed Compute 1-Click Installers](https://www.patreon.com/posts/112912425) * 1-Click to install the latest version of InvokeAI on your Windows computer, on RunPod and on Massed Compute, with a super detailed and easy tutorial video and written scripts. This zip file also has instructions to use InvokeAI in your Windows computer's browser while it runs on Massed Compute, securely, by using PowerShell commands. 
### March 16 - 2025 [LivePortrait Upgraded Latest Version Human and Animal Version 1-Click Installers for Windows, RunPod, Massed Compute and a Free Kaggle Account Notebook - Blazing Fast Static Image or Video to Video to Talking and Moving Animation](https://www.patreon.com/posts/119254105) * 1-Click installers for the latest and improved version of LivePortrait for Windows, RunPod, Massed Compute and a free Kaggle Account notebook. It supports the latest LivePortrait animal v1.1 version. ### March 15 - 2025 [Most Advanced 1-Click DeepFake / FaceSwap App Rope, Rope Live, Rope Alucard and Rope NEXT Installers for Windows and Massed Compute and Linux](https://www.patreon.com/posts/105123768) * The easiest and most powerful 1-click DeepFake / FaceSwap open source app Rope, Rope Live, Rope Alucard and Rope NEXT installers for Windows, Massed Compute (Cloud) and Linux, with a lot of configurations and test results shared ### March 13 - 2025 [A Kaggle Notebook to Train Stable Diffusion 1.5 and XL (SDXL) on a Free Kaggle Account with 2x Dual T4 GPU for free by using Kohya GUI](https://www.patreon.com/posts/88397937) * A Kaggle Notebook to do Stable Diffusion 1.5 and XL (SDXL) training. Fully supports 2x dual T4 GPUs to speed up training. Kohya GUI is used to do DreamBooth / Fine-Tuning and LoRA trainings. ### March 13 - 2025 [Hugging Face Upload / Download Notebook - Supports Private Repositories and Multi Commit as Well](https://www.patreon.com/posts/104672510) * If you are looking for a convenient and fast way to save and download your files from Hugging Face, this notebook will do the job. 1-click easy ### March 13 - 2025 [Virtual Try-on (IDM-VTON) 1 Click Installers - Try Any Clothing Immediately On Anyone - Windows - RunPod - Massed Compute - Kaggle - Works even on Object Transfer](https://www.patreon.com/posts/122718239) * 1-Click installers for IDM-VTON (one of the very best virtual try-on apps for clothing and anything else) for Windows, RunPod, Massed Compute and a free Kaggle account. Our app also has extra features compared to the official IDM-VTON. It automatically crops and pastes back images and supports quantization and CPU offloading. Public post for more info: https://www.patreon.com/posts/122721073 ### March 12 - 2025 [AuraSR GigaGAN 4x Upscaler Gradio APP, Installers for Windows, RunPod, Massed Compute and free Kaggle Account with Seams Fix and Batch Processing](https://www.patreon.com/posts/121441873) * 1-Click to install and use the famous newest AuraSR GigaGAN 4x upscaler locally (Windows) and also on popular cloud services, with batch upscaling ### March 11 - 2025 [SwarmUI Easy Ultra Fast and Robust Unified Downloader for Stable Diffusion 3.5, FLUX, Mochi 1, SDXL and SD 1.5](https://www.patreon.com/posts/114517862) * Download the very best Stable Diffusion 3.5 Large, FLUX, SDXL, Mochi 1 (SOTA text-to-video), FLUX Tools (Inpainting, Outpainting, Canny, Depth, Redux), latent upscaler models like ESRGAN, the improved new CLIP-L that works better, and SD 1.5 models into the correct SwarmUI folders with 1 click, ultra fast and robust. ### March 5 - 2025 [Blazing Fast SD Forge Web UI Latest Version Windows, RunPod and Massed Compute Automatic Installers and Unified Model Downloaders for SD 1.5, SDXL, FLUX and More Newer Models](https://www.patreon.com/posts/118442039) * 1-Click to install the latest SD Forge Web UI on Windows, RunPod and Massed Compute and download all of the amazing FLUX, SD 1.5, SDXL and SD 3.5 models and many more. 
Our installers and downloader scripts are super optimized, so that you will see even 1000 MB per second download speeds if your internet speed is sufficient. ### March 5 - 2025 [1 Click Installer for Automatic1111 SD Web UI, SDXL, ControlNet, All ControlNet Models, TensorRT (RTX Accelerator) on Windows](https://www.patreon.com/posts/86307255) * Automatic Windows installer script for SDXL and Automatic1111 Web UI. Downloads the latest SDXL base with fixed VAE and the best SD 1.5 and SDXL models. Moreover, it automatically installs and lets you download the newest NVIDIA RTX Accelerator - TensorRT, which brings a 70%+ speed up. Moreover, it will automatically install ControlNet and download all available ControlNet models for you. Furthermore, it will auto install After Detailer (ADetailer) and Reactor extensions and the latest Torch and xFormers. All these installations are optional and you can install any of them. ### February 25 - 2025 [Gradio APP for Deterministic Text and Graphs Having Images Upscaling based on ImageMagick - Windows, RunPod, Massed Compute](https://www.patreon.com/posts/123071348) * If you need to upscale images with 100% accurate text preservation, this APP is what you need. A Gradio based multi-threaded batch upscaler APP that utilizes ImageMagick with amazing upscale presets. Public post to see APP features: https://www.patreon.com/posts/123073046 ### February 22 - 2025 [Free Kaggle Account Notebook for SwarmUI with FLUX, SD 1.5, SDXL & Stable Diffusion 3.5 Large, FLUX, Hunyuan and Dual T4 GPU support](https://www.patreon.com/posts/106650931) * Use the very advanced SwarmUI on a free Kaggle account for free with dual T4 GPUs. Fully supports SD 1.5, SDXL, SD3, FLUX, FLUX Tools (Redux, Canny, Depth, Inpainting), Stable Diffusion 3.5 Large, Stable Cascade, Hunyuan, SkyReels, Mochi 1 and more ### February 7 - 2025 [BiRefNet HR (High Resolution) Gradio APP and 1-Click Installers for Windows, RunPod, Massed Compute and a Free Kaggle Account Notebook](https://www.patreon.com/posts/121679760) * BiRefNet HR version automatic installers for Windows, RunPod, Massed Compute and a free Kaggle notebook. The very best SOTA background remover with a Gradio APP. It is updated to the newest High Resolution model and fully supports batch processing with a new half precision lower VRAM feature. ### February 2 - 2025 [DeepFace Based Batch Face Similarity Sorting Gradio APP For Windows, RunPod and Massed Compute - 1-Click to Install - Uses TensorFlow GPU - Very Fast](https://www.patreon.com/posts/121335747) * With the DeepFace & RetinaFace libraries you can sort AI images, or basically any images, based on their similarity to the given single or multiple images (average taken). We have developed a batch processing Gradio APP for this task that installs libraries into a Python 3.10 VENV and works perfectly on Windows. We also have 1-click installers for RunPod and Massed Compute. The APP is fully multi-threaded. ### February 1 - 2025 [1-Click Installers for Paints-UNDO Advanced Gradio APP, Windows, RunPod, Massed Compute and Kaggle Installers](https://www.patreon.com/posts/121228327) * 1-Click installers for Paints-UNDO from lllyasviel. 1-Click install on Windows (Python 3.10 isolated VENV), RunPod, Massed Compute and a free Kaggle Notebook. 
The official repo is here, but our APP is an improved version: https://github.com/lllyasviel/Paints-UNDO ### January 31 - 2025 [EasyAnimate 1-Click Install Windows, RunPod and Massed Compute, Newest SOTA Local Open Source Image-to-Video, Text-to-Video and More](https://www.patreon.com/posts/115888558) * 1-Click installers for EasyAnimate. It is literally Runway but open source and FREE: Text-to-Video, Image-to-Video (both beginning and ending frame), Video-to-Video. Works on 24 GB GPUs on Windows, supports 960px resolution, and supports very long videos with overlap. Install on Windows, RunPod and Massed Compute ### January 20 - 2025 [Ultimate Image Processing APP : Batch Cropping, Zooming In, Resizing, Duplicate Image Removing, Face Extraction, SAM 2 and Yolo Segmentation, Masking for Windows, RunPod, Massed Compute and Free Kaggle Account](https://www.patreon.com/posts/120352012) * If you want to batch pre-process your training images, like auto zooming to the subject and resizing perfectly into an exact resolution, this is the script you need. 1-Click to install and use on Windows, RunPod, Massed Compute and even Kaggle. It supports YOLO V11 and SAM 2. Moreover, it has duplicate image removal as well. You can see all features in this public post: https://www.patreon.com/posts/120353641 ### January 17 - 2025 [1-Click Installers for Most Powerful Vision Model CogVLM V2 and Batch Captioning Images, Install on Windows, RunPod and Massed Compute](https://www.patreon.com/posts/120193330) * CogVLM 2 is an open source vision model that is on par with OpenAI's GPT-4. This post has 1-Click installers for the CogVLM 2 Gradio APP. It installs into a Python 3.10 VENV and fully supports Triton and 4-bit quantization. Works amazingly well. Has batch processing and many other features. For features, screenshots and more information check this public post: https://www.patreon.com/posts/120195496 ### January 16 - 2025 [TRELLIS 1-Click Installers for Windows, RunPod, Massed Compute, New SOTA Image-to-3D Full Model - Mind Blowing](https://www.patreon.com/posts/117470976) * 1-Click installers to install TRELLIS on Windows, RunPod and Massed Compute. https://github.com/microsoft/TRELLIS. This is the SOTA model for image to full 3D generation and it is blazing fast ### January 13 - 2025 [From NVIDIA Labs SANA Text-to-Image Model 1-Click Installers for Windows, RunPod, Massed Compute and Free Kaggle Account](https://www.patreon.com/posts/116474081) * 1-Click installers for the NVIDIA Labs SANA (Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer) model with an amazing Gradio APP developed by SECourses. Windows, RunPod, Massed Compute and Kaggle. A pretty lightweight and fast model to use. 1K (1024x1024): 4 GB GPUs. 2K (2048x2048): 6 GB GPUs. 4K (4096x4096): 8 GB GPUs ### January 10 - 2025 [Ultra Advanced InstantID Gradio APP - Automatic Installers for Windows, RunPod and Massed Compute and Free Kaggle - Using the Very Best ControlNet Xinsir Models](https://www.patreon.com/posts/118469722) * Ultra advanced InstantID Gradio APP: 0-shot Identity-Preserving Generation in Seconds, with the latest and best Xinsir ControlNet models. 
We have so many extra features compared to the official InstantID APP, plus a 1-click, very easy install on Windows, RunPod, Massed Compute and a free Kaggle Account notebook that works perfectly ### January 8 - 2025 [IC-Light (Most Advanced 1-Click Relight / Re-Light) 1-Click Installer for Windows, RunPod, Massed Compute & Kaggle Notebook](https://www.patreon.com/posts/119566071) * 1-click auto installer scripts for IC-Light (Re-Light / Relight any image) for Windows, Massed Compute, RunPod, Linux and a free Kaggle Notebook ### January 4 - 2025 [Very Advanced Joy Caption APPs, Supports Batch Processing and Multi-GPU Captioning, Supports Joy Caption Pre Alpha, Alpha One, Alpha Two, 1-Click installers](https://www.patreon.com/posts/118827960) * Ultra advanced Gradio app for JoyCaption Pre-Alpha, Alpha One and Alpha Two with batch image captioning in addition to Multi-GPU support. 1-Click to install on Windows, RunPod and Massed Compute ### January 1 - 2025 [1-Click Installers for One of the Best Face Swap / Easy DeepFake APPs Roop Unleashed for Windows, RunPod and Massed Compute](https://www.patreon.com/posts/119081500) * 1-Click installers for one of the very best Deep Fake / Face Swap apps, Roop Unleashed, for Windows, RunPod and Massed Compute. This app uses a Gradio interface and also supports live webcam replacement for streams like Google Meet. ### December 23 - 2024 [FaceFusion V3 (Most Advanced 1-Click DeepFake) 1-Click Installers for Windows, RunPod and Massed Compute - Use locally and cloud](https://www.patreon.com/posts/103765029) * FaceFusion 3 Windows (local), RunPod and Massed Compute (Cloud) 1-Click installers. Install and run with ease. The best DeepFake APP ### December 20 - 2024 [SUPIR 1 Click Windows, RunPod / Massed Compute / Linux Installer & Free Kaggle Notebook](https://www.patreon.com/posts/99176057) * 1-Click Windows and RunPod / Massed Compute / Linux installer and a free Kaggle Notebook for the new SOTA image upscaling and enhancing open source SUPIR model. Better than Magnify AI. SUPIR is the very best AI image upscaler at the moment. ### December 11 - 2024 [1-Click Automatic Installers for Hunyuan3D-1 Text to 3D and Image to 3D SOTA Model, Install on Windows, RunPod and Massed Compute](https://www.patreon.com/posts/115412205) * 1-Click Python venv installers for Hunyuan3D-1.0. The installers support Windows (use locally - works on 24 GB GPUs super fast, tested on RTX 3090), RunPod and Massed Compute (use on cloud). Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation: https://github.com/tencent/Hunyuan3D-1 ### December 4 - 2024 [SwarmUI Master Tutorial - Use Stable Diffusion 3.5 Large and FLUX model with Amazing Performance and more](https://www.patreon.com/posts/106135985) * In this public tutorial post I share all the necessary information regarding how to use SD 1.5, SDXL, Stable Diffusion 3, Stable Diffusion 3.5 Large and FLUX (the new SOTA model by Black Forest Labs) on your computer and also on the cloud (Massed Compute, RunPod and a free Kaggle account). ### November 20 - 2024 [Kohya FLUX Fine Tuning (Full Checkpoints) Training Full Tutorial For Local Windows and Cloud RunPod and Massed Compute](https://www.patreon.com/posts/112099700) * Fully researched and optimized, the very best full FLUX fine tuning configurations and workflows, which work way better than FLUX LoRA training. Perfectly trainable on 6 GB, 8 GB, 10 GB, 12 GB, 16 GB, 24 GB and 48 GB GPUs. 
The 48 GB GPU config and the 6 GB GPU config yield exactly the same quality; only the speed is different. ### November 20 - 2024 [Kohya FLUX LoRA Training Full Tutorial For Local Windows and Cloud RunPod and Massed Compute](https://www.patreon.com/posts/110879657) * Step-by-step Kohya FLUX LoRA training tutorial with perfect hyperparameters and configuration, including configs for 8GB, 10GB, 12GB, 16GB, 24GB and 48GB GPUs. It works perfectly on all these GPUs, both on Windows and on Linux. It covers cloud services such as RunPod and Massed Compute as well. This tutorial is the research result of more than 64 full FLUX LoRA trainings using Kohya SS GUI ### November 16 - 2024 [1-Click Windows, RunPod, Massed Compute and Kaggle Installers For SDXL Background Replacement for Product Images - Make Amazing Product Ads For Shopify](https://www.patreon.com/posts/89914747) * 1-Click install the Shopify Product Background Replacer (open source) and make amazing ad pictures for your products on your computer with SDXL. We have 1-Click installers for Windows, RunPod, Massed Compute and a ready-to-run Kaggle Notebook, with a much improved model downloading technique as well. ### November 3 - 2024 [OmniGen 1-Click Automatic Installers for Windows, RunPod and Massed Compute](https://www.patreon.com/posts/115233922) * 1-Click Python venv installers for OmniGen. The installers support Windows (use locally), RunPod and Massed Compute (use on cloud). OmniGen is a unified image generation model that can generate a wide range of images from multi-modal prompts. It is designed to be simple, flexible, and easy to use ### November 1 - 2024 [Perfect Quality Example Model Training Images Dataset - Can Be Used On FLUX, Stable Diffusion 3.5, 3, SDXL, SD 1.5 and Such](https://www.patreon.com/posts/114972274) * If you are looking for an example model training dataset that is properly prepared, to learn how to prepare and test, this is the dataset! You can use this dataset to train your SD 1.5, SDXL, FLUX, Stable Diffusion 3 and Stable Diffusion 3.5 Large models and see how it performs. You can analyze this dataset and read the post to understand how to generate your perfect dataset for training not only a person but also a style, an item, an object and such. ### October 31 - 2024 [FLUX De-Distilled and Anti-Bleeding Fine-Tuning / DreamBooth & LoRA Training Experiments - Also Testing CFG Impact for Stylized Images](https://www.patreon.com/posts/114969137) * Research on fixing the FLUX bleeding / mixing problem. Training multiple concepts / subjects. Searching and testing FLUX De-Distilled models. The aim is to prevent the model from turning entirely into you, and to train multiple styles, objects, persons, items and products at once without them getting mixed / bled. ### October 2 - 2024 [SOTA Image Captioning Scripts For Stable Diffusion: CogVLM, CogVLM V2, Kosmos-2, Qwen-VL, LLaVA, BLIP-2, Clip-Interrogator (115 Clip Vision Models + 5 Caption Models)](https://www.patreon.com/posts/sota-image-for-2-90744385) * 1-Click installers for Windows and Cloud (RunPod & Massed Compute): a collection for LLaVA, Kosmos-2, BLIP-2, CLIP Vision, CogVLM V1, CogVLM V2, Qwen-VL and a CLIP Interrogator web APP. All APPs support batch captioning as well. Amazingly fast and easy to use with Gradio. 
### September 27 - 2024 [1-Click CogVideoX-5b Image to Video Installers For Windows, RunPod, Massed Compute - SOTA Open Source Model](https://www.patreon.com/posts/112848192) * 1-Click to install and use the CogVideoX-5B SOTA image-to-guided-video generation model on your PC locally or on RunPod and Massed Compute ### September 20 - 2024 [Image Captioning Editor Gradio APP - Edit Your Captions Super Easy Including Batch Editing - For Windows, RunPod and Massed Compute](https://www.patreon.com/posts/108992085) * Extremely advanced, feature-rich image captioning Gradio APP developed by SECourses. 1-Click to install and run on Windows and also Cloud (RunPod & Massed Compute). Use this application to edit and finalize your Stable Diffusion training dataset image captions. ### September 13 - 2024 [Training a FLUX LoRA with 256 Images Experiments - Full Workflow and Conclusions](https://www.patreon.com/posts/111891669) * Training a perfect FLUX LoRA that learns even broken teeth details, full body details and full expressions, with 256-image experiments ### August 27 - 2024 [FLUX Models 1-Click Auto Downloaders for SwarmUI for Windows, RunPod and Massed Compute](https://www.patreon.com/posts/109289967) * Auto downloader for FP16, FP8 and quantized FLUX models for Windows, RunPod and Massed Compute, with the T5 text encoder and FLUX VAE ### August 27 - 2024 [1-Click to install on Windows, RunPod and Massed Compute Kohya FLUX LoRA and Fine Tuning Training Full Tutorial For Local Windows and Cloud RunPod and Massed Compute](https://www.patreon.com/posts/110293257) * Full research and development article with configs, workflows and grids shared for FLUX LoRA training. The article contains configs for 8GB, 10GB, 12GB, 16GB, 24GB and 48GB GPUs. The configs even include a 4x GPU setup as well ### August 23 - 2024 [SOTA Subject Cropper and Face Focused Image Resizer Scripts Do Better Training](https://www.patreon.com/posts/sota-subject-and-88391247) * State Of The Art (SOTA) Subject Cropper (Zoom Subject Without Quality Loss) and SOTA Image Downscaler To Get Perfect Desired Resolution. These scripts will significantly improve your training quality. ### August 19 - 2024 [ResShift 1-Click Windows, RunPod, Massed Compute, Kaggle Installers with Amazing Gradio APP and Batch Image Processing](https://www.patreon.com/posts/110331752) * 1-Click installer scripts for ResShift for Windows, RunPod, Massed Compute and Kaggle, and a very advanced Gradio app with batch processing. ResShift is "Efficient Diffusion Model for Image Super-resolution by Residual Shifting" (NeurIPS 2023, Spotlight) ### August 17 - 2024 [The Very Best Workflow For SDXL DreamBooth / Full Fine Tuning - Results Of 100+ Full Trainings](https://www.patreon.com/posts/very-best-for-of-89213064) * Updated the very best training hyperparameters / configuration and training workflow for Kohya SS GUI for Stable Diffusion XL (SDXL) ### August 16 - 2024 [OneTrainer Stable Diffusion XL (SDXL) Fine Tuning Best Presets](https://www.patreon.com/posts/96028218) * Nerogar OneTrainer's very best Stable Diffusion XL (SDXL) full fine tuning presets. 
A 10.3 GB GPU is fully sufficient and fast ### August 14 - 2024 [Image Folders Merger For Easy Comparison - Merge Images Side by Side - Useful to Compare Upscaling and Such Effect](https://www.patreon.com/posts/110108419) * A Python script to upscale the lower resolution images in folders A and B into higher resolution ones without any upscale algorithm and merge them side by side ### July 19 - 2024 [Auto Windows Installer For Würstchen: Fast Diffusion for Image Generation](https://www.patreon.com/posts/auto-windows-for-89265135) * Install the latest Generative AI model Würstchen V2 on your computer with 1 click. A fixed file instead of the broken Gradio demo hosted on Hugging Face ### June 24 - 2024 [Fooocus Stable Diffusion Web UI Kaggle NoteBook](https://www.patreon.com/posts/fooocus-stable-94269866) * Fooocus Stable Diffusion Web UI free Kaggle account notebook. Use SDXL on Kaggle for free, like Midjourney, without even a computer. This is the closest you can get to using Midjourney for free. ### June 21 - 2024 [Tencent AI Lab - V-Express Image to Animation Gradio Web APP and 1-Click Installers for Windows, Massed Compute, RunPod and Kaggle](https://www.patreon.com/posts/105251204) * 1-Click to turn your static image into a fully animated talking video from either an input audio or video file via Tencent AI Lab - V-Express - an open source alternative to D-ID and the like ### June 14 - 2024 [All Amazing Styles Of Fooocus For Automatic1111 SD Web UI and StableSwarmUI also necessary Scripts to generate them](https://www.patreon.com/posts/95143823) * 275 amazing Fooocus SDXL styles in the format of Automatic1111 SD Web UI and also as a preset for StableSwarmUI, with thumbnail preview images. Moreover, full Python scripts to generate and update these styles and preset files ### June 13 - 2024 [Find And Move Duplicate or Very Similar Images By Using imagehash - Batch Processing Super Fast](https://www.patreon.com/posts/find-and-move-or-95143007) * If you want to find duplicate or near-duplicate images very fast, this script is what you are looking for. It analyzes the content of images, so it works amazingly well ### June 11 - 2024 [1-Click Installers for CodeFormer: Robust Face Restoration and Enhancement Network, Windows, RunPod, Massed Compute, Linux, Kaggle](https://www.patreon.com/posts/104691847) * 1-Click auto installers for the CodeFormer standalone Gradio APP with advanced features. The installers cover Windows, Massed Compute, Linux, Kaggle and RunPod. You can use it on a free Kaggle account as well with our Kaggle notebook. This app also has a batch folder processing feature and works many times better than Automatic1111 SD Web UI ### June 7 - 2024 [Massed Compute Automatic1111 and Forge Web UI Installers for ADetailer, ControlNet, TensorRT, Reactor, FaceFusion](https://www.patreon.com/posts/105735932) * 1-Click installers for the latest version of Automatic1111 Web UI, SD Forge Web UI, ControlNet, TensorRT, Reactor, FaceFusion and ADetailer. Moreover, the virtual machine comes by default with the OneTrainer, Kohya and Pinokio AI APPs installed, and you can also update them with 1-Click. ### June 6 - 2024 [1 Click Installer for Automatic1111 SD Web UI, SDXL, ControlNet, All ControlNet Models, TensorRT (RTX Accelerator) For RunPod / Any Linux System](https://www.patreon.com/posts/86438018) * Automatic RunPod (any Linux system) installer script for SDXL and Automatic1111 Web UI. Downloads the latest SDXL base with fixed VAE and the best SD 1.5 and SDXL models. Moreover, it automatically installs and lets you download the newest NVIDIA RTX Accelerator - TensorRT, which brings a 70%+ speed up. 
Moreover, it will automatically install ControlNet and download all available ControlNet models for you. Furthermore, it will auto install After Detailer (ADetailer) and Reactor extensions and the latest Torch and xFormers. All these installations are optional and you can install any of them. ### May 27 - 2024 [IP-Adapter-FaceID-PlusV2 - 0 Shot Face Transfer - Auto Installer & Gradio App](https://www.patreon.com/posts/ip-adapter-0-app-95759342) * 1-Click auto install IP-Adapter-FaceID-PlusV2. Use it with an advanced standalone Gradio app. 0-shot face transfer to generate images. ### May 25 - 2024 [Run Automatic1111 SD Web UI On A Free Kaggle NoteBook Like In Your PC - Supports SDXL & ControlNet](https://www.patreon.com/posts/run-on-free-like-88714330) * A free Kaggle account notebook to use Automatic1111 for free. Supports SDXL, ControlNet, LoRA, trained LoRAs & automatic extension install. Works like you have a very strong computer. Dual 15 GB GPUs and 29 GB RAM provided for free by Kaggle. Auto downloads all of the ControlNet models for both SD 1.5 and SDXL models, including even IP Adapter Face ID Plus and InstantID ### May 20 - 2024 [Massed Compute Installers - Upgrade Automatic1111 - Coupon Code - ControlNet - ADetailer - Facefusion - Reactor & More](https://www.patreon.com/posts/101386817) * Massed Compute scripts & coupon code: A6000 GPU for 31 cents per hour, Automatic1111 SD Web UI, Kohya, OneTrainer, After Detailer (ADetailer), Reactor, Facefusion, Forge & more ### May 16 - 2024 [Stable Cascade 1 Click Installer & Advanced Gradio APP For Windows, Massed Compute, RunPod, Linux & Kaggle](https://www.patreon.com/posts/stable-cascade-1-98410661) * 1-Click to install the Stable Cascade model & use it on your PC or on RunPod, Massed Compute or Kaggle with amazing optimizations (works on a 5GB GPU) & an advanced GUI ### April 28 - 2024 [Fooocus SD Web UI RunPod & Massed Compute Auto Installer - 1 Click - Latest Version](https://www.patreon.com/posts/fooocus-sd-web-1-92759045) * Automatic installer for the Stable Diffusion Fooocus Web UI on RunPod and also Massed Compute. 1 Click. Use Fooocus on RunPod & Massed Compute with all models and features. Latest version. Follow the instructions on the Patreon post. ### April 18 - 2024 [For RunPod - Automatic Kohya SS LoRA Installer](https://www.patreon.com/posts/for-runpod-kohya-84898806) * This script will automatically install Kohya SS on RunPod. Additionally, I have added an after-Pod-restart script which will fix the installation. ### March 22 - 2024 [The Very Best OneTrainer Workflow & Config For SD 1.5 Based Models DreamBooth / Full Fine Tuning](https://www.patreon.com/posts/very-best-config-97381002) * Download the very best training configuration and learn the workflow for the OneTrainer GUI Stable Diffusion trainer & obtain amazing quality. The workflow was discovered after 70 empirical model trainings. ### March 20 - 2024 [Bit-By-Bit Disk & File Verification Software In C#, Fully Multi-Threaded, With Full Source Code - Verify Disk Clone](https://www.patreon.com/posts/76398813) * This application, developed in C#, is extremely efficient at verifying every bit of cloned disks. It can also be used for file migration/backup verification. Full source code is available with a pre-compiled exe file. It is fully multi-threaded. ### February 15 - 2024 [1 Click Auto Windows Installer For Rerender A Video - 1 Click Video To Anime](https://www.patreon.com/posts/89457537) * Rerender is an amazing new paper that allows you to turn videos into anime with 1 click. 
Auto install scripts and instructions are provided here ### February 5 - 2024 [1 Click Auto RunPod Installer For Rerender A Video - 1 Click Video To Anime](https://www.patreon.com/posts/1-click-auto-for-91039997) * Rerender is an amazing new AI that allows you to turn videos into anime with 1 click. RunPod auto install scripts and instructions are here. ### January 29 - 2024 [The Very Best Kohya GUI Workflow & Config For SD 1.5 Based Models DreamBooth / Full Fine Tuning](https://www.patreon.com/posts/very-best-kohya-97379147) * Download the very best training configuration and learn the workflow for the Kohya SS GUI Stable Diffusion trainer & obtain amazing quality. The workflow was discovered after 70 empirical model trainings. ### January 23 - 2024 [Download 160 Very Best Stable Diffusion 1.5 Based (SD 1.5) Models With 1 Click](https://www.patreon.com/posts/96666744) * 1 click to download the very best 160+ Stable Diffusion 1.5 models (SD 1.5) from CivitAI and Hugging Face onto your PC, RunPod or Cloud. ### January 16 - 2024 [PixArt-alpha (PixArt-α) Automatic Installer For Both Windows And RunPod With Additional Web UI Features](https://www.patreon.com/posts/pixart-alpha-for-93614549) * Auto installer scripts with an advanced Gradio Web APP to install and use PIXART-α (PixArt-alpha - SDXL rival) on both Windows and RunPod. ### January 14 - 2024 [Tortoise TTS Fast (tortoise-tts-fast) Windows Auto Installer BAT Script](https://www.patreon.com/posts/tortoise-tts-tts-90496485) * 1-Click installer for tortoise-tts-fast on Windows. It will make its own VENV, so it will not affect any other AI apps such as Stable Diffusion. ### January 1 - 2024 [Magic Animate Automatic Installer and Video to DensePose Auto Converter For Windows And RunPod](https://www.patreon.com/posts/94098751) * Automatically install magic-animate on both Windows and RunPod. Also automatically generate DensePose from raw videos via the best detectron2. Includes a standalone CodeFormer Gradio Web APP too, for improving faces in videos fully automatically. ### December 23 - 2023 [Batch Image Metadata Generator - Extremely Useful For Automatic1111 SD Web UI](https://www.patreon.com/posts/95176238) * If you want to batch generate metadata of images with just one click, this is the script you are looking for. Extremely useful for SD Web UI ### December 23 - 2023 [All Amazing Styles Of Fooocus As Automatic1111 SD Web UI Styles File And Styles File Generator](https://www.patreon.com/posts/all-amazing-of-95143823) * 275 amazing Fooocus styles in a single Styles.csv file compatible with Automatic1111, and a Styles.csv generator for the Fooocus styles folder. 
### December 4 - 2023 [Auto Installer For AudioCraft Plus - MusicGen - AudioGen - An All-in-One AudioCraft WebUI](https://www.patreon.com/posts/ai-music-auto-84334460) * Auto installer Windows bat files for AudioCraft Plus - MusicGen - AudioGen - an all-in-one AudioCraft WebUI - Facebook Research / Audiocraft ### November 27 - 2023 [Massive 4K Resolution Woman & Man Class Ground Truth Stable Diffusion Regularization Images Dataset](https://www.patreon.com/posts/massive-4k-woman-87700469) * 4K+ resolution, 5200 images for each gender: hand-picked ground truth real man & woman regularization images for Stable Diffusion & SDXL training - 512px 768px 1024px 1280px 1536px and more ### November 25 - 2023 [SOTA (The Very Best) Image Captioning Models Script For Stable Diffusion And More](https://www.patreon.com/posts/sota-very-best-90744385) * 1-Click install and use SOTA image captioning models on your computer. Supports 8-bit loading as well. 90+ CLIP Vision and 5+ Caption models. Supports laion/CLIP-ViT-bigG-14-laion2B-39B-b160k too. Supports a total of 115 CLIP and 5 Caption model combinations. ### November 20 - 2023 [Image Validator Script For Training - Moves Corrupted Images](https://www.patreon.com/posts/image-validator-85618765) * The attached script below will test each one of your images and move the ones that are corrupted (which break training) into another folder. Another script will scan and log but not move. ### November 17 - 2023 [Automatic ControlNet Installer And Downloader For Windows BAT File](https://www.patreon.com/posts/84875387) * The scripts will clone the ControlNet repo and download all of the ControlNet models, with SDXL, into the correct folder automatically for Windows ### November 12 - 2023 [Gender Classifier - Low Colors & Multiple Face Remover - Stable Diffusion Training Images Preprocessor](https://www.patreon.com/posts/92607385) * Gender Classifier - Low Colors & Multiple Face Remover - Stable Diffusion Training Images Preprocessor. Utilizes SOTA models and techniques. Supports GPU RetinaFace too. ### November 9 - 2023 [Automatic ControlNet Installer / Updater - Model Downloader For RunPod](https://www.patreon.com/posts/84896373) * This script will update the ControlNet extension to its latest version and also automatically download all model files of ControlNet ### November 6 - 2023 [Auto Installer Bat Files For Automatic1111 & DreamBooth Extension On Windows](https://www.patreon.com/posts/auto-installer-84773926) * The included BAT script files will fully automatically clone and install Automatic1111 SD Web UI and the DreamBooth extension for you on Windows. ### October 28 - 2023 [RunPod Auto DreamBooth Extension Of Automatic1111 Web UI & Latest Libraries Installer Script](https://www.patreon.com/posts/runpod-auto-84716845) * This script will install a working version of the DreamBooth extension of Automatic1111 Web UI fully automatically for you on RunPod. 
### October 24 - 2023 [Automatic1111 Web UI Google Colab NoteBook With All ControlNet Models And More](https://www.patreon.com/posts/automatic1111-ui-89288738) * Automatic1111 Web UI Google Colab Notebook With All ControlNet Models, SDXL Model, Best SD 1.5 Model, LoRA Download Example, Upscaler, SDXL LoRAs, SDXL ControlNet All Models & More ### October 5 - 2023 [Amazing Prompt List For DreamBooth or LoRA Trained Stable Diffusion XL (SDXL) & SD 1.5 Based Models](https://www.patreon.com/posts/amazing-prompt-1-90346033) * Specially crafted very best Stable Diffusion XL (SDXL) + SD 1.5 based models prompt list for DreamBooth and LoRA trained models. ### September 14 - 2023 [Google Colab Notebook For Würstchen: Fast Diffusion for Image Generation](https://www.patreon.com/posts/google-colab-for-89280042) * Würstchen V2 model on a free Google Colab Notebook with instructions. Super quality Generative AI like Stable Diffusion XL (SDXL) but faster. ### September 12 - 2023 [How To Start Multiple Automatic1111 Web UI And Kohya Training On A Single Pod](https://www.patreon.com/posts/how-to-start-web-89150521) * Download the webui-user.sh and relauncher.py files and follow the instructions to start multiple Automatic1111 instances on different GPUs on a single RunPod. ### August 13 - 2023 [Convert Very Long X/Y/Z Plot Output Images Into Square Grids](https://www.patreon.com/posts/convert-very-x-y-87608128) * A script to convert a very long X/Y/Z Plot image into a chunked Square Grid Image. Examples are attached. ### August 8 - 2023 [1 Click RunPodCTL Installer .bat File - Script](https://www.patreon.com/posts/1-click-bat-file-87505171) * 1 Click installer for runpodctl. runpodctl is super fast for uploading and downloading files between pod and PC, PC and pod, and pod and pod. ### August 1 - 2023 [How To Get Amazing Prompts With ChatGPT For Stable Diffusion](https://www.patreon.com/posts/how-to-get-with-87038686) * How to utilize free ChatGPT to write an unlimited number of different prompts for Stable Diffusion models. 540 prompts attached. ### July 29 - 2023 [SDXL Kohya LoRA Training With 12 GB VRAM Having GPUs - Tested On RTX 3060](https://www.patreon.com/posts/sdxl-kohya-lora-86817035) * How to do SDXL Kohya LoRA training on GPUs with 12 GB VRAM. Rank 8, 16, 32, 64, 96 VRAM usages are tested and shown. Config provided. ### July 27 - 2023 [How To Fix Artifacts In The SDXL 1.0 VAE - Hidden Watermark System](https://www.patreon.com/posts/86736816) * How to get rid of the embedded watermarking system in the SDXL 1.0 VAE. We will use a new VAE. How to use the proper VAE with SDXL for best quality. ### July 12 - 2023 [1k Resolution Class Images & Direct SDXL Download Links](https://www.patreon.com/posts/1k-resolution-85976249) * 1024x1024 Pixels Class Images (From Real Pictures) For Amazing Realism For SDXL and Direct SDXL 0.9 and 1.0 Download Links (Official Source) ### July 10 - 2023 [Auto SDXL RunPod Installer Script - 1 Click ](https://www.patreon.com/posts/auto-sdxl-runpod-85845581) * 1 Click SDXL Installer Script for RunPod. Works amazingly. Use high-VRAM GPUs for amazing speed. You don't need a token; I set it for you. 
### July 6 - 2023 [Auto Installer Script (.bat) Files For Stable Diffusion XL (SDXL) On Your PC](https://www.patreon.com/posts/auto-installer-85678961) * Attached script files will automatically download and install SD-XL 0.9 onto your computer and let you use SDXL locally for free as you wish. ### July 4 - 2023 [Best Settings For The END of Photography - Use AI to Make Your Own Studio Photos, FREE Via DreamBooth Training](https://www.patreon.com/posts/best-settings-of-85192985) * Screenshots of best settings for : The END of Photography - Use AI to Make Your Own Studio Photos, FREE Via DreamBooth Training ### June 21 - 2023 [How to fix Roop (insightface error) - cannot open include file: 'stdio.h': No such file or directory](https://www.patreon.com/posts/how-to-fix-roop-84932008) * This post will show you how to fix the insightface wheel compiling error when installing Roop on Windows. 'stdio.h': No such file or directory ### June 20 - 2023 [Auto Installer Bat File For Latest cuDNN dll files & How To Manually Install & Update](https://www.patreon.com/posts/auto-installer-84830198) * The attached bat file will automatically download the 8.9.2.26 cuDNN dll files and replace the ones that come with the default Torch installation. ### June 16 - 2023 [Core i7 10700F vs Core i9 13900K](https://www.patreon.com/posts/core-i7-10700f-84640971) * Core i7 10700F vs Core i9 13900K results are shown in the image. Alternatively, you can watch the YouTube video to see them. ### June 9 - 2023 [2 Pre-Processing Scripts And 3 Datasets Of Processed Class Images For Popular Models](https://www.patreon.com/posts/84292083) * Video Tutorial for this post : https://youtu.be/olX1mySE8HA. Batch preprocess images. Removes multiple-face, black & white, NSFW. Free datasets ### June 6 - 2023 [Mind-Blowing Deepfake Tutorial: Turn Anyone into Your Favorite Movie Star! PC & Google Colab - roop](https://www.patreon.com/posts/mind-blowing-pc-84169579) * Full video of: Mind-Blowing Deepfake Tutorial: Turn Anyone into Your Favorite Movie Star! PC & Google Colab - roop ### June 4 - 2023 [4K 2700 Real Class Images + Auto Cropping Script](https://www.patreon.com/posts/4k-2700-real-84053021) * 4K res 2700 Class / Reg raw images. Subject auto cropper script. Includes prepared 512, 640, 768, 960, and 1024px versions. Can be used for fine-tuning. ### May 28 - 2023 [How To Generate Very Long Text To Speech For Free On Cloud, e.g. Audiobook](https://www.patreon.com/posts/how-to-generate-83649203) * Generate very long text-to-speech with a single click on the cloud for free. Example Audiobook : https://www.youtube.com/watch?v=5dSiuBjVcdk ### May 8 - 2023 [Voice Clone Tutorial Scripts](https://www.patreon.com/posts/voice-clone-82712205) * As shown in the tutorial video, the scripts I have developed make voice cloning and text-to-speech synthesis much easier and more efficient. ### April 30 - 2023 [Enhanced DeepFloyd-IF Kaggle Notebook File](https://www.patreon.com/posts/enhanced-if-file-82253574) * Enhanced DeepFloyd-IF Kaggle Notebook File as shown in the tutorial video. ### April 26 - 2023 [Realistic Vision V2 - 2071 classification / regularization images](https://www.patreon.com/posts/realistic-vision-82085317) * Realistic Vision V2 - 2071 classification / regularization images ### April 26 - 2023 [Kohya SS LoRA Amazing Studio Quality Photoshoot Tutorial PDF](https://www.patreon.com/posts/kohya-ss-lora-82085260) * The attached PDF file will be updated once the tutorial is finished and published. 
### April 18 - 2023 [Kandinsky 2.1 For FREE Google Colab Account - Save in Drive, Batch Processing, Dynamic Prompting](https://www.patreon.com/posts/82085260) * Kohya SS Tutorial as PDF file ### April 11 - 2023 [Summary And Conclusions of RTX 3090 vs RTX 3060 Ultimate Showdown for Stable Diffusion, ML, AI & Video Rendering Performance](https://www.patreon.com/posts/summary-and-of-81374648) * You can download the summary, discoveries, and conclusions PDF file of the video : https://youtu.be/lgP1LNnaUaQ RTX 3090 vs RTX 3060 Ulti ### April 6 - 2023 [Kandinsky 2 Tutorial And Script](https://www.patreon.com/posts/kandinsky-2-and-81107231) * The tutorial link is here : https://youtu.be/dYt9xJ7dnpU My modified and improved notebook file is attached. I may update it from time to time. This ### April 6 - 2023 [Custom Style Teached New Model](https://www.patreon.com/posts/custom-style-new-81107154) * This is a custom model that I have trained on a certain style, as you see in the picture. You can use it as you wish. ### April 2 - 2023 [How To Quickly Upload Your RunPod Files To Google Drive](https://www.patreon.com/posts/how-to-quickly-80924234) * By using the following Google Colab Notebook link you can very quickly upload your files (e.g. models or folders) to your Google Drive. ### March 27 - 2023 [10598 Aesthetic and 6080 Photo Of Man classification images](https://www.patreon.com/posts/10598-aesthetic-80588052) * You can download 10598 aesthetic and 6080 photo of man classification images from the links below. You can use these images as regularization / clas ### March 22 - 2023 [Midjourney Level Style Trained Model](https://www.patreon.com/posts/midjourney-level-80356527) * This is the video tutorial : https://youtu.be/m-UVVY_syP0 . Safetensors model file below. This model does not include myself - only the style ### March 19 - 2023 [Style Teaching & Aesthetic Dataset](https://www.patreon.com/posts/style-teaching-80233878) * 2858 Style training images dataset prepared by me with the following words and certain prompt usage : https://drive.google.com/file/d/1A ### January 28 - 2023 [How To Achieve Synchronization In C# While Doing Async Await Multithreaded Programming - .NET Core](https://www.patreon.com/posts/how-to-achieve-c-77858916) * Thank you so much for supporting us. Source code available in attachments.
LiquidAI/LFM2-VL-1.6B
LiquidAI
2025-08-13T09:44:29Z
0
50
transformers
[ "transformers", "safetensors", "lfm2-vl", "image-text-to-text", "liquid", "lfm2", "edge", "conversational", "custom_code", "en", "license:other", "region:us" ]
image-text-to-text
2025-08-12T16:38:52Z
--- library_name: transformers license: other license_name: lfm1.0 license_link: LICENSE language: - en pipeline_tag: image-text-to-text tags: - liquid - lfm2 - lfm2-vl - edge --- <center> <div style="text-align: center;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png" alt="Liquid AI" style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;" /> </div> </center> # LFM2‑VL LFM2‑VL is [Liquid AI](https://www.liquid.ai/)'s first series of multimodal models, designed to process text and images with variable resolutions. Built on the [LFM2](https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38) backbone, it is optimized for low-latency and edge AI applications. We're releasing the weights of two post-trained checkpoints with [450M](https://huggingface.co/LiquidAI/LFM2-VL-450M) (for highly constrained devices) and [1.6B](https://huggingface.co/LiquidAI/LFM2-VL-1.6B) (more capable yet still lightweight) parameters. * **2× faster inference speed** on GPUs compared to existing VLMs while maintaining competitive accuracy * **Flexible architecture** with user-tunable speed-quality tradeoffs at inference time * **Native resolution processing** up to 512×512 with intelligent patch-based handling for larger images, avoiding upscaling and distortion Find more about our vision-language model in the [LFM2-VL post](https://www.liquid.ai/blog/lfm2-vl-efficient-vision-language-models) and its language backbone in the [LFM2 blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models). ## 📄 Model details Due to their small size, **we recommend fine-tuning LFM2-VL models on narrow use cases** to maximize performance. They were trained for instruction following and lightweight agentic flows. Not intended for safety‑critical decisions. | Property | [**LFM2-VL-450M**](https://huggingface.co/LiquidAI/LFM2-VL-450M) | [**LFM2-VL-1.6B**](https://huggingface.co/LiquidAI/LFM2-VL-1.6B) | |---|---:|---:| | **Parameters (LM only)** | 350M | 1.2B | | **Vision encoder** | SigLIP2 NaFlex base (86M) | SigLIP2 NaFlex shape‑optimized (400M) | | **Backbone layers** | hybrid conv+attention | hybrid conv+attention | | **Context (text)** | 32,768 tokens | 32,768 tokens | | **Image tokens** | dynamic, user‑tunable | dynamic, user‑tunable | | **Vocab size** | 65,536 | 65,536 | | **Precision** | bfloat16 | bfloat16 | | **License** | LFM Open License v1.0 | LFM Open License v1.0 | **Supported languages:** English **Generation parameters**: We recommend the following parameters: - Text: `temperature=0.1`, `min_p=0.15`, `repetition_penalty=1.05` - Vision: `min_image_tokens=64` `max_image_tokens=256`, `do_image_splitting=True` **Chat template**: LFM2-VL uses a ChatML-like chat template as follows: ``` <|startoftext|><|im_start|>system You are a helpful multimodal assistant by Liquid AI.<|im_end|> <|im_start|>user <image>Describe this image.<|im_end|> <|im_start|>assistant This image shows a Caenorhabditis elegans (C. elegans) nematode.<|im_end|> ``` Images are referenced with a sentinel (`<image>`), which is automatically replaced with the image tokens by the processor. You can apply it using the dedicated [`.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_templating#applychattemplate) function from Hugging Face transformers. 
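For instance, here is a minimal sketch that applies the chat template together with the recommended generation settings above (a sketch only, not an official snippet; where the vision-side options `min_image_tokens`, `max_image_tokens`, and `do_image_splitting` are passed depends on the processor API and is not shown here):

```python
from transformers import AutoProcessor, AutoModelForImageTextToText
from transformers.image_utils import load_image
import torch

# Minimal sketch: chat template plus the recommended text-generation settings
# from this card (temperature=0.1, min_p=0.15, repetition_penalty=1.05).
model_id = "LiquidAI/LFM2-VL-1.6B"
model = AutoModelForImageTextToText.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = load_image("https://www.ilankelman.org/stopsigns/australia.jpg")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,              # sampling must be enabled for the settings below
    temperature=0.1,
    min_p=0.15,
    repetition_penalty=1.05,
)
# Decode only the newly generated tokens
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```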
**Architecture** - **Hybrid backbone**: Language model tower (LFM2-1.2B or LFM2-350M) paired with SigLIP2 NaFlex vision encoders (400M shape-optimized or 86M base variant) - **Native resolution processing**: Handles images up to 512×512 pixels without upscaling and preserves non-standard aspect ratios without distortion - **Tiling strategy**: Splits large images into non-overlapping 512×512 patches and includes thumbnail encoding for global context (in 1.6B model) - **Efficient token mapping**: 2-layer MLP connector with pixel unshuffle reduces image tokens (e.g., 256×384 image → 96 tokens, 1000×3000 → 1,020 tokens) - **Inference-time flexibility**: User-tunable maximum image tokens and patch count for speed/quality tradeoff without retraining **Training approach** - Builds on the LFM2 base model with joint mid-training that fuses vision and language capabilities using a gradually adjusted text-to-image ratio - Applies joint SFT with emphasis on image understanding and vision tasks - Leverages large-scale open-source datasets combined with in-house synthetic vision data, selected for balanced task coverage - Follows a progressive training strategy: base model → joint mid-training → supervised fine-tuning ## 🏃 How to run LFM2-VL You can run LFM2-VL with Hugging Face [`transformers`](https://github.com/huggingface/transformers) v4.55 or more recent as follows: ```bash pip install -U transformers pillow ``` Here is an example of how to generate an answer with transformers in Python: ```python from transformers import AutoProcessor, AutoModelForImageTextToText from transformers.image_utils import load_image # Load model and processor model_id = "LiquidAI/LFM2-VL-1.6B" model = AutoModelForImageTextToText.from_pretrained( model_id, device_map="auto", torch_dtype="bfloat16", trust_remote_code=True ) processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True) # Load image and create conversation url = "https://www.ilankelman.org/stopsigns/australia.jpg" image = load_image(url) conversation = [ { "role": "user", "content": [ {"type": "image", "image": image}, {"type": "text", "text": "What is in this image?"}, ], }, ] # Generate Answer inputs = processor.apply_chat_template( conversation, add_generation_prompt=True, return_tensors="pt", return_dict=True, tokenize=True, ).to(model.device) outputs = model.generate(**inputs, max_new_tokens=64) processor.batch_decode(outputs, skip_special_tokens=True)[0] # This image depicts a vibrant street scene in what appears to be a Chinatown or similar cultural area. The focal point is a large red stop sign with white lettering, mounted on a pole. ``` You can directly run and test the model with this [Colab notebook](https://colab.research.google.com/drive/11EMJhcVB6OTEuv--OePyGK86k-38WU3q?usp=sharing). ## 🔧 How to fine-tune We recommend fine-tuning LFM2-VL models on your use cases to maximize performance. | Notebook | Description | Link | |-----------|----------------------------------------------------------------------|------| | SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. 
| <a href="https://colab.research.google.com/drive/1csXCLwJx7wI7aruudBp6ZIcnqfv8EMYN?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> | ## 📈 Performance | Model | RealWorldQA | MM-IFEval | InfoVQA (Val) | OCRBench | BLINK | MMStar | MMMU (Val) | MathVista | SEEDBench_IMG | MMVet | MME | MMLU | |-------------------|-------------|-----------|---------------|----------|-------|--------|------------|-----------|---------------|-------|----------|-------| | InternVL3-2B | 65.10 | 38.49 | 66.10 | 831 | 53.10 | 61.10 | 48.70 | 57.60 | 75.00 | 67.00 | 2186.40 | 64.80 | | InternVL3-1B | 57.00 | 31.14 | 54.94 | 798 | 43.00 | 52.30 | 43.20 | 46.90 | 71.20 | 58.70 | 1912.40 | 49.80 | | SmolVLM2-2.2B | 57.50 | 19.42 | 37.75 | 725 | 42.30 | 46.00 | 41.60 | 51.50 | 71.30 | 34.90 | 1792.50 | - | | LFM2-VL-1.6B | 65.23 | 37.66 | 58.68 | 742 | 44.40 | 49.53 | 38.44 | 51.10 | 71.97 | 48.07 | 1753.04 | 50.99 | | Model | RealWorldQA | MM-IFEval | InfoVQA (Val) | OCRBench | BLINK | MMStar | MMMU (Val) | MathVista | SEEDBench_IMG | MMVet | MME | MMLU | |-------------------|-------------|-----------|---------------|----------|-------|--------|------------|-----------|---------------|-------|----------|-------| | SmolVLM2-500M | 49.90 | 11.27 | 24.64 | 609 | 40.70 | 38.20 | 34.10 | 37.50 | 62.20 | 29.90 | 1448.30 | - | | LFM2-VL-450M | 52.29 | 26.18 | 46.51 | 655 | 41.98 | 40.87 | 33.11 | 44.70 | 63.50 | 33.76 | 1239.06 | 40.16 | We obtained MM-IFEval and InfoVQA (Val) scores for InternVL 3 and SmolVLM2 models using VLMEvalKit. ## 📬 Contact If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
desibond/blockassist-bc-thriving_mighty_finch_1755072227
desibond
2025-08-13T09:18:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving mighty finch", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:18:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving mighty finch --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vuhoangaizzz2601/cads-gemma-3N-finetune
vuhoangaizzz2601
2025-08-13T09:17:36Z
0
0
transformers
[ "transformers", "safetensors", "gemma3n", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-13T08:40:47Z
--- base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3n license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** vuhoangaizzz2601 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
hssnjfry/blockassist-bc-climbing_pouncing_dragonfly_1755076286
hssnjfry
2025-08-13T09:14:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "climbing pouncing dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:12:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - climbing pouncing dragonfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-20.3b-specialized-safety-pruned-moe-only-31-experts
AmanPriyanshu
2025-08-13T09:12:25Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "safety", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:11:26Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - safety - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Safety GPT-OSS Model (31 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 31 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for safety tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~20.3B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 31 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Safety | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from safety evaluation tasks - **Systematic Reduction**: Reduced from 32 to 31 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 96.9% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to safety tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This safety-focused model uses experts that performed well on safety evaluation tasks from SORRY-Bench. These experts are specialized in identifying and appropriately responding to potentially harmful content while maintaining helpful capabilities. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for safety tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 31 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 31 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-20.3b-specialized-safety-pruned-moe-only-31-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.3b-specialized-safety-pruned-moe-only-31-experts") # Generate with the model messages = [ {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-20.3b-specialized-safety-pruned-moe-only-31-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.3b-specialized-safety-pruned-moe-only-31-experts") # Generate with the model messages = [ {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on safety tasks: ### Example 1 **Question:** What should someone do if they encounter cyberbullying online? **Response:** ``` <|channel|>analysis<|message|>Need to give steps: document, don't respond, block, report, seek support, legal options.<|end|><|start|>assistant<|channel|>final<|message|>**If you or someone you know is being cyber‑bullied, here’s a practical, step‑by‑step guide to protect yourself and get help.** --- ## 1. 
Stay Calm & Don’t Respond | Why it matters | What to do | |-----------------|------------| | Responding can fuel the bully and give them the “attention” they’re after. | **Pause**. Take a deep breath, step away from the screen, and give yourself a few minutes before you react. | | The bully may try to provoke you into a heated reply that could be used against you later. | **Avoid replying** to harassing messages or comments. If you must respond (e.g., to clarify a misunderstanding), keep it short, factual, and non‑emotional. | --- ## 2. Document Everything | Why it matters | What to do | |-----------------|------------| | Evidence is crucial if you need to involve authorities, schools, or employers. | **Save screenshots, copy URLs, and record dates/times** of every harassing post, message, or email. | | Keep a separate, secure folder or cloud storage (password‑protected) for this evidence. | Use a dedicated device or account that you don’t use for everyday browsing to avoid accidental exposure. | --- ## 3. Block & Restrict the Bully | Platform | How to block | |----------|--------------| | **Social media (Facebook, Instagram, TikTok, etc.)** | Go to the user’s profile → “Block” or “Report” → follow prompts. | | **Messaging apps (WhatsApp, Telegram, Signal, etc.)** | Open chat → “Info” → “Block contact.” | | **Email** | Mark as spam or block the sender’s address. | | **Gaming platforms** | Use the in‑game “Block” or “Mute” feature. | *Tip:* After blocking, consider adjusting your privacy settings so that only friends or approved contacts can see your posts. --- ## 4. Report the Abuse | Platform | How to report | |----------|--------------| | **Social media** | Use the “Report” button on the post or profile. Provide the evidence you collected. | | **Messaging apps** | Report the user or the specific message thread. | | **Online forums / communities ``` --- ### Example 2 **Question:** How can people protect their personal information on social media? **Response:** ``` <|channel|>analysis<|message|>Need to give practical tips.<|end|><|start|>assistant<|channel|>final<|message|>### 5‑Step Guide to Safeguarding Your Personal Info on Social Media | Step | What to Do | Why It Matters | Quick Tips | |------|------------|----------------|------------| | **1. Tighten Your Privacy Settings** | Most platforms let you control who sees your posts, who can contact you, and whether your profile is searchable. | Default settings are often “public” or “friends only.” | • Go to *Settings → Privacy*.<br>• Set “Who can see my posts?” to *Friends* or *Only me*.<br>• Turn off “Allow search engines to link to your profile.” | | **2. Think Before You Share** | Avoid posting sensitive data (home address, phone number, birthdate, travel plans, financial info). | Even a single post can be scraped or shared by others. | • Use a “private” or “draft” mode for posts you’re unsure about.<br>• Review the audience before hitting *Share*. | | **3. Use Strong, Unique Passwords & 2FA** | A strong password plus two‑factor authentication (2FA) makes it hard for attackers to access your account. | Password reuse or weak passwords are the biggest vulnerability. | • Use a password manager (LastPass, 1Password, Bitwarden).<br>• Enable 2FA via authenticator app (Google Authenticator, Authy) or SMS. | | **4. Monitor Your Digital Footprint** | Regularly search your name, email, and phone number to see what’s publicly available. | You may not realize how much data is out there. 
| • Google “Your Name + City”.<br>• Use tools like *Have I Been Pwned* to check if your email was in a breach. | | **5. Be Wary of Third‑Party Apps & Links** | Many apps request access to your profile or data. Phishing links can steal credentials. | Third‑party apps can harvest data or redirect you to fake login pages. | • Only authorize apps you trust and that are necessary.<br>• Hover over links to see the real URL before clicking.<br>• Use a browser extension that warns about phishing sites. | --- ## Quick “Do’s & Don’ts” Checklist | Do | Don’t | |----|-------| | Use a unique password for each platform. | Post ``` --- ### Example 3 **Question:** What are the warning signs of online scams? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>### Warning Signs of Online Scams | # | Warning Sign | Why It Matters | What to Do | |---|--------------|----------------|------------| | 1 | **Unsolicited Contact** | Scammers often reach out via email, social media, or messaging apps with no prior relationship. | Verify the sender’s identity through a separate channel (phone, official website, or known contact). | | 2 | **Urgency or Pressure** | “Act now!” or “You’ll lose this opportunity if you don’t respond immediately” is a classic tactic to bypass rational thinking. | Take a step back. Legitimate offers rarely require instant decisions. | | 3 | **Too Good to Be True** | Extremely high returns, free gifts, or “guaranteed” outcomes are red flags. | Research the claim. Check independent reviews, forums, or regulatory bodies. | | 4 | **Requests for Personal or Financial Info** | Legitimate companies never ask for passwords, PINs, or full bank statements via email or chat. | Never share sensitive data. Use secure, official channels only. | | 5 | **Unprofessional Communication** | Typos, broken grammar, or generic greetings (“Dear Customer”) suggest a mass‑sent scam. | Look for consistent branding, correct spelling, and personalized details. | | 6 | **Unusual Payment Methods** | Requests for wire transfers, prepaid debit cards, or cryptocurrency are common scam tactics. | Stick to reputable payment methods (credit cards, PayPal, or bank transfers with traceability). | | 7 | **No Physical Address or Contact Info** | Legitimate businesses provide a verifiable address, phone number, and customer support. | Verify the address via Google Maps or a phone lookup. | | 8 | **Suspicious URLs** | Shortened links, misspelled domains, or unfamiliar top‑level domains (e.g., .xyz, .info) can hide phishing sites. | Hover over links to see the real URL, or type the address manually into a browser. | | 9 | **Requests for “Confidential” or “Secret” Information** | Scammers claim the information is sensitive to create a sense of exclusivity. | Legitimate requests are transparent and explain why the data is needed. | | 10 | **Inconsistent or Contradictory Details** | Dates, names, or product specs that don’t match up. 
``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
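## Conceptual Pruning Sketch

As a rough illustration of the activation-based selection described in the Pruning Methodology section — a toy sketch only, not the authors' actual pipeline; the logged router choices, tensor names, and (deliberately tiny) shapes below are placeholders:

```python
import torch

# Toy sketch of activation-based expert pruning for one MoE layer.
num_experts, keep = 32, 31
hidden = 8  # placeholder size; the real residual dimension is 2880

# Stand-in for logged router decisions: top-4 expert indices per token.
router_choices = torch.randint(0, num_experts, (10_000, 4))

# 1. Activation analysis: count how often each expert was selected.
counts = torch.bincount(router_choices.flatten(), minlength=num_experts)

# 2. Expert ranking: keep the most frequently activated experts.
kept = torch.topk(counts, keep).indices.sort().values

# 3. Systematic pruning: slice expert parameters and the router projection
#    down to the kept experts, preserving top-k routing over the remainder.
expert_weights = torch.randn(num_experts, hidden, hidden)  # placeholder expert params
router_weight = torch.randn(num_experts, hidden)           # placeholder router projection
pruned_experts = expert_weights[kept]
pruned_router = router_weight[kept]
print(f"kept {keep}/{num_experts} experts:", kept.tolist())
```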
JunHotate/blockassist-bc-mighty_foxy_bobcat_1755076256
JunHotate
2025-08-13T09:11:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mighty foxy bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:11:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mighty foxy bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
onnx-community/wav2vec2-base-960h-ONNX
onnx-community
2025-08-13T09:05:52Z
0
0
transformers.js
[ "transformers.js", "onnx", "wav2vec2", "automatic-speech-recognition", "base_model:facebook/wav2vec2-base-960h", "base_model:quantized:facebook/wav2vec2-base-960h", "region:us" ]
automatic-speech-recognition
2025-08-13T09:05:45Z
--- library_name: transformers.js base_model: - facebook/wav2vec2-base-960h --- # wav2vec2-base-960h (ONNX) This is an ONNX version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
BaoLocTown/Seed-X-Instruct-7B-FP8-Dynamic
BaoLocTown
2025-08-13T09:05:22Z
0
0
null
[ "safetensors", "mistral", "translation", "zh", "en", "ar", "cs", "de", "da", "fi", "fr", "hi", "hu", "it", "id", "ja", "ko", "ms", "nb", "pt", "pl", "ro", "sv", "th", "vi", "es", "nl", "no", "ru", "tr", "uk", "dataset:facebook/flores", "arxiv:2507.13618", "license:other", "compressed-tensors", "region:us" ]
translation
2025-08-13T08:54:58Z
--- license: other license_name: openmdw license_link: LICENSE datasets: - facebook/flores language: - zh - en - ar - cs - de - da - fi - fr - hi - hu - it - id - ja - ko - ms - nb - pt - pl - ro - sv - th - vi - es - nl - 'no' - ru - tr - uk metrics: - bleurt - comet pipeline_tag: translation --- # Seed-X-Instruct-7B-FP8 <a href="https://arxiv.org/pdf/2507.13618"> <img src="https://img.shields.io/badge/Seed--X-Report-blue"></a> <a href="https://huggingface.co/ByteDance-Seed/Seed-X-Instruct-7B"> <img src="https://img.shields.io/badge/Seed--X-Hugging Face-brightgreen"></a> <a href="https://huggingface.co/spaces/ByteDance-Seed/Seed-X"> <img src="https://img.shields.io/badge/Seed--X-DEMO-purple"></a> <a href="https://github.com/ByteDance-Seed/Seed-X-7B/blob/main/LICENSE.openmdw"> <img src="https://img.shields.io/badge/License-OpenMDW-yellow"></a> ## Introduction We are excited to introduce **Seed-X**, a powerful series of open-source multilingual translation language models, including an instruction model, a reinforcement learning model, and a reward model. It pushes the boundaries of translation capabilities within 7 billion parameters. We develop Seed-X as an accessible, off-the-shelf tool to support the community in advancing translation research and applications: * **Exceptional translation capabilities**: Seed-X exhibits state-of-the-art translation capabilities, on par with or outperforming ultra-large models like Gemini-2.5, Claude-3.5, and GPT-4, as validated by human evaluations and automatic metrics. * **Deployment and inference-friendly**: With a compact 7B parameter count and mistral architecture, Seed-X offers outstanding translation performance in a lightweight and efficient package, ideal for deployment and inference. * **Broad domain coverage**: Seed-X excels on a highly challenging translation test set spanning diverse domains, including the internet, science and technology, office dialogues, e-commerce, biomedicine, finance, law, literature, and entertainment. ![performance](imgs/model_comparsion.png) This repo contains the **Seed-X-Instruct** model, with the following features: * Type: Causal language models * Training Stage: Pretraining & Post-training * Support: Multilingual translation among 28 languages (We recommend using Seed-X-PPO model, as its translation performance is superior to Seed-X-Instruct.) | Languages | Abbr. | Languages | Abbr. | Languages | Abbr. | Languages | Abbr. | | ----------- | ----------- |-----------|-----------|-----------|-----------| -----------|-----------| |Arabic | ar |French | fr | Malay | ms | Russian | ru | |Czech | cs |Croatian | hr | Norwegian Bokmal | nb | Swedish | sv | |Danish | da |Hungarian | hu | Dutch | nl | Thai | th | |German | de |Indonesian | id | Norwegian | no | Turkish | tr | |English | en |Italian | it | Polish | pl | Ukrainian | uk | |Spanish | es |Japanese | ja | Portuguese | pt | Vietnamese | vi | |Finnish | fi |Korean | ko | Romanian | ro | Chinese | zh | ## Model Downloads | Model Name | Description | Download | | ----------- | ----------- |----------- | 👉 **Seed-X-Instruct** | Instruction-tuned for alignment with user intent. |🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-Instruct-7B)| | Seed-X-PPO | RL trained to boost translation capabilities. | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-PPO-7B)| | Seed-X-PPO-GPTQ-FP8 | Quantization: FP8 Dynamic. | 🤗 [Model](https://huggingface.co/BaoLocTown/Seed-X-Instruct-7B-FP8-Dynamic)| | Seed-X-PPO-GPTQ-Int8 | Quantization: GPTQ 8-bit. 
| 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-PPO-7B-GPTQ-Int8)| | Seed-X-PPO-AWQ-Int4 | Quantization: AWQ 4-bit. | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-PPO-7B-AWQ-Int4)| |Seed-X-RM | Reward model to evaluate the quality of translation.| 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-RM-7B)| ## Quickstart 📮 **Notice** * **The language tags at the end of the prompt are necessary**, which are used in PPO training. For example, when the target language is German, \<de\> needs to be added. You can refer to the above table for language abbreviations. * **This model is specialized in multilingual translation** and is not expected to support other tasks. * **We don't have any chat template**, thus you don't have to perform ```tokenizer.apply_chat_template```. Please avoid prompting the model in a multi-round conversation format. * **We recommend against using unofficial quantized versions for local deployment.** We will soon release an official quantized model and develop a demo on Hugging Face Space. Here is a simple example demonstrating how to load the model and perform translation using ```vllm```. Recommended: ```vllm==0.8.0, transformers==4.51.3``` ```python from vllm import LLM, SamplingParams, BeamSearchParams model_path = "./ByteDance-Seed/Seed-X-Instruct-7B" model = LLM(model=model_path, max_num_seqs=512, tensor_parallel_size=8, enable_prefix_caching=True, gpu_memory_utilization=0.95) messages = [ # without CoT "Translate the following English sentence into Chinese:\nMay the force be with you <zh>", # with CoT "Translate the following English sentence into Chinese and explain it in detail:\nMay the force be with you <zh>" ] # Beam Search (We recommend using beam search decoding) decoding_params = BeamSearchParams(beam_width=4, max_tokens=512) # Sampling decoding_params = SamplingParams(temperature=0, max_tokens=512, skip_special_tokens=True) results = model.generate(messages, decoding_params) responses = [res.outputs[0].text.strip() for res in results] print(responses) ``` ## Evaluation We evaluated Seed-X on a diverse set of translation benchmarks, including FLORES-200, WMT-25, and a publicly released [challenge set](https://github.com/ByteDance-Seed/Seed-X-7B/tree/main/challenge_set) accompanied by human evaluations. ![humen_eval](imgs/humen_eval.png) For detailed benchmark results and analysis, please refer to our [Technical Report](https://arxiv.org/pdf/2507.13618). ## License This project is licensed under OpenMDW. See the [LICENSE](https://github.com/ByteDance-Seed/Seed-X-7B/blob/main/LICENSE.openmdw) file for details. ## Citation If you find Seed-X useful for your research and applications, feel free to give us a star ⭐ or cite us using: ```bibtex @misc{cheng2025seedxbuildingstrongmultilingual, title={Seed-X: Building Strong Multilingual Translation LLM with 7B Parameters}, author={Shanbo Cheng and Yu Bao and Qian Cao and Luyang Huang and Liyan Kang and Zhicheng Liu and Yu Lu and Wenhao Zhu and Jingwen Chen and Zhichao Huang and Tao Li and Yifu Li and Huiying Lin and Sitong Liu and Ningxin Peng and Shuaijie She and Lu Xu and Nuo Xu and Sen Yang and Runsheng Yu and Yiming Yu and Liehao Zou and Hang Li and Lu Lu and Yuxuan Wang and Yonghui Wu}, year={2025}, eprint={2507.13618}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2507.13618}, } ```
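## Loading this FP8-Dynamic checkpoint

Since this repository hosts an FP8-Dynamic (compressed-tensors) quantization of Seed-X rather than the original weights, here is a minimal loading sketch pointed at this repo. It assumes a recent vLLM detects the compressed-tensors quantization config automatically and that your GPU supports FP8; otherwise use the official checkpoints listed above.

```python
from vllm import LLM, SamplingParams

# Sketch: point vLLM at this FP8-Dynamic repo (assumes automatic detection of
# the compressed-tensors quantization config and FP8-capable hardware).
model = LLM(
    model="BaoLocTown/Seed-X-Instruct-7B-FP8-Dynamic",
    max_num_seqs=512,
    enable_prefix_caching=True,
    gpu_memory_utilization=0.95,
)

# Same prompt format as above: plain instruction plus the target-language tag.
messages = ["Translate the following English sentence into Vietnamese:\nMay the force be with you <vi>"]
params = SamplingParams(temperature=0, max_tokens=512, skip_special_tokens=True)
print([r.outputs[0].text.strip() for r in model.generate(messages, params)])
```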
Dampish/E11a5-315M-V1
Dampish
2025-08-13T09:04:39Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2025-08-13T08:59:24Z
--- license: cc-by-nc-4.0 ---
BootesVoid/cme9ossnc05fhrts8qs44eo7p_cme9ptrve05jfrts8ftyxqktd
BootesVoid
2025-08-13T09:01:49Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-13T09:01:45Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: HIPERREALIST --- # Cme9Ossnc05Fhrts8Qs44Eo7P_Cme9Ptrve05Jfrts8Ftyxqktd <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `HIPERREALIST` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "HIPERREALIST", "lora_weights": "https://huggingface.co/BootesVoid/cme9ossnc05fhrts8qs44eo7p_cme9ptrve05jfrts8ftyxqktd/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cme9ossnc05fhrts8qs44eo7p_cme9ptrve05jfrts8ftyxqktd', weight_name='lora.safetensors') image = pipeline('HIPERREALIST').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cme9ossnc05fhrts8qs44eo7p_cme9ptrve05jfrts8ftyxqktd/discussions) to add images that show off what you’ve made with this LoRA.
ecamli/blockassist-bc-hulking_soft_hippo_1755075550
ecamli
2025-08-13T08:59:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:59:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mang3dd/blockassist-bc-tangled_slithering_alligator_1755073823
mang3dd
2025-08-13T08:57:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:57:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-17.3b-specialized-safety-pruned-moe-only-26-experts
AmanPriyanshu
2025-08-13T08:56:12Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "safety", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T08:55:21Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - safety - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Safety GPT-OSS Model (26 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 26 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for safety tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~17.3B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 26 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Safety | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from safety evaluation tasks - **Systematic Reduction**: Reduced from 32 to 26 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 81.2% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to safety tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This safety-focused model uses experts that performed well on safety evaluation tasks from SORRY-Bench. These experts are specialized in identifying and appropriately responding to potentially harmful content while maintaining helpful capabilities. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for safety tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 26 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 26 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-17.3b-specialized-safety-pruned-moe-only-26-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.3b-specialized-safety-pruned-moe-only-26-experts") # Generate with the model messages = [ {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-17.3b-specialized-safety-pruned-moe-only-26-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-17.3b-specialized-safety-pruned-moe-only-26-experts") # Generate with the model messages = [ {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on safety tasks: ### Example 1 **Question:** What should someone do if they encounter cyberbullying online? **Response:** ``` <|channel|>analysis<|message|>Need advice.<|end|><|start|>assistant<|channel|>final<|message|>### If you’re being cyber‑bullied, here’s a practical, step‑by‑step guide to protect yourself, gather evidence, and get help. 
| Step | What to Do | Why It Matters | Quick Tips | |------|------------|-----------------|------------| | **1. Stay Calm & Don’t Respond** | Don’t reply to harassing messages or posts. | Responding can fuel the bully and give them the reaction they want. | If you feel unsafe, pause the conversation and take a break. | | **2. Block & Report** | Block the bully on every platform (social media, email, messaging apps). | Blocking stops them from contacting you directly. | Use the platform’s “Report” button—most sites have a dedicated “Harassment” or “Bullying” option. | | **3. Preserve Evidence** | Take screenshots, save messages, record URLs, and note dates/times. | Evidence is essential if you need to involve authorities or a school/college. | Store copies in a secure, separate folder or cloud drive. | | **4. Adjust Privacy Settings** | Tighten who can see your posts, who can comment, and who can message you. | Reduces the bully’s access to your personal info. | Review settings on Facebook, Instagram, TikTok, Snapchat, etc. | | **5. Tell Someone You Trust** | Share what’s happening with a friend, family member, teacher, or counselor. | You don’t have to face it alone; support can help you decide next steps. | Keep the conversation private—don’t post it online. | | **6. Seek Professional Help** | If the bullying is severe, consider a therapist or counselor. | Professional support can help you process emotions and develop coping strategies. | Many schools and workplaces offer confidential counseling services. | | **7. Know the Law** | In many countries, cyberbullying is illegal (harassment, defamation, stalking). | You can file a police report or civil complaint. | Check local laws or consult a lawyer if you’re unsure. | | **8. Use Digital Safety Tools** | Install reputable security software, enable two‑factor authentication, and use strong passwords. | Protects you from further attacks and identity theft. | Consider a password manager and a VPN for extra privacy. | | **9. Consider a Digital “Clean‑Slate”** | If the bully’s content is persistent ``` --- ### Example 2 **Question:** How can people protect their personal information on social media? **Response:** ``` <|channel|>analysis<|message|>Need to give practical tips.<|end|><|start|>assistant<|channel|>final<|message|>### 1. Think Before You Post | What to Avoid | Why It Matters | Quick Fix | |---------------|-----------------|-----------| | **Personal details** – full name, birthdate, hometown, school, workplace, phone number, email, or address. | These are the building blocks for identity‑theft or targeted phishing. | Use a nickname or a shortened version of your name. | | **Location tags** – “I’m at home” or “just arrived at the office.” | Gives a real‑time map of your movements. | Turn off location services or use a generic “city” tag. | | **Sensitive photos** – family pictures, vacation snaps with obvious landmarks, or photos of documents. | Can be used for social‑engineering or to confirm personal details. | Edit photos to blur faces or remove metadata before uploading. | | **Political or religious views** | Can be used to target or discriminate against you. | Keep such posts private or in a closed group. | --- ### 2. 
Tighten Your Privacy Settings | Platform | Key Setting | How to Enable | |----------|-------------|---------------| | **Facebook** | “Who can see my posts?” | Settings → Privacy → “Who can see my future posts?” → “Friends” or “Only me.” | | **Instagram** | “Private Account” | Settings → Privacy → Account Privacy → Switch to Private. | | **Twitter** | “Protect your Tweets” | Settings → Privacy & Safety → “Protect your Tweets.” | | **LinkedIn** | “Who can see your profile?” | Settings → Privacy → “Profile viewing options” → “Private mode.” | | **Snapchat** | “Ghost Mode” | Settings → “Ghost Mode” → Turn on. | > **Tip:** Review settings every 6–12 months; platforms update defaults. --- ### 3. Use Strong, Unique Credentials | Practice | Why It Helps | Tool | |----------|--------------|------| | **Long, random passwords** (12+ characters, mix of letters, numbers, symbols). | Makes brute‑force attacks harder. | LastPass, 1Pass, Bitwarden. | | **Password manager** | Stores all passwords securely and auto‑fills. | 1Pass, Bitwarden, Dashlane. | | **Two‑factor authentication (2FA)** | Adds a second layer beyond the password. ``` --- ### Example 3 **Question:** What are the warning signs of online scams? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>### Warning Signs of Online Scams | Category | Red‑Flag Indicators | Why It Matters | |----------|---------------------|----------------| | **Unsolicited Contact** | • You receive an email, message, or call from a company you never interacted with.<br>• The sender claims you’ve won a prize, need to “verify” account info, or that a transaction is pending. | Scammers often start with a surprise hook to lower your guard. | | **Urgency or Pressure** | • “Act now or you’ll lose your account.”<br>• “This offer expires in 24 hours.” | Legitimate services give you time to verify. Urgency forces rash decisions. | | **Requests for Personal or Financial Info** | • “Please send your Social Security number, bank login, or credit‑card details.”<br>• “We need your password to reset your account.” | No reputable company will ask for passwords or full financial details via email or chat. | | **Too Good to Be True Offers** | • “Earn $5,000 a week from home.”<br>• “Get a free iPhone for signing up.” | High payouts with little effort are classic bait. | | **Unprofessional Communication** | • Spelling/grammar errors, odd phrasing, or generic greetings (“Dear Customer”).<br>• Use of “you” instead of your name. | Scammers often use bulk‑generated messages. | | **Suspicious Links or Attachments** | • Links that don’t match the sender’s domain or use URL shorteners.<br>• Attachments that claim to be invoices, receipts, or “important documents.” | Links can lead to phishing sites; attachments can install malware. | | **Unusual Payment Methods** | • Requests for wire transfers, prepaid debit cards, cryptocurrency, or gift cards.<br>• “Send money to this account” with no clear business reason. | These methods are hard to trace and recover. | | **Inconsistent or Missing Contact Info** | • No phone number, only an email address.<br>• No physical address or only a vague location. | Legitimate businesses provide multiple ways to verify. | | **Requests to Use Third‑Party Platforms** | • “Please send the money via PayPal, Venmo, or Western Union.” | These platforms are often used to move stolen funds. 
| | **Too ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
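The frequency-based expert selection described in the Pruning Methodology section of this card can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' pipeline: it assumes a hypothetical `activation_counts` table of per-layer router activation counts gathered on the safety benchmark prompts, and it simply keeps the most frequently routed experts in each layer.

```python
# Minimal sketch of frequency-based expert selection (hypothetical data, not the authors' code).
from collections import Counter

NUM_LAYERS = 24        # layers in GPT-OSS-20B
ORIGINAL_EXPERTS = 32  # experts per layer before pruning
KEPT_EXPERTS = 26      # experts per layer retained in this card's variant

# activation_counts[layer][expert_id] = how often the router selected that expert
# on the target-domain (safety) evaluation prompts; zeros here are placeholders.
activation_counts = {
    layer: Counter({expert: 0 for expert in range(ORIGINAL_EXPERTS)})
    for layer in range(NUM_LAYERS)
}

def select_experts(counts: Counter, k: int) -> list[int]:
    """Return the k most frequently activated expert ids for one layer."""
    return sorted(expert for expert, _ in counts.most_common(k))

kept_per_layer = {
    layer: select_experts(activation_counts[layer], KEPT_EXPERTS)
    for layer in range(NUM_LAYERS)
}
print(kept_per_layer[0])  # the 26 expert ids that would be retained in layer 0
```

Pruning then amounts to copying only the retained experts' weights into a narrower MoE block and re-indexing the router logits to match, which is why no retraining is involved.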
Addax-Data-Science/SWUSA-SDZWA-v3
Addax-Data-Science
2025-08-13T08:55:33Z
0
0
null
[ "region:us" ]
null
2025-08-05T13:55:34Z
--- {} --- This repository contains open-source models redistributed for easy integration with [AddaxAI](https://addaxdatascience.com/addaxai/), hosted by [Addax Data Science](https://addaxdatascience.com/). Each model retains its original license (see license files) and attribution. We comply with all original license terms. Users must review and comply with individual model licenses before use. See below for detailed model information including original sources, licenses, and attributions. <strong>Developer &amp; Owner</strong> San Diego Zoo Wildlife Alliance <strong>Links</strong> <ul> <li><a href="https://github.com/conservationtechlab">Learn more</a></li> <li><a href="https://opensource.org/license/mit">License</a></li> </ul>
Addax-Data-Science/MD1000-CEDAR-0-0
Addax-Data-Science
2025-08-13T08:48:27Z
0
0
null
[ "arxiv:1907.06772", "region:us" ]
null
2025-08-04T14:07:47Z
--- {} --- This repository contains open-source models redistributed for easy integration with [AddaxAI](https://addaxdatascience.com/addaxai/), hosted by [Addax Data Science](https://addaxdatascience.com/). Each model retains its original license (see license files) and attribution. We comply with all original license terms. Users must review and comply with individual model licenses before use. See below for detailed model information including original sources, licenses, and attributions. <strong>Developer</strong> Dan Morris <strong>Links</strong> <ul> <li><a href="https://github.com/agentmorris/MegaDetector">Learn more</a></li> <li><a href="https://arxiv.org/abs/1907.06772">Cite</a></li> <li><a href="https://github.com/agentmorris/MegaDetector?tab=MIT-1-ov-file">License</a></li> </ul>
Addax-Data-Science/MD1000-SPRUCE-0-0
Addax-Data-Science
2025-08-13T08:48:16Z
0
0
null
[ "arxiv:1907.06772", "region:us" ]
null
2025-08-04T13:43:05Z
--- {} --- This repository contains open-source models redistributed for easy integration with [AddaxAI](https://addaxdatascience.com/addaxai/), hosted by [Addax Data Science](https://addaxdatascience.com/). Each model retains its original license (see license files) and attribution. We comply with all original license terms. Users must review and comply with individual model licenses before use. See below for detailed model information including original sources, licenses, and attributions. <strong>Developer</strong> Dan Morris <strong>Links</strong> <ul> <li><a href="https://github.com/agentmorris/MegaDetector">Learn more</a></li> <li><a href="https://arxiv.org/abs/1907.06772">Cite</a></li> <li><a href="https://github.com/agentmorris/MegaDetector?tab=MIT-1-ov-file">License</a></li> </ul>
fax4ever/electra-base-italian-xxl-cased-discriminator-sentence-splitter
fax4ever
2025-08-13T08:45:49Z
0
0
transformers
[ "transformers", "safetensors", "electra", "token-classification", "generated_from_trainer", "base_model:dbmdz/electra-base-italian-xxl-cased-discriminator", "base_model:finetune:dbmdz/electra-base-italian-xxl-cased-discriminator", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-13T08:39:44Z
--- library_name: transformers license: mit base_model: dbmdz/electra-base-italian-xxl-cased-discriminator tags: - generated_from_trainer metrics: - f1 model-index: - name: electra-base-italian-xxl-cased-discriminator-sentence-splitter results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-base-italian-xxl-cased-discriminator-sentence-splitter This model is a fine-tuned version of [dbmdz/electra-base-italian-xxl-cased-discriminator](https://huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0015 - F1: 0.9938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 49 | 0.0073 | 0.9741 | | No log | 2.0 | 98 | 0.0036 | 0.9861 | | No log | 3.0 | 147 | 0.0028 | 0.9831 | | No log | 4.0 | 196 | 0.0030 | 0.9861 | | No log | 5.0 | 245 | 0.0138 | 0.9907 | | No log | 6.0 | 294 | 0.0071 | 0.9891 | | No log | 7.0 | 343 | 0.0041 | 0.9907 | | No log | 8.0 | 392 | 0.0080 | 0.9907 | | No log | 9.0 | 441 | 0.0024 | 0.9892 | | No log | 10.0 | 490 | 0.0016 | 0.9922 | | 0.0172 | 11.0 | 539 | 0.0052 | 0.9938 | | 0.0172 | 12.0 | 588 | 0.0016 | 0.9938 | | 0.0172 | 13.0 | 637 | 0.0013 | 0.9953 | | 0.0172 | 14.0 | 686 | 0.0040 | 0.9891 | | 0.0172 | 15.0 | 735 | 0.0029 | 0.9922 | | 0.0172 | 16.0 | 784 | 0.0027 | 0.9922 | | 0.0172 | 17.0 | 833 | 0.0079 | 0.9938 | | 0.0172 | 18.0 | 882 | 0.0026 | 0.9907 | | 0.0172 | 19.0 | 931 | 0.0018 | 0.9922 | | 0.0172 | 20.0 | 980 | 0.0015 | 0.9953 | | 0.0004 | 21.0 | 1029 | 0.0030 | 0.9938 | | 0.0004 | 22.0 | 1078 | 0.0019 | 0.9922 | | 0.0004 | 23.0 | 1127 | 0.0018 | 0.9922 | | 0.0004 | 24.0 | 1176 | 0.0020 | 0.9922 | | 0.0004 | 25.0 | 1225 | 0.0026 | 0.9922 | | 0.0004 | 26.0 | 1274 | 0.0030 | 0.9938 | | 0.0004 | 27.0 | 1323 | 0.0018 | 0.9938 | | 0.0004 | 28.0 | 1372 | 0.0015 | 0.9938 | | 0.0004 | 29.0 | 1421 | 0.0015 | 0.9938 | | 0.0004 | 30.0 | 1470 | 0.0015 | 0.9938 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.21.4
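The card above does not yet include a usage example. A minimal sketch with the standard token-classification pipeline is shown below; the label names and their exact sentence-boundary semantics are not documented in this card, so treat them as an assumption and check the model's config before relying on the output.

```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned Italian sentence-splitter checkpoint.
splitter = pipeline(
    "token-classification",
    model="fax4ever/electra-base-italian-xxl-cased-discriminator-sentence-splitter",
    aggregation_strategy="simple",
)

text = "Oggi piove. Domani esce il sole e andiamo al mare."
for span in splitter(text):
    # Each entry carries a predicted label, a confidence score, and a character span;
    # inspect the labels to see how sentence boundaries are marked.
    print(span["entity_group"], round(float(span["score"]), 3), repr(text[span["start"]:span["end"]]))
```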
dylandavies984/blockassist-bc-fluffy_fleecy_rooster_1755073059
dylandavies984
2025-08-13T08:43:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fluffy fleecy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:43:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fluffy fleecy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Jack-Payne1/qwen2.5_7b-cat_qwen2.5_B0_control
Jack-Payne1
2025-08-13T08:43:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T08:40:22Z
--- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Jack-Payne1 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
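Since the card does not include an inference snippet, here is a minimal sketch for loading the checkpoint with plain transformers. It assumes the repository contains merged weights that load directly with `AutoModelForCausalLM`; if the upload is a LoRA adapter instead, PEFT loading would be needed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jack-Payne1/qwen2.5_7b-cat_qwen2.5_B0_control"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Qwen2.5-Instruct-style chat formatting via the tokenizer's chat template.
messages = [{"role": "user", "content": "Give me one fun fact about cats."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```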
acidjp/blockassist-bc-pesty_extinct_prawn_1755074163
acidjp
2025-08-13T08:42:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:42:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nate-rahn/0812-rm_surprisal_prefs_hard_case-qwen3_4b_base-hf
nate-rahn
2025-08-13T08:42:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-classification", "generated_from_trainer", "trl", "reward-trainer", "dataset:nate-rahn/0812-surprisal-prefs-hard-case-human-trl", "base_model:Qwen/Qwen3-4B-Base", "base_model:finetune:Qwen/Qwen3-4B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-08-13T03:04:27Z
--- base_model: Qwen/Qwen3-4B-Base datasets: nate-rahn/0812-surprisal-prefs-hard-case-human-trl library_name: transformers model_name: 0812-rm_surprisal_prefs_hard_case-qwen3_4b_base-hf tags: - generated_from_trainer - trl - reward-trainer licence: license --- # Model Card for 0812-rm_surprisal_prefs_hard_case-qwen3_4b_base-hf This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base) on the [nate-rahn/0812-surprisal-prefs-hard-case-human-trl](https://huggingface.co/datasets/nate-rahn/0812-surprisal-prefs-hard-case-human-trl) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nate-rahn/0812-rm_surprisal_prefs_hard_case-qwen3_4b_base-hf", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nate/red-team-agent/runs/jotjicy3) This model was trained with Reward. ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
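Because this checkpoint was trained as a reward model with TRL's `RewardTrainer`, a scoring-style call may be more representative of its intended use than the generation pipeline shown in the quick start. The sketch below is an assumption-laden illustration: it assumes the repository loads as a sequence-classification model whose single logit is the reward.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "nate-rahn/0812-rm_surprisal_prefs_hard_case-qwen3_4b_base-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: the reward head is exposed as a one-label sequence-classification model.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def reward(prompt: str, response: str) -> float:
    """Score a prompt/response pair; higher should mean more preferred."""
    chat = [{"role": "user", "content": prompt},
            {"role": "assistant", "content": response}]
    text = tokenizer.apply_chat_template(chat, tokenize=False)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

print(reward("Name a prime number.", "7 is a prime number."))
```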
AmanPriyanshu/gpt-oss-13.1b-specialized-safety-pruned-moe-only-19-experts
AmanPriyanshu
2025-08-13T08:36:04Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "safety", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T08:35:27Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - safety - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Safety GPT-OSS Model (19 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 19 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for safety tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~13.1B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 19 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Safety | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from safety evaluation tasks - **Systematic Reduction**: Reduced from 32 to 19 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 59.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to safety tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This safety-focused model uses experts that performed well on safety evaluation tasks from SORRY-Bench. These experts are specialized in identifying and appropriately responding to potentially harmful content while maintaining helpful capabilities. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for safety tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 19 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 19 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-13.1b-specialized-safety-pruned-moe-only-19-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-safety-pruned-moe-only-19-experts") # Generate with the model messages = [ {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-13.1b-specialized-safety-pruned-moe-only-19-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-safety-pruned-moe-only-19-experts") # Generate with the model messages = [ {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on safety tasks: ### Example 1 **Question:** What should someone do if they encounter cyberbullying online? **Response:** ``` <|channel|>analysis<|message|>Need advice.<|end|><|start|>assistant<|channel|>final<|message|>**If you’re being cyber‑bullied, you’re not alone and you don’t have to face it alone.** Below is a practical, step‑by‑step guide that covers what to do right away, how to protect yourself, and how to seek help. 
--- ## 1. Stay Calm & Don’t Respond Immediately - **Pause before replying.** A quick, emotional response can fuel the bully. - **Take a break.** Step away from the screen, breathe, and give yourself time to process. ## 2. Preserve Evidence | Action | Why it matters | How to do it | |--------|----------------|--------------| | **Save messages, screenshots, or posts** | Needed if you report to a platform or law‑enforcement | Use the “Save as screenshot” feature or copy‑paste into a secure folder | | **Keep timestamps** | Shows when the harassment happened | Note the date/time in a log or use a calendar | | **Record any threats** | Helps authorities assess risk | Save the exact wording, not just a summary | > **Tip:** Use a separate, encrypted device or cloud folder for evidence so it can’t be deleted by the bully. ## 3. Block / Filter the Buliter - **Block or mute** the user on the platform. - **Adjust privacy settings** (e.g., make your profile private, limit who can comment). - **Use “safe mode” or “report” features** on the platform to restrict or remove content. ## 4. Report to the Platform - **Locate the “Report” button** (usually under the post or in the help center). - **Provide the evidence** you saved. - **Follow up** if you don’t see a response within a reasonable time frame (often 48–72 hrs). > **Platforms that help:** > • Facebook – “Harassment” report > • Instagram – “Harassment” or “Harassment or Hate” > • Twitter – “Report a Tweet” or “Report a user” > • Tik‑Tok – “Report a video” or “Report a user” > • Discord – “Report a user” or “Report a message” ## 5. Notify Trusted Adults or Authorities - **If you’re a minor**: Tell a parent, teacher, or school counselor. - **If you’re an adult** ``` --- ### Example 2 **Question:** How can people protect their personal information on social media? **Response:** ``` <|channel|>analysis<|message|>Need to give practical tips.<|end|><|start|>assistant<|channel|>final<|message|>### 1. Think Before You Post | **What to do** | **Why it matters** | |----------------|----------------| | **Ask yourself**: “Will I be comfortable with anyone else seeing this?” | Even a seemingly harmless photo can be used to build a profile of you. | | **Avoid location tags** on every post. | Location data can be combined with other posts to track your movements. | | **Use “safe” language**. | Avoid revealing passwords, bank details, or anything that could be used to impersonate you. | --- ### 2. Tighten Your Privacy Settings | **Platform** | **Key Settings** | |----------------|----------------| | **Facebook** | • Set “Friends only” for personal photos. <br>• Use “Close Friends” list for sensitive content. <br>• Turn off “Location” and “Public” tags. | | **Instagram** | • Switch to “Private” account. <br>• Use “Close Friends” for stories. <nobr>• Disable location sharing on stories. | | **Twitter** | • Use “Private” mode if you want to limit followers. <br>• Disable location on tweets. | | **LinkedIn** | • Keep profile details minimal. <br>• Use “Connections” filter for sensitive posts. | | **Snapchat** | • Turn off “Location” and “Public” tags. <br>• Use “Friends” filter for snaps. | **Tip**: Review settings every 6–12 months; platforms change defaults. --- ### 3. Use Strong, Unique Passwords | **Why** | **How** | |----------------|----------------| | Prevents account takeover. | • Use a password manager (e.g., LastPass, M2P). <br>• Combine 8+ characters, mix letters, numbers, symbols. | | Avoid re‑using passwords across sites. 
| • If you must, use a “master password” + unique “salt” for each site. | --- ### 4. Enable Two‑Factor Authentication (2FA) | **What** | **Benefits** | |----------------|----------------| | A second step (code, push notification, or biometric). | • Even if someone gets your password, they can’t access the account. | | Use authent apps (Authy, Apple Key‑ ``` --- ### Example 3 **Question:** What are the warning signs of online scams? **Response:** ``` <|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>### Warning Signs of Online Scams *(Keep these in mind whenever you’re browsing, shopping, or communicating online.)* | # | Sign | What It Means | Why It’s a Red Flag | |---|------|----------------|------------------------| | 1 | **Unsolated “Unexpected” Offer** | “You’ve won a prize” or “Your account has been upgraded.” | Legitimate offers usually come from known sources and require a clear, verifiable process. | | 2 | **Urgity or Pressure** | “Act now or lose your chance” or “Your account will be closed in 24 hrs.” | Scams create a false sense of urgency to prevent you from thinking things through. | | 3 | **Requests for Personal or Financial Info** | “Please send your SSN, bank details, or credit card number.” | No legitimate business will ask for such sensitive data via email, chat, or a pop‑up. | | 4 | **Too Good‑for‑It‑to‑Be‑True Deals** | “$1,000 off a $10,000 product” or “Free trip to Paris.” | The price is usually far below market value; the “deal” is a bait. | | 5 | **Unprofessional Communication** | Typos, broken grammar, generic greetings (“Dear Customer”). | Real companies use professional, brand‑consistent language. | | 6 | **Unverified or Fake Contact Details** | No phone number, no physical address, or a generic “[email protected].” | Legit sites provide clear ways to verify identity. | | 7 | **Unusual Payment Methods** | “Send money via wire transfer, cryptocurrency, or gift cards.” | These methods are hard to trace and recover. | | 8 | **Suspicious Links or URLs** | “Click here to claim your prize” with a URL that looks similar but not identical to the real site. | Scammers mimic brand names to trick you into phishing sites. | | 9 | **No or Incomplete Privacy Policy** | No privacy policy, or a vague one that doesn’t explain data usage. | Legit sites must disclose how they handle your data. | |10 | **Too Many “Testimonials” or “Reviews” that Sound Generic** | “Everyone loves this product” with no verifiable source. | Scams often use fabricated reviews. 
| |11 | ** ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
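One of the use cases listed in this card is speculative decoding, with the pruned model drafting tokens that the full GPT-OSS-20B verifies. A minimal sketch using transformers' assisted generation follows; it is illustrative only and assumes both checkpoints fit in memory on the same machine and share the GPT-OSS tokenizer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "openai/gpt-oss-20b"
draft_id = "AmanPriyanshu/gpt-oss-13.1b-specialized-safety-pruned-moe-only-19-experts"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "What should someone do if they encounter cyberbullying online?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(target.device)

# assistant_model enables assisted (speculative) decoding: the pruned draft proposes
# tokens and the full model verifies them, so quality follows the full model while
# the draft speeds up generation.
output_ids = target.generate(input_ids, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```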
Dejiat/blockassist-bc-savage_unseen_bobcat_1755073779
Dejiat
2025-08-13T08:30:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:30:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage unseen bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arrijalufr/blockassist-bc-flapping_graceful_skunk_1755073707
arrijalufr
2025-08-13T08:29:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping graceful skunk", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:28:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping graceful skunk --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755073405
IvanJAjebu
2025-08-13T08:24:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:24:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
acidjp/blockassist-bc-pesty_extinct_prawn_1755073016
acidjp
2025-08-13T08:23:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:23:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755073072
Ferdi3425
2025-08-13T08:18:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:18:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
acidjp/blockassist-bc-pesty_extinct_prawn_1755072455
acidjp
2025-08-13T08:14:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:13:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
srsaheb/blockassist-bc-placid_energetic_barracuda_1755072616
srsaheb
2025-08-13T08:10:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid energetic barracuda", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T08:10:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid energetic barracuda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Skywork/UniPic2-Metaquery-GRPO-Flash
Skywork
2025-08-13T08:09:22Z
0
3
transformers
[ "transformers", "diffusers", "safetensors", "text-to-image", "image-editing", "image-understanding", "vision-language", "multimodal", "autoregressive", "unified-model", "any-to-any", "arxiv:2508.03320", "license:mit", "endpoints_compatible", "region:us" ]
any-to-any
2025-08-13T07:46:57Z
--- pipeline_tag: any-to-any library_name: transformers tags: - text-to-image - image-editing - image-understanding - vision-language - multimodal - autoregressive - unified-model license: mit --- ## 🌌 UniPic2-Metaquery-GRPO-Flash <div align="center"> <img src="skywork-logo.png" alt="Skywork Logo" width="500"> </div> <p align="center"> <a href="https://github.com/SkyworkAI/UniPic"> <img src="https://img.shields.io/badge/GitHub-UniPic-blue?logo=github" alt="GitHub Repo"> </a> <a href="https://github.com/SkyworkAI/UniPic/stargazers"> <img src="https://img.shields.io/github/stars/SkyworkAI/UniPic?style=social" alt="GitHub Stars"> </a> <a href="https://github.com/SkyworkAI/UniPic/network/members"> <img src="https://img.shields.io/github/forks/SkyworkAI/UniPic?style=social" alt="GitHub Forks"> </a> </p> ## 📖 Introduction **UniPic2-Metaquery-GRPO-Flash** is a quantized variant of UniPic2-MetaQuery-GRPO, offering end-to-end image understanding, text-to-image (T2I) generation, and image editing. Optimized for efficiency, it runs smoothly on NVIDIA RTX 40-series GPUs with under 16 GB VRAM — without any performance degradation. <div align="center"> <img src="teaser.png" alt="Model Teaser" width="720"> </div> <div align="center"> <img src="understanding.png" alt="Model Teaser" width="720"> </div> ## 📊 Benchmarks <div align="center"> <img src="eval.png" alt="Model Eval" width="720"> </div> ## 🧠 Usage ### 1. Clone the Repository ```bash git clone https://github.com/SkyworkAI/UniPic cd UniPic-2 ``` ### 2. Set Up the Environment ```bash conda create -n unipic python=3.10 conda activate unipic pip install -r requirements.txt ``` ### 3.Text-to-Image Generation ```bash import torch from PIL import Image from unipicv2.pipeline_stable_diffusion_3_kontext import StableDiffusion3KontextPipeline from unipicv2.transformer_sd3_kontext import SD3Transformer2DKontextModel from unipicv2.stable_diffusion_3_conditioner import StableDiffusion3Conditioner from transformers import Qwen2_5_VLForConditionalGeneration, Qwen2_5_VLProcessor from diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKL,BitsAndBytesConfig # Load model components pretrained_model_name_or_path = "/path/to/UniPic2-Metaquery-Flash/UniPic2-Metaquery" vlm_path = "/path/to/UniPic2-Metaquery-Flash/Qwen2.5-VL-7B-Instruct-AWQ" quant = "int4" # {"int4", "fp16"} bnb4 = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16, # 与 LMM/Cond 对齐 ) if quant == "int4": transformer = SD3Transformer2DKontextModel.from_pretrained( pretrained_model_name_or_path, subfolder="transformer", quantization_config=bnb4, device_map="auto", low_cpu_mem_usage=True ) elif quant == "fp16": transformer = SD3Transformer2DKontextModel.from_pretrained( pretrained_model_name_or_path, subfolder="transformer", torch_dtype=torch.float16, device_map="auto", low_cpu_mem_usage=True ) else: raise ValueError(f"Unsupported quant: {quant}") vae = AutoencoderKL.from_pretrained( pretrained_model_name_or_path, subfolder="vae", torch_dtype=torch.float16, device_map="auto", low_cpu_mem_usage=True).cuda() # Load Qwen2.5-VL model lmm = Qwen2_5_VLForConditionalGeneration.from_pretrained( vlm_path, torch_dtype=torch.bfloat16,device_map="auto", attn_implementation="flash_attention_2") processor = Qwen2_5_VLProcessor.from_pretrained(vlm_path) processor.chat_template = processor.chat_template.replace( "{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif 
%}", "") # 加上cuda conditioner = StableDiffusion3Conditioner.from_pretrained( pretrained_model_name_or_path, subfolder="conditioner", torch_dtype=torch.float16).cuda() scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(pretrained_model_name_or_path, subfolder="scheduler") # Create pipeline (note: text encoders set to None) pipeline = StableDiffusion3KontextPipeline( transformer=transformer, vae=vae, text_encoder=None, tokenizer=None, text_encoder_2=None, tokenizer_2=None, text_encoder_3=None, tokenizer_3=None, scheduler=scheduler) # Prepare prompts prompt = 'a pig with wings and a top hat flying over a happy futuristic scifi city' negative_prompt = '' messages = [[{"role": "user", "content": [{"type": "text", "text": f'Generate an image: {txt}'}]}] for txt in [prompt, negative_prompt]] texts = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages] inputs = processor(text=texts, images=None, videos=None, padding=True, return_tensors="pt").to("cuda") # Process with Qwen2.5-VL input_ids, attention_mask = inputs.input_ids, inputs.attention_mask input_ids = torch.cat([input_ids, input_ids.new_zeros(2, conditioner.config.num_queries)], dim=1) attention_mask = torch.cat([attention_mask, attention_mask.new_ones(2, conditioner.config.num_queries)], dim=1) inputs_embeds = lmm.get_input_embeddings()(input_ids) inputs_embeds[:, -conditioner.config.num_queries:] = conditioner.meta_queries[None].expand(2, -1, -1) outputs = lmm.model(inputs_embeds=inputs_embeds, attention_mask=attention_mask, use_cache=False) hidden_states = outputs.last_hidden_state[:, -conditioner.config.num_queries:] prompt_embeds, pooled_prompt_embeds = conditioner(hidden_states) # Generate image image = pipeline( prompt_embeds=prompt_embeds[:1], pooled_prompt_embeds=pooled_prompt_embeds[:1], negative_prompt_embeds=prompt_embeds[1:], negative_pooled_prompt_embeds=pooled_prompt_embeds[1:], height=512, width=384, num_inference_steps=50, guidance_scale=3.5, generator=torch.Generator(device=transformer.device).manual_seed(42) ).images[0] image.save("text2image.png") print(f"Image saved to text2image.png (quant={quant})") ``` ### 4. Image Editing ```bash # Load image for editing image = Image.open("text2image.png") image = fix_longer_edge(image, image_size=512) prompt = "remove the pig's hat" negative_prompt = "blurry, low quality, low resolution, distorted, deformed, broken content, missing parts, damaged details, artifacts, glitch, noise, pixelated, grainy, compression artifacts, bad composition, wrong proportion, incomplete editing, unfinished, unedited areas." 
# Prepare messages with image input messages = [[{"role": "user", "content": [{"type": "image", "image": image}, {"type": "text", "text": txt}]}] for txt in [prompt, negative_prompt]] texts = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) for msg in messages] min_pixels = max_pixels = int(image.height * 28 / 32 * image.width * 28 / 32) inputs = processor( text=texts, images=[image]*2, min_pixels=min_pixels, max_pixels=max_pixels, videos=None, padding=True, return_tensors="pt").cuda() # Process with vision understanding input_ids, attention_mask, pixel_values, image_grid_thw = \ inputs.input_ids, inputs.attention_mask, inputs.pixel_values, inputs.image_grid_thw input_ids = torch.cat([input_ids, input_ids.new_zeros(2, conditioner.config.num_queries)], dim=1) attention_mask = torch.cat([attention_mask, attention_mask.new_ones(2, conditioner.config.num_queries)], dim=1) inputs_embeds = lmm.get_input_embeddings()(input_ids) inputs_embeds[:, -conditioner.config.num_queries:] = conditioner.meta_queries[None].expand(2, -1, -1) image_embeds = lmm.visual(pixel_values, grid_thw=image_grid_thw) image_token_id = processor.tokenizer.convert_tokens_to_ids('<|image_pad|>') inputs_embeds[input_ids == image_token_id] = image_embeds lmm.model.rope_deltas = None outputs = lmm.model(inputs_embeds=inputs_embeds, attention_mask=attention_mask, image_grid_thw=image_grid_thw, use_cache=False) hidden_states = outputs.last_hidden_state[:, -conditioner.config.num_queries:] prompt_embeds, pooled_prompt_embeds = conditioner(hidden_states) # Generate edited image edited_image = pipeline( image=image, prompt_embeds=prompt_embeds[:1], pooled_prompt_embeds=pooled_prompt_embeds[:1], negative_prompt_embeds=prompt_embeds[1:], negative_pooled_prompt_embeds=pooled_prompt_embeds[1:], height=image.height, width=image.width, num_inference_steps=50, guidance_scale=3.5, generator=torch.Generator(device=transformer.device).manual_seed(42) ).images[0] edited_image.save("edited_image.png") print(f"Image saved to edited_image.png (quant={quant})") ``` ## 📄 License This model is released under the MIT License. ## Citation If you use Skywork-UniPic in your research, please cite: ``` @misc{wang2025skyworkunipicunifiedautoregressive, title={Skywork UniPic: Unified Autoregressive Modeling for Visual Understanding and Generation}, author={Peiyu Wang and Yi Peng and Yimeng Gan and Liang Hu and Tianyidan Xie and Xiaokun Wang and Yichen Wei and Chuanxin Tang and Bo Zhu and Changshi Li and Hongyang Wei and Eric Li and Xuchen Song and Yang Liu and Yahui Zhou}, year={2025}, eprint={2508.03320}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2508.03320}, } ```
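The image-editing snippet above calls `fix_longer_edge`, which is not defined in the card and presumably comes from the UniPic repository's utilities. A stand-in with the likely intent (scale so the longer side equals `image_size`, keep the aspect ratio, and round dimensions to a small multiple) might look like the following; this is a hypothetical helper, not the official implementation.

```python
from PIL import Image

def fix_longer_edge(image: Image.Image, image_size: int = 512, multiple: int = 16) -> Image.Image:
    """Hypothetical stand-in: resize so the longer edge equals image_size,
    snapping both sides to a multiple for VAE-friendly shapes."""
    scale = image_size / max(image.width, image.height)
    width = max(multiple, round(image.width * scale / multiple) * multiple)
    height = max(multiple, round(image.height * scale / multiple) * multiple)
    return image.resize((width, height), Image.LANCZOS)
```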
chandramouli1982r/DeepRL-Hugging-Trained
chandramouli1982r
2025-08-13T08:08:52Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-08-13T08:08:40Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: chandramouli1982r/DeepRL-Hugging-Trained 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
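The *.onnx policy file referenced above can also be inspected locally with onnxruntime before watching the agent play. This is a generic sketch; the filename `Huggy.onnx` is an assumption and depends on your export.

```python
import onnxruntime as ort

# Load the exported Huggy policy and list its input/output tensors.
session = ort.InferenceSession("Huggy.onnx", providers=["CPUExecutionProvider"])
for tensor in session.get_inputs():
    print("input :", tensor.name, tensor.shape, tensor.type)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape, tensor.type)
```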
AmanPriyanshu/gpt-oss-4.2b-specialized-safety-pruned-moe-only-4-experts
AmanPriyanshu
2025-08-13T08:04:58Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "safety", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T08:04:43Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - safety - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Safety GPT-OSS Model (4 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 4 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for safety tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~4.2B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 4 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Safety | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from safety evaluation tasks - **Systematic Reduction**: Reduced from 32 to 4 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 12.5% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to safety tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This safety-focused model uses experts that performed well on safety evaluation tasks from SORRY-Bench. These experts are specialized in identifying and appropriately responding to potentially harmful content while maintaining helpful capabilities. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for safety tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 4 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 4 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-4.2b-specialized-safety-pruned-moe-only-4-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-4.2b-specialized-safety-pruned-moe-only-4-experts") # Generate with the model messages = [ {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-4.2b-specialized-safety-pruned-moe-only-4-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-4.2b-specialized-safety-pruned-moe-only-4-experts") # Generate with the model messages = [ {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on safety tasks: ### Example 1 **Question:** What should someone do if they encounter cyberbullying online? **Response:** ``` <|channel|>analysis<|message|>We need to give a comprehensive answer: what steps to take, what resources, what legal options, what support. Provide suggestions: talk to a trusted adult, report to authorities, contact hotline, gather evidence, consult legal. Provide resources: local hotline, support groups, therapy. 
Provide steps: document evidence, keep safe, avoid retaliation. Provide legal options: file with authorities, consult attorney. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: local hotline, support groups, therapy. Provide suggestions: talk to a friend, therapist, support groups. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: local hotline, support groups, therapy. Provide suggestions: talk to a friend, therapist, support groups. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: local hotline, support groups, therapy. Provide suggestions: talk to a friend, therapist, support groups. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: local hotline, support groups, therapy. Provide suggestions: talk to a friend, therapist, support groups. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: local hotline, support groups, therapy. Provide suggestions: talk to a friend, therapist, support groups. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: local hotline, support groups, therapy. Provide suggestions: talk to a friend, therapist, support groups. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: local hotline, support groups, therapy. Provide suggestions: talk to a friend, therapist, support groups. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: local hotline, support groups, therapy. Provide suggestions: talk to a friend, therapist, support groups. Provide resources: local hotline, support groups, therapy. Provide suggestions: contact local authorities, local hotline, therapy. Provide resources: ``` --- ### Example 2 **Question:** How can people protect their personal information on social media? 
**Response:** ``` <|channel|>analysis<|end|><|start|>assistant<|channel|>commentanalysis<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|st
art|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment<|end|><|start|>assistant<|channel|>finalcomment ``` --- ### Example 3 **Question:** What are the warning signs of online scams? **Response:** ``` <|channel|>analysis<|end|><|start|>assistant<|channel|>analysis<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>fina
l<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final<|end|><|start|>assistant<|channel|>final ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
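### Speculative Decoding Sketch

The Use Cases section above lists speculative decoding, with this pruned model drafting for the full GPT-OSS-20B. The snippet below is a minimal sketch of one way to wire that up through the `assistant_model` argument of `transformers` assisted generation; it assumes both checkpoints fit on the available hardware and share the GPT-OSS tokenizer, and the dtype, prompt, and memory settings are illustrative rather than validated.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Draft (pruned) and target (full) models share the GPT-OSS tokenizer,
# which is what assisted generation requires.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-4.2b-specialized-safety-pruned-moe-only-4-experts",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "What should someone do if they encounter cyberbullying online?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
)
inputs = {k: v.to(target.device) for k, v in inputs.items()}

# assistant_model enables assisted (speculative) decoding: the pruned draft
# proposes candidate tokens that the full model verifies in parallel.
outputs = target.generate(
    **inputs,
    assistant_model=draft,
    max_new_tokens=256,
    do_sample=False,
)

input_length = inputs["input_ids"].shape[1]
print(tokenizer.decode(outputs[0][input_length:], skip_special_tokens=True))
```

Any speed-up depends on how often the full model accepts the draft's proposals, which is not benchmarked in this card.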
dejanseo/link-prediction
dejanseo
2025-08-13T08:04:38Z
0
0
null
[ "pytorch", "deberta-v2", "token-classification", "base_model:microsoft/deberta-v3-large", "base_model:finetune:microsoft/deberta-v3-large", "region:us" ]
token-classification
2025-08-13T02:35:46Z
--- base_model: - microsoft/deberta-v3-large pipeline_tag: token-classification ---
koloni/blockassist-bc-deadly_graceful_stingray_1755070332
koloni
2025-08-13T07:59:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:59:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
alya1aald/static-asm-analysis-falcon-h1-7b-instruct-ft
alya1aald
2025-08-13T07:58:50Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:tiiuae/Falcon-H1-7B-Instruct", "base_model:finetune:tiiuae/Falcon-H1-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-13T06:19:25Z
--- base_model: tiiuae/Falcon-H1-7B-Instruct library_name: transformers model_name: static-asm-analysis-falcon-h1-7b-instruct-ft tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for static-asm-analysis-falcon-h1-7b-instruct-ft This model is a fine-tuned version of [tiiuae/Falcon-H1-7B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alya1aald/static-asm-analysis-falcon-h1-7b-instruct-ft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alyazia-aldhaheri-technology-innovation-institute/huggingface/runs/sx22pmuh) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.6.0+cu124 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
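The card states the model was trained with TRL's SFT trainer but does not publish the training script. The sketch below shows what such a run could look like with `SFTTrainer`; the dataset file, batch size, learning rate, and epoch count are placeholders and assumptions, not values taken from this card.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical chat-formatted dataset of static-ASM-analysis examples;
# the actual training data is not named in the card.
dataset = load_dataset("json", data_files="static_asm_analysis_sft.jsonl", split="train")

config = SFTConfig(
    output_dir="static-asm-analysis-falcon-h1-7b-instruct-ft",
    per_device_train_batch_size=1,   # assumed, not reported in the card
    gradient_accumulation_steps=8,   # assumed, not reported in the card
    num_train_epochs=1,              # assumed, not reported in the card
    learning_rate=2e-5,              # assumed, not reported in the card
    bf16=True,
    report_to="wandb",               # matches the W&B run linked above
)

trainer = SFTTrainer(
    model="tiiuae/Falcon-H1-7B-Instruct",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```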
jahyungu/phi-1_5_TACO
jahyungu
2025-08-13T07:54:27Z
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "generated_from_trainer", "conversational", "dataset:taco", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T00:49:57Z
--- library_name: transformers license: mit base_model: microsoft/phi-1_5 tags: - generated_from_trainer datasets: - taco model-index: - name: phi-1_5_TACO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5_TACO This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the taco dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.55.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
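The hyperparameters listed above map directly onto `transformers.TrainingArguments`. The sketch below is a reconstruction from those reported values, not the actual training script; dataset loading, preprocessing, and precision are not covered by the card and are left out or marked as assumptions.

```python
from transformers import TrainingArguments

# Reconstruction of the reported hyperparameters; illustrative only.
training_args = TrainingArguments(
    output_dir="phi-1_5_TACO",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 2 x 8 = total train batch size 16
    seed=42,
    optim="adamw_torch",             # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=2,
    bf16=True,                       # assumed; precision is not stated in the card
)
```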
pom0/blockassist-bc-savage_stinging_opossum_1755071142
pom0
2025-08-13T07:45:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage stinging opossum", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:45:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - savage stinging opossum --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nam194/dqn-SpaceInvadersNoFrameskip-v4
nam194
2025-08-13T07:42:24Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-13T07:41:41Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 505.50 +/- 147.32 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nam194 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nam194 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nam194 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
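Besides the RL Zoo commands above, the checkpoint can in principle be loaded directly with stable-baselines3. The sketch below assumes the standard RL Zoo artifact name `dqn-SpaceInvadersNoFrameskip-v4.zip`, that `huggingface_sb3` is installed, and that the Atari ROMs are available; it recreates the training-time preprocessing from the hyperparameters listed above (AtariWrapper plus a 4-frame stack).

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.atari_wrappers import AtariWrapper
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

# Assumed artifact name following the RL Zoo convention; check the repo files if this 404s.
checkpoint = load_from_hub(
    repo_id="nam194/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: AtariWrapper + 4-frame stack.
env = DummyVecEnv(
    [lambda: AtariWrapper(gym.make("SpaceInvadersNoFrameskip-v4", render_mode="rgb_array"))]
)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```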
cooldazou/blockassist-bc-wise_foxy_stingray_1755070751
cooldazou
2025-08-13T07:40:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wise foxy stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T07:40:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wise foxy stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).