Dataset schema: modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 to 2025-08-13) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 502 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 to 2025-08-13) | card (string, 11–1.01M chars)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| donoway/TinyStoriesV2_Llama-3.2-1B-urpkj4v3 | donoway | 2025-08-13T11:02:52Z | 0 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:adapter:meta-llama/Llama-3.2-1B", "license:llama3.2", "region:us"] | null | 2025-08-13T11:02:47Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: TinyStoriesV2_Llama-3.2-1B-urpkj4v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyStoriesV2_Llama-3.2-1B-urpkj4v3
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set (a consistency check follows this list):
- Loss: 1.1525
- Model Preparation Time: 0.0176
- Token Accuracy: 0.6754
- Token Error Rate: 0.3246
- Perplexity: 3.1658
- Summed Loss: 3043180.2385
- Total Tokens: 2640694
- Exp Loss: 1.1524
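These figures are internally consistent: token accuracy and token error rate sum to 1, the exp loss equals the summed loss divided by the total token count, and perplexity is its exponential. A quick check (not part of the original card):

```python
import math

summed_loss = 3043180.2385
total_tokens = 2640694

exp_loss = summed_loss / total_tokens   # average per-token loss
print(round(exp_loss, 4))               # ≈ 1.1524 ("Exp Loss")
print(round(math.exp(exp_loss), 4))     # ≈ 3.1658 ("Perplexity")
print(0.6754 + 0.3246)                  # 1.0 (accuracy + error rate)
```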
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows this list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 1e-05
- num_epochs: 100
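As a minimal sketch, these settings map onto `transformers` `TrainingArguments` roughly as follows (a hypothetical reconstruction, not the author's actual script; the output directory is assumed):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="TinyStoriesV2_Llama-3.2-1B-urpkj4v3",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",            # AdamW, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="constant",
    warmup_ratio=1e-5,
    num_train_epochs=100,
)
```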
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Token Accuracy | Token Error Rate | Perplexity | Summed Loss | Total Tokens | Exp Loss |
|:-------------:|:------:|:-----:|:---------------:|:----------------------:|:--------------:|:----------------:|:----------:|:------------:|:------------:|:--------:|
| No log | 0 | 0 | 1.7513 | 0.0176 | 0.5691 | 0.4309 | 5.7590 | 4623224.4282 | 2640694 | 1.7508 |
| 1.2569 | 0.0012 | 100 | 1.3040 | 0.0176 | 0.6464 | 0.3536 | 3.6837 | 3443235.6548 | 2640694 | 1.3039 |
| 1.2261 | 0.0024 | 200 | 1.2750 | 0.0176 | 0.6524 | 0.3476 | 3.5783 | 3366557.6448 | 2640694 | 1.2749 |
| 1.2114 | 0.0036 | 300 | 1.2554 | 0.0176 | 0.6547 | 0.3453 | 3.5090 | 3314921.4094 | 2640694 | 1.2553 |
| 1.2036 | 0.0049 | 400 | 1.2264 | 0.0176 | 0.6600 | 0.3400 | 3.4087 | 3238384.3721 | 2640694 | 1.2263 |
| 1.2383 | 0.0061 | 500 | 1.2199 | 0.0176 | 0.6609 | 0.3391 | 3.3866 | 3221156.6045 | 2640694 | 1.2198 |
| 1.2068 | 0.0073 | 600 | 1.2146 | 0.0176 | 0.6617 | 0.3383 | 3.3688 | 3207234.0923 | 2640694 | 1.2145 |
| 1.2313 | 0.0085 | 700 | 1.2100 | 0.0176 | 0.6627 | 0.3373 | 3.3534 | 3195186.1646 | 2640694 | 1.2100 |
| 1.2401 | 0.0097 | 800 | 1.2066 | 0.0176 | 0.6637 | 0.3363 | 3.3418 | 3185990.6555 | 2640694 | 1.2065 |
| 1.1631 | 0.0109 | 900 | 1.2024 | 0.0176 | 0.6643 | 0.3357 | 3.3279 | 3175034.1326 | 2640694 | 1.2023 |
| 1.2011 | 0.0122 | 1000 | 1.1990 | 0.0176 | 0.6653 | 0.3347 | 3.3166 | 3166009.6125 | 2640694 | 1.1989 |
| 1.1859 | 0.0134 | 1100 | 1.1972 | 0.0176 | 0.6657 | 0.3343 | 3.3106 | 3161229.3738 | 2640694 | 1.1971 |
| 1.1366 | 0.0146 | 1200 | 1.1956 | 0.0176 | 0.6656 | 0.3344 | 3.3052 | 3156940.1401 | 2640694 | 1.1955 |
| 1.1676 | 0.0158 | 1300 | 1.1928 | 0.0176 | 0.6665 | 0.3335 | 3.2960 | 3149589.8091 | 2640694 | 1.1927 |
| 1.2417 | 0.0170 | 1400 | 1.1924 | 0.0176 | 0.6669 | 0.3331 | 3.2948 | 3148580.7996 | 2640694 | 1.1923 |
| 1.1751 | 0.0182 | 1500 | 1.1916 | 0.0176 | 0.6665 | 0.3335 | 3.2923 | 3146604.5005 | 2640694 | 1.1916 |
| 1.2389 | 0.0194 | 1600 | 1.1879 | 0.0176 | 0.6677 | 0.3323 | 3.2799 | 3136617.3774 | 2640694 | 1.1878 |
| 1.2219 | 0.0207 | 1700 | 1.1885 | 0.0176 | 0.6673 | 0.3327 | 3.2820 | 3138359.9397 | 2640694 | 1.1885 |
| 1.1823 | 0.0219 | 1800 | 1.1865 | 0.0176 | 0.6678 | 0.3322 | 3.2754 | 3133046.6260 | 2640694 | 1.1864 |
| 1.1683 | 0.0231 | 1900 | 1.1866 | 0.0176 | 0.6676 | 0.3324 | 3.2757 | 3133291.0630 | 2640694 | 1.1865 |
| 1.2647 | 0.0243 | 2000 | 1.1845 | 0.0176 | 0.6680 | 0.3320 | 3.2688 | 3127666.5503 | 2640694 | 1.1844 |
| 1.124 | 0.0255 | 2100 | 1.1831 | 0.0176 | 0.6684 | 0.3316 | 3.2642 | 3124010.0591 | 2640694 | 1.1830 |
| 1.2189 | 0.0267 | 2200 | 1.1851 | 0.0176 | 0.6680 | 0.3320 | 3.2707 | 3129192.8560 | 2640694 | 1.1850 |
| 1.1759 | 0.0280 | 2300 | 1.1826 | 0.0176 | 0.6685 | 0.3315 | 3.2626 | 3122695.3787 | 2640694 | 1.1825 |
| 1.2055 | 0.0292 | 2400 | 1.1805 | 0.0176 | 0.6692 | 0.3308 | 3.2559 | 3117252.1963 | 2640694 | 1.1805 |
| 1.139 | 0.0304 | 2500 | 1.1784 | 0.0176 | 0.6694 | 0.3306 | 3.2489 | 3111546.7966 | 2640694 | 1.1783 |
| 1.1467 | 0.0316 | 2600 | 1.1814 | 0.0176 | 0.6687 | 0.3313 | 3.2587 | 3119498.2261 | 2640694 | 1.1813 |
| 1.1233 | 0.0328 | 2700 | 1.1801 | 0.0176 | 0.6692 | 0.3308 | 3.2544 | 3116065.2009 | 2640694 | 1.1800 |
| 1.1567 | 0.0340 | 2800 | 1.1788 | 0.0176 | 0.6695 | 0.3305 | 3.2504 | 3112777.7595 | 2640694 | 1.1788 |
| 1.2171 | 0.0352 | 2900 | 1.1770 | 0.0176 | 0.6699 | 0.3301 | 3.2443 | 3107805.8584 | 2640694 | 1.1769 |
| 1.2191 | 0.0365 | 3000 | 1.1836 | 0.0176 | 0.6684 | 0.3316 | 3.2661 | 3125493.2979 | 2640694 | 1.1836 |
| 1.1921 | 0.0377 | 3100 | 1.1766 | 0.0176 | 0.6699 | 0.3301 | 3.2432 | 3106941.3726 | 2640694 | 1.1766 |
| 1.2471 | 0.0389 | 3200 | 1.1762 | 0.0176 | 0.6700 | 0.3300 | 3.2420 | 3105919.1436 | 2640694 | 1.1762 |
| 1.1774 | 0.0401 | 3300 | 1.1761 | 0.0176 | 0.6704 | 0.3296 | 3.2414 | 3105510.8982 | 2640694 | 1.1760 |
| 1.268 | 0.0413 | 3400 | 1.1749 | 0.0176 | 0.6704 | 0.3296 | 3.2376 | 3102384.7751 | 2640694 | 1.1748 |
| 1.1824 | 0.0425 | 3500 | 1.1763 | 0.0176 | 0.6701 | 0.3299 | 3.2420 | 3105942.4053 | 2640694 | 1.1762 |
| 1.1595 | 0.0438 | 3600 | 1.1744 | 0.0176 | 0.6705 | 0.3295 | 3.2359 | 3100996.8003 | 2640694 | 1.1743 |
| 1.2102 | 0.0450 | 3700 | 1.1751 | 0.0176 | 0.6704 | 0.3296 | 3.2383 | 3102954.3594 | 2640694 | 1.1751 |
| 1.1465 | 0.0462 | 3800 | 1.1721 | 0.0176 | 0.6712 | 0.3288 | 3.2286 | 3095061.9246 | 2640694 | 1.1721 |
| 1.0562 | 0.0474 | 3900 | 1.1736 | 0.0176 | 0.6707 | 0.3293 | 3.2333 | 3098823.6599 | 2640694 | 1.1735 |
| 1.1901 | 0.0486 | 4000 | 1.1750 | 0.0176 | 0.6702 | 0.3298 | 3.2378 | 3102574.5845 | 2640694 | 1.1749 |
| 1.1493 | 0.0498 | 4100 | 1.1715 | 0.0176 | 0.6713 | 0.3287 | 3.2266 | 3093399.3740 | 2640694 | 1.1714 |
| 1.1442 | 0.0510 | 4200 | 1.1731 | 0.0176 | 0.6709 | 0.3291 | 3.2317 | 3097584.2344 | 2640694 | 1.1730 |
| 1.1501 | 0.0523 | 4300 | 1.1725 | 0.0176 | 0.6710 | 0.3290 | 3.2300 | 3096152.4263 | 2640694 | 1.1725 |
| 1.1754 | 0.0535 | 4400 | 1.1717 | 0.0176 | 0.6708 | 0.3292 | 3.2274 | 3094042.4983 | 2640694 | 1.1717 |
| 1.2584 | 0.0547 | 4500 | 1.1710 | 0.0176 | 0.6713 | 0.3287 | 3.2251 | 3092182.4751 | 2640694 | 1.1710 |
| 1.1023 | 0.0559 | 4600 | 1.1706 | 0.0176 | 0.6714 | 0.3286 | 3.2239 | 3091156.1873 | 2640694 | 1.1706 |
| 1.2072 | 0.0571 | 4700 | 1.1707 | 0.0176 | 0.6713 | 0.3287 | 3.2239 | 3091165.0820 | 2640694 | 1.1706 |
| 1.0966 | 0.0583 | 4800 | 1.1704 | 0.0176 | 0.6715 | 0.3285 | 3.2232 | 3090633.5090 | 2640694 | 1.1704 |
| 1.1564 | 0.0596 | 4900 | 1.1681 | 0.0176 | 0.6720 | 0.3280 | 3.2158 | 3084545.1221 | 2640694 | 1.1681 |
| 1.1508 | 0.0608 | 5000 | 1.1700 | 0.0176 | 0.6716 | 0.3284 | 3.2216 | 3089320.7778 | 2640694 | 1.1699 |
| 1.2093 | 0.0620 | 5100 | 1.1719 | 0.0176 | 0.6712 | 0.3288 | 3.2278 | 3094341.8901 | 2640694 | 1.1718 |
| 1.2239 | 0.0632 | 5200 | 1.1705 | 0.0176 | 0.6714 | 0.3286 | 3.2232 | 3090638.0591 | 2640694 | 1.1704 |
| 1.1233 | 0.0644 | 5300 | 1.1670 | 0.0176 | 0.6724 | 0.3276 | 3.2121 | 3081504.7913 | 2640694 | 1.1669 |
| 1.2494 | 0.0656 | 5400 | 1.1701 | 0.0176 | 0.6717 | 0.3283 | 3.2222 | 3089783.9871 | 2640694 | 1.1701 |
| 1.0811 | 0.0668 | 5500 | 1.1670 | 0.0176 | 0.6725 | 0.3275 | 3.2120 | 3081434.5095 | 2640694 | 1.1669 |
| 1.152 | 0.0681 | 5600 | 1.1681 | 0.0176 | 0.6721 | 0.3279 | 3.2158 | 3084538.5535 | 2640694 | 1.1681 |
| 1.1712 | 0.0693 | 5700 | 1.1692 | 0.0176 | 0.6719 | 0.3281 | 3.2191 | 3087215.9036 | 2640694 | 1.1691 |
| 1.1205 | 0.0705 | 5800 | 1.1686 | 0.0176 | 0.6718 | 0.3282 | 3.2174 | 3085859.4868 | 2640694 | 1.1686 |
| 1.2231 | 0.0717 | 5900 | 1.1673 | 0.0176 | 0.6723 | 0.3277 | 3.2133 | 3082456.3384 | 2640694 | 1.1673 |
| 1.1367 | 0.0729 | 6000 | 1.1680 | 0.0176 | 0.6717 | 0.3283 | 3.2153 | 3084084.9756 | 2640694 | 1.1679 |
| 1.1512 | 0.0741 | 6100 | 1.1666 | 0.0176 | 0.6722 | 0.3278 | 3.2109 | 3080526.5339 | 2640694 | 1.1666 |
| 1.1608 | 0.0754 | 6200 | 1.1675 | 0.0176 | 0.6722 | 0.3278 | 3.2137 | 3082814.7917 | 2640694 | 1.1674 |
| 1.1773 | 0.0766 | 6300 | 1.1686 | 0.0176 | 0.6717 | 0.3283 | 3.2172 | 3085714.2290 | 2640694 | 1.1685 |
| 1.2041 | 0.0778 | 6400 | 1.1665 | 0.0176 | 0.6723 | 0.3277 | 3.2107 | 3080344.3835 | 2640694 | 1.1665 |
| 1.1597 | 0.0790 | 6500 | 1.1672 | 0.0176 | 0.6721 | 0.3279 | 3.2127 | 3082009.5845 | 2640694 | 1.1671 |
| 1.219 | 0.0802 | 6600 | 1.1656 | 0.0176 | 0.6723 | 0.3277 | 3.2078 | 3077931.4578 | 2640694 | 1.1656 |
| 1.1578 | 0.0814 | 6700 | 1.1687 | 0.0176 | 0.6718 | 0.3282 | 3.2176 | 3086027.1304 | 2640694 | 1.1686 |
| 1.1535 | 0.0826 | 6800 | 1.1666 | 0.0176 | 0.6723 | 0.3277 | 3.2109 | 3080491.3015 | 2640694 | 1.1665 |
| 1.2193 | 0.0839 | 6900 | 1.1667 | 0.0176 | 0.6726 | 0.3274 | 3.2111 | 3080646.9297 | 2640694 | 1.1666 |
| 1.1603 | 0.0851 | 7000 | 1.1671 | 0.0176 | 0.6724 | 0.3276 | 3.2124 | 3081732.2048 | 2640694 | 1.1670 |
| 1.1779 | 0.0863 | 7100 | 1.1661 | 0.0176 | 0.6722 | 0.3278 | 3.2094 | 3079283.9666 | 2640694 | 1.1661 |
| 1.2216 | 0.0875 | 7200 | 1.1650 | 0.0176 | 0.6729 | 0.3271 | 3.2057 | 3076246.2913 | 2640694 | 1.1649 |
| 1.1887 | 0.0887 | 7300 | 1.1651 | 0.0176 | 0.6728 | 0.3272 | 3.2060 | 3076511.1555 | 2640694 | 1.1650 |
| 1.1503 | 0.0899 | 7400 | 1.1652 | 0.0176 | 0.6726 | 0.3274 | 3.2063 | 3076707.5808 | 2640694 | 1.1651 |
| 1.1457 | 0.0912 | 7500 | 1.1637 | 0.0176 | 0.6730 | 0.3270 | 3.2015 | 3072766.9187 | 2640694 | 1.1636 |
| 1.1506 | 0.0924 | 7600 | 1.1645 | 0.0176 | 0.6731 | 0.3269 | 3.2042 | 3074975.6101 | 2640694 | 1.1645 |
| 1.1697 | 0.0936 | 7700 | 1.1652 | 0.0176 | 0.6725 | 0.3275 | 3.2064 | 3076775.2280 | 2640694 | 1.1651 |
| 1.1528 | 0.0948 | 7800 | 1.1655 | 0.0176 | 0.6725 | 0.3275 | 3.2073 | 3077512.0928 | 2640694 | 1.1654 |
| 1.2377 | 0.0960 | 7900 | 1.1634 | 0.0176 | 0.6729 | 0.3271 | 3.2008 | 3072145.2961 | 2640694 | 1.1634 |
| 1.1785 | 0.0972 | 8000 | 1.1635 | 0.0176 | 0.6727 | 0.3273 | 3.2010 | 3072316.5308 | 2640694 | 1.1635 |
| 1.1347 | 0.0985 | 8100 | 1.1645 | 0.0176 | 0.6731 | 0.3269 | 3.2042 | 3074952.6060 | 2640694 | 1.1644 |
| 1.1646 | 0.0997 | 8200 | 1.1617 | 0.0176 | 0.6734 | 0.3266 | 3.1952 | 3067585.1196 | 2640694 | 1.1617 |
| 1.1023 | 0.1009 | 8300 | 1.1650 | 0.0176 | 0.6726 | 0.3274 | 3.2059 | 3076369.6384 | 2640694 | 1.1650 |
| 1.305 | 0.1021 | 8400 | 1.1633 | 0.0176 | 0.6731 | 0.3269 | 3.2004 | 3071868.8569 | 2640694 | 1.1633 |
| 1.1284 | 0.1033 | 8500 | 1.1632 | 0.0176 | 0.6727 | 0.3273 | 3.1999 | 3071414.3730 | 2640694 | 1.1631 |
| 1.1859 | 0.1045 | 8600 | 1.1628 | 0.0176 | 0.6728 | 0.3272 | 3.1987 | 3070480.0918 | 2640694 | 1.1628 |
| 1.1338 | 0.1057 | 8700 | 1.1629 | 0.0176 | 0.6730 | 0.3270 | 3.1991 | 3070798.8152 | 2640694 | 1.1629 |
| 1.1901 | 0.1070 | 8800 | 1.1622 | 0.0176 | 0.6732 | 0.3268 | 3.1967 | 3068772.8962 | 2640694 | 1.1621 |
| 1.1468 | 0.1082 | 8900 | 1.1629 | 0.0176 | 0.6729 | 0.3271 | 3.1989 | 3070601.0066 | 2640694 | 1.1628 |
| 1.1551 | 0.1094 | 9000 | 1.1642 | 0.0176 | 0.6730 | 0.3270 | 3.2031 | 3074052.3931 | 2640694 | 1.1641 |
| 1.1415 | 0.1106 | 9100 | 1.1629 | 0.0176 | 0.6733 | 0.3267 | 3.1989 | 3070590.1243 | 2640694 | 1.1628 |
| 1.16 | 0.1118 | 9200 | 1.1642 | 0.0176 | 0.6730 | 0.3270 | 3.2032 | 3074185.7690 | 2640694 | 1.1642 |
| 1.0755 | 0.1130 | 9300 | 1.1619 | 0.0176 | 0.6735 | 0.3265 | 3.1958 | 3068029.6382 | 2640694 | 1.1618 |
| 1.1747 | 0.1143 | 9400 | 1.1636 | 0.0176 | 0.6726 | 0.3274 | 3.2013 | 3072566.6504 | 2640694 | 1.1635 |
| 1.1341 | 0.1155 | 9500 | 1.1641 | 0.0176 | 0.6731 | 0.3269 | 3.2029 | 3073882.4390 | 2640694 | 1.1640 |
| 1.1902 | 0.1167 | 9600 | 1.1660 | 0.0176 | 0.6724 | 0.3276 | 3.2089 | 3078847.6938 | 2640694 | 1.1659 |
| 1.0798 | 0.1179 | 9700 | 1.1614 | 0.0176 | 0.6736 | 0.3264 | 3.1941 | 3066681.2961 | 2640694 | 1.1613 |
| 1.1922 | 0.1191 | 9800 | 1.1617 | 0.0176 | 0.6735 | 0.3265 | 3.1951 | 3067511.3120 | 2640694 | 1.1616 |
| 1.2114 | 0.1203 | 9900 | 1.1619 | 0.0176 | 0.6732 | 0.3268 | 3.1957 | 3068005.4448 | 2640694 | 1.1618 |
| 1.1804 | 0.1215 | 10000 | 1.1616 | 0.0176 | 0.6734 | 0.3266 | 3.1948 | 3067235.7944 | 2640694 | 1.1615 |
| 1.176 | 0.1228 | 10100 | 1.1619 | 0.0176 | 0.6732 | 0.3268 | 3.1959 | 3068106.4937 | 2640694 | 1.1619 |
| 1.2139 | 0.1240 | 10200 | 1.1619 | 0.0176 | 0.6734 | 0.3266 | 3.1957 | 3068001.6694 | 2640694 | 1.1618 |
| 1.1789 | 0.1252 | 10300 | 1.1624 | 0.0176 | 0.6733 | 0.3267 | 3.1974 | 3069345.8162 | 2640694 | 1.1623 |
| 1.1612 | 0.1264 | 10400 | 1.1631 | 0.0176 | 0.6728 | 0.3272 | 3.1996 | 3071231.4404 | 2640694 | 1.1630 |
| 1.1193 | 0.1276 | 10500 | 1.1621 | 0.0176 | 0.6733 | 0.3267 | 3.1966 | 3068736.4324 | 2640694 | 1.1621 |
| 1.195 | 0.1288 | 10600 | 1.1626 | 0.0176 | 0.6734 | 0.3266 | 3.1982 | 3070003.9492 | 2640694 | 1.1626 |
| 1.1626 | 0.1301 | 10700 | 1.1610 | 0.0176 | 0.6738 | 0.3262 | 3.1929 | 3065641.1260 | 2640694 | 1.1609 |
| 1.1222 | 0.1313 | 10800 | 1.1603 | 0.0176 | 0.6738 | 0.3262 | 3.1907 | 3063800.4529 | 2640694 | 1.1602 |
| 1.1739 | 0.1325 | 10900 | 1.1641 | 0.0176 | 0.6729 | 0.3271 | 3.2028 | 3073868.6157 | 2640694 | 1.1640 |
| 1.2339 | 0.1337 | 11000 | 1.1625 | 0.0176 | 0.6731 | 0.3269 | 3.1976 | 3069540.3071 | 2640694 | 1.1624 |
| 1.1053 | 0.1349 | 11100 | 1.1631 | 0.0176 | 0.6730 | 0.3270 | 3.1998 | 3071342.9424 | 2640694 | 1.1631 |
| 1.1766 | 0.1361 | 11200 | 1.1617 | 0.0176 | 0.6735 | 0.3265 | 3.1952 | 3067526.0657 | 2640694 | 1.1616 |
| 1.1243 | 0.1373 | 11300 | 1.1641 | 0.0176 | 0.6729 | 0.3271 | 3.2030 | 3073972.6052 | 2640694 | 1.1641 |
| 1.078 | 0.1386 | 11400 | 1.1613 | 0.0176 | 0.6733 | 0.3267 | 3.1938 | 3066369.2188 | 2640694 | 1.1612 |
| 1.1873 | 0.1398 | 11500 | 1.1605 | 0.0176 | 0.6739 | 0.3261 | 3.1914 | 3064406.8967 | 2640694 | 1.1605 |
| 1.1713 | 0.1410 | 11600 | 1.1614 | 0.0176 | 0.6732 | 0.3268 | 3.1944 | 3066883.8167 | 2640694 | 1.1614 |
| 1.1254 | 0.1422 | 11700 | 1.1632 | 0.0176 | 0.6731 | 0.3269 | 3.2000 | 3071529.9399 | 2640694 | 1.1632 |
| 1.1755 | 0.1434 | 11800 | 1.1611 | 0.0176 | 0.6737 | 0.3263 | 3.1933 | 3066010.9358 | 2640694 | 1.1611 |
| 1.1403 | 0.1446 | 11900 | 1.1610 | 0.0176 | 0.6735 | 0.3265 | 3.1931 | 3065800.0869 | 2640694 | 1.1610 |
| 1.1611 | 0.1459 | 12000 | 1.1607 | 0.0176 | 0.6736 | 0.3264 | 3.1920 | 3064907.2505 | 2640694 | 1.1606 |
| 1.1446 | 0.1471 | 12100 | 1.1599 | 0.0176 | 0.6738 | 0.3262 | 3.1895 | 3062824.7629 | 2640694 | 1.1599 |
| 1.1479 | 0.1483 | 12200 | 1.1615 | 0.0176 | 0.6733 | 0.3267 | 3.1947 | 3067116.6277 | 2640694 | 1.1615 |
| 1.121 | 0.1495 | 12300 | 1.1607 | 0.0176 | 0.6738 | 0.3262 | 3.1920 | 3064918.3704 | 2640694 | 1.1606 |
| 1.1693 | 0.1507 | 12400 | 1.1620 | 0.0176 | 0.6730 | 0.3270 | 3.1962 | 3068421.9802 | 2640694 | 1.1620 |
| 1.1925 | 0.1519 | 12500 | 1.1634 | 0.0176 | 0.6728 | 0.3272 | 3.2007 | 3072131.1509 | 2640694 | 1.1634 |
| 1.143 | 0.1531 | 12600 | 1.1605 | 0.0176 | 0.6739 | 0.3261 | 3.1912 | 3064269.9768 | 2640694 | 1.1604 |
| 1.2012 | 0.1544 | 12700 | 1.1602 | 0.0176 | 0.6736 | 0.3264 | 3.1905 | 3063670.3145 | 2640694 | 1.1602 |
| 1.1563 | 0.1556 | 12800 | 1.1600 | 0.0176 | 0.6736 | 0.3264 | 3.1897 | 3062974.3860 | 2640694 | 1.1599 |
| 1.1444 | 0.1568 | 12900 | 1.1601 | 0.0176 | 0.6737 | 0.3263 | 3.1901 | 3063330.9849 | 2640694 | 1.1600 |
| 1.1653 | 0.1580 | 13000 | 1.1597 | 0.0176 | 0.6737 | 0.3263 | 3.1888 | 3062304.0293 | 2640694 | 1.1597 |
| 1.1816 | 0.1592 | 13100 | 1.1614 | 0.0176 | 0.6734 | 0.3266 | 3.1943 | 3066842.8777 | 2640694 | 1.1614 |
| 1.1789 | 0.1604 | 13200 | 1.1595 | 0.0176 | 0.6739 | 0.3261 | 3.1882 | 3061752.7795 | 2640694 | 1.1595 |
| 1.152 | 0.1617 | 13300 | 1.1621 | 0.0176 | 0.6733 | 0.3267 | 3.1966 | 3068682.2654 | 2640694 | 1.1621 |
| 1.1263 | 0.1629 | 13400 | 1.1612 | 0.0176 | 0.6735 | 0.3265 | 3.1935 | 3066184.1140 | 2640694 | 1.1611 |
| 1.1394 | 0.1641 | 13500 | 1.1608 | 0.0176 | 0.6739 | 0.3261 | 3.1922 | 3065082.8247 | 2640694 | 1.1607 |
| 1.1528 | 0.1653 | 13600 | 1.1604 | 0.0176 | 0.6737 | 0.3263 | 3.1910 | 3064126.5076 | 2640694 | 1.1603 |
| 1.1887 | 0.1665 | 13700 | 1.1613 | 0.0176 | 0.6734 | 0.3266 | 3.1939 | 3066501.9023 | 2640694 | 1.1612 |
| 1.1385 | 0.1677 | 13800 | 1.1585 | 0.0176 | 0.6742 | 0.3258 | 3.1849 | 3059060.5854 | 2640694 | 1.1584 |
| 1.1537 | 0.1689 | 13900 | 1.1609 | 0.0176 | 0.6739 | 0.3261 | 3.1928 | 3065549.7197 | 2640694 | 1.1609 |
| 1.1059 | 0.1702 | 14000 | 1.1593 | 0.0176 | 0.6740 | 0.3260 | 3.1875 | 3061203.8608 | 2640694 | 1.1592 |
| 1.1647 | 0.1714 | 14100 | 1.1596 | 0.0176 | 0.6738 | 0.3262 | 3.1886 | 3062096.2544 | 2640694 | 1.1596 |
| 1.1572 | 0.1726 | 14200 | 1.1610 | 0.0176 | 0.6737 | 0.3263 | 3.1928 | 3065596.2395 | 2640694 | 1.1609 |
| 1.1799 | 0.1738 | 14300 | 1.1614 | 0.0176 | 0.6737 | 0.3263 | 3.1941 | 3066674.1536 | 2640694 | 1.1613 |
| 1.1104 | 0.1750 | 14400 | 1.1592 | 0.0176 | 0.6739 | 0.3261 | 3.1871 | 3060834.1904 | 2640694 | 1.1591 |
| 1.245 | 0.1762 | 14500 | 1.1610 | 0.0176 | 0.6733 | 0.3267 | 3.1929 | 3065691.8489 | 2640694 | 1.1609 |
| 1.146 | 0.1775 | 14600 | 1.1594 | 0.0176 | 0.6738 | 0.3262 | 3.1878 | 3061419.4602 | 2640694 | 1.1593 |
| 1.1973 | 0.1787 | 14700 | 1.1594 | 0.0176 | 0.6740 | 0.3260 | 3.1877 | 3061370.3850 | 2640694 | 1.1593 |
| 1.2136 | 0.1799 | 14800 | 1.1598 | 0.0176 | 0.6739 | 0.3261 | 3.1892 | 3062587.5840 | 2640694 | 1.1598 |
| 1.2153 | 0.1811 | 14900 | 1.1596 | 0.0176 | 0.6739 | 0.3261 | 3.1886 | 3062112.1455 | 2640694 | 1.1596 |
| 1.1669 | 0.1823 | 15000 | 1.1628 | 0.0176 | 0.6735 | 0.3265 | 3.1987 | 3070431.8621 | 2640694 | 1.1627 |
| 1.1851 | 0.1835 | 15100 | 1.1595 | 0.0176 | 0.6740 | 0.3260 | 3.1881 | 3061648.6204 | 2640694 | 1.1594 |
| 1.1217 | 0.1847 | 15200 | 1.1599 | 0.0176 | 0.6738 | 0.3262 | 3.1894 | 3062796.7461 | 2640694 | 1.1598 |
| 1.1146 | 0.1860 | 15300 | 1.1596 | 0.0176 | 0.6739 | 0.3261 | 3.1885 | 3062049.1372 | 2640694 | 1.1596 |
| 1.1907 | 0.1872 | 15400 | 1.1593 | 0.0176 | 0.6740 | 0.3260 | 3.1876 | 3061234.1401 | 2640694 | 1.1593 |
| 1.0886 | 0.1884 | 15500 | 1.1595 | 0.0176 | 0.6739 | 0.3261 | 3.1881 | 3061674.5730 | 2640694 | 1.1594 |
| 1.1937 | 0.1896 | 15600 | 1.1598 | 0.0176 | 0.6737 | 0.3263 | 3.1890 | 3062413.4280 | 2640694 | 1.1597 |
| 1.1626 | 0.1908 | 15700 | 1.1597 | 0.0176 | 0.6742 | 0.3258 | 3.1887 | 3062212.6938 | 2640694 | 1.1596 |
| 1.2494 | 0.1920 | 15800 | 1.1587 | 0.0176 | 0.6740 | 0.3260 | 3.1855 | 3059546.1497 | 2640694 | 1.1586 |
| 1.1418 | 0.1933 | 15900 | 1.1610 | 0.0176 | 0.6738 | 0.3262 | 3.1930 | 3065733.0254 | 2640694 | 1.1610 |
| 1.1828 | 0.1945 | 16000 | 1.1595 | 0.0176 | 0.6739 | 0.3261 | 3.1882 | 3061728.8132 | 2640694 | 1.1594 |
| 1.14 | 0.1957 | 16100 | 1.1597 | 0.0176 | 0.6743 | 0.3257 | 3.1887 | 3062209.3459 | 2640694 | 1.1596 |
| 1.1183 | 0.1969 | 16200 | 1.1583 | 0.0176 | 0.6740 | 0.3260 | 3.1842 | 3058434.2256 | 2640694 | 1.1582 |
| 1.1465 | 0.1981 | 16300 | 1.1584 | 0.0176 | 0.6745 | 0.3255 | 3.1846 | 3058820.6899 | 2640694 | 1.1583 |
| 1.1617 | 0.1993 | 16400 | 1.1593 | 0.0176 | 0.6740 | 0.3260 | 3.1875 | 3061162.5774 | 2640694 | 1.1592 |
| 1.0919 | 0.2005 | 16500 | 1.1588 | 0.0176 | 0.6739 | 0.3261 | 3.1860 | 3059965.8423 | 2640694 | 1.1588 |
| 1.1994 | 0.2018 | 16600 | 1.1640 | 0.0176 | 0.6725 | 0.3275 | 3.2026 | 3073629.5388 | 2640694 | 1.1639 |
| 1.1429 | 0.2030 | 16700 | 1.1581 | 0.0176 | 0.6742 | 0.3258 | 3.1837 | 3058069.1157 | 2640694 | 1.1581 |
| 1.0721 | 0.2042 | 16800 | 1.1575 | 0.0176 | 0.6743 | 0.3257 | 3.1818 | 3056488.1675 | 2640694 | 1.1575 |
| 1.1548 | 0.2054 | 16900 | 1.1582 | 0.0176 | 0.6742 | 0.3258 | 3.1839 | 3058193.0916 | 2640694 | 1.1581 |
| 1.1681 | 0.2066 | 17000 | 1.1604 | 0.0176 | 0.6736 | 0.3264 | 3.1909 | 3064039.8254 | 2640694 | 1.1603 |
| 1.2143 | 0.2078 | 17100 | 1.1591 | 0.0176 | 0.6741 | 0.3259 | 3.1868 | 3060632.8459 | 2640694 | 1.1590 |
| 1.1723 | 0.2091 | 17200 | 1.1581 | 0.0176 | 0.6742 | 0.3258 | 3.1839 | 3058166.7698 | 2640694 | 1.1581 |
| 1.2437 | 0.2103 | 17300 | 1.1575 | 0.0176 | 0.6742 | 0.3258 | 3.1819 | 3056575.8796 | 2640694 | 1.1575 |
| 1.1073 | 0.2115 | 17400 | 1.1584 | 0.0176 | 0.6745 | 0.3255 | 3.1847 | 3058865.7974 | 2640694 | 1.1584 |
| 1.2129 | 0.2127 | 17500 | 1.1586 | 0.0176 | 0.6741 | 0.3259 | 3.1854 | 3059438.1680 | 2640694 | 1.1586 |
| 1.1207 | 0.2139 | 17600 | 1.1587 | 0.0176 | 0.6740 | 0.3260 | 3.1855 | 3059554.7896 | 2640694 | 1.1586 |
| 1.1606 | 0.2151 | 17700 | 1.1583 | 0.0176 | 0.6741 | 0.3259 | 3.1844 | 3058622.5332 | 2640694 | 1.1583 |
| 1.122 | 0.2163 | 17800 | 1.1598 | 0.0176 | 0.6740 | 0.3260 | 3.1891 | 3062492.9246 | 2640694 | 1.1597 |
| 1.1353 | 0.2176 | 17900 | 1.1587 | 0.0176 | 0.6740 | 0.3260 | 3.1857 | 3059664.1055 | 2640694 | 1.1587 |
| 1.1686 | 0.2188 | 18000 | 1.1583 | 0.0176 | 0.6739 | 0.3261 | 3.1843 | 3058545.4570 | 2640694 | 1.1582 |
| 1.2439 | 0.2200 | 18100 | 1.1579 | 0.0176 | 0.6745 | 0.3255 | 3.1830 | 3057444.1453 | 2640694 | 1.1578 |
| 1.1175 | 0.2212 | 18200 | 1.1579 | 0.0176 | 0.6743 | 0.3257 | 3.1832 | 3057607.9326 | 2640694 | 1.1579 |
| 1.1409 | 0.2224 | 18300 | 1.1582 | 0.0176 | 0.6743 | 0.3257 | 3.1839 | 3058221.2646 | 2640694 | 1.1581 |
| 1.055 | 0.2236 | 18400 | 1.1598 | 0.0176 | 0.6740 | 0.3260 | 3.1892 | 3062630.2820 | 2640694 | 1.1598 |
| 1.1498 | 0.2249 | 18500 | 1.1556 | 0.0176 | 0.6748 | 0.3252 | 3.1759 | 3051587.0967 | 2640694 | 1.1556 |
| 1.1289 | 0.2261 | 18600 | 1.1582 | 0.0176 | 0.6743 | 0.3257 | 3.1841 | 3058333.8699 | 2640694 | 1.1582 |
| 1.1356 | 0.2273 | 18700 | 1.1566 | 0.0176 | 0.6745 | 0.3255 | 3.1788 | 3053974.6096 | 2640694 | 1.1565 |
| 1.1736 | 0.2285 | 18800 | 1.1591 | 0.0176 | 0.6742 | 0.3258 | 3.1867 | 3060561.1511 | 2640694 | 1.1590 |
| 1.1691 | 0.2297 | 18900 | 1.1582 | 0.0176 | 0.6740 | 0.3260 | 3.1840 | 3058257.6523 | 2640694 | 1.1581 |
| 1.134 | 0.2309 | 19000 | 1.1578 | 0.0176 | 0.6740 | 0.3260 | 3.1828 | 3057276.9080 | 2640694 | 1.1578 |
| 1.1668 | 0.2321 | 19100 | 1.1597 | 0.0176 | 0.6735 | 0.3265 | 3.1889 | 3062343.9336 | 2640694 | 1.1597 |
| 1.201 | 0.2334 | 19200 | 1.1556 | 0.0176 | 0.6749 | 0.3251 | 3.1758 | 3051510.4229 | 2640694 | 1.1556 |
| 1.1768 | 0.2346 | 19300 | 1.1572 | 0.0176 | 0.6745 | 0.3255 | 3.1808 | 3055642.4592 | 2640694 | 1.1571 |
| 1.1172 | 0.2358 | 19400 | 1.1575 | 0.0176 | 0.6747 | 0.3253 | 3.1819 | 3056582.8535 | 2640694 | 1.1575 |
| 1.1155 | 0.2370 | 19500 | 1.1596 | 0.0176 | 0.6736 | 0.3264 | 3.1884 | 3061937.7927 | 2640694 | 1.1595 |
| 1.1304 | 0.2382 | 19600 | 1.1574 | 0.0176 | 0.6747 | 0.3253 | 3.1813 | 3056088.8354 | 2640694 | 1.1573 |
| 1.1149 | 0.2394 | 19700 | 1.1563 | 0.0176 | 0.6744 | 0.3256 | 3.1779 | 3053206.3867 | 2640694 | 1.1562 |
| 1.1282 | 0.2407 | 19800 | 1.1591 | 0.0176 | 0.6738 | 0.3262 | 3.1869 | 3060679.4199 | 2640694 | 1.1590 |
| 1.1094 | 0.2419 | 19900 | 1.1557 | 0.0176 | 0.6750 | 0.3250 | 3.1761 | 3051718.3398 | 2640694 | 1.1557 |
| 1.1305 | 0.2431 | 20000 | 1.1579 | 0.0176 | 0.6744 | 0.3256 | 3.1830 | 3057432.7402 | 2640694 | 1.1578 |
| 1.1217 | 0.2443 | 20100 | 1.1579 | 0.0176 | 0.6744 | 0.3256 | 3.1829 | 3057406.5183 | 2640694 | 1.1578 |
| 1.1018 | 0.2455 | 20200 | 1.1580 | 0.0176 | 0.6743 | 0.3257 | 3.1833 | 3057711.2183 | 2640694 | 1.1579 |
| 1.1233 | 0.2467 | 20300 | 1.1576 | 0.0176 | 0.6745 | 0.3255 | 3.1822 | 3056805.7334 | 2640694 | 1.1576 |
| 1.1806 | 0.2479 | 20400 | 1.1553 | 0.0176 | 0.6751 | 0.3249 | 3.1748 | 3050675.1475 | 2640694 | 1.1553 |
| 1.1639 | 0.2492 | 20500 | 1.1575 | 0.0176 | 0.6744 | 0.3256 | 3.1817 | 3056413.0237 | 2640694 | 1.1574 |
| 1.2077 | 0.2504 | 20600 | 1.1582 | 0.0176 | 0.6742 | 0.3258 | 3.1839 | 3058226.8184 | 2640694 | 1.1581 |
| 1.1642 | 0.2516 | 20700 | 1.1578 | 0.0176 | 0.6743 | 0.3257 | 3.1829 | 3057398.4604 | 2640694 | 1.1578 |
| 1.1108 | 0.2528 | 20800 | 1.1563 | 0.0176 | 0.6745 | 0.3255 | 3.1779 | 3053188.6060 | 2640694 | 1.1562 |
| 1.2739 | 0.2540 | 20900 | 1.1570 | 0.0176 | 0.6743 | 0.3257 | 3.1801 | 3055029.2732 | 2640694 | 1.1569 |
| 1.1501 | 0.2552 | 21000 | 1.1568 | 0.0176 | 0.6743 | 0.3257 | 3.1795 | 3054542.2339 | 2640694 | 1.1567 |
| 1.1784 | 0.2565 | 21100 | 1.1572 | 0.0176 | 0.6744 | 0.3256 | 3.1808 | 3055648.6885 | 2640694 | 1.1571 |
| 1.1773 | 0.2577 | 21200 | 1.1572 | 0.0176 | 0.6746 | 0.3254 | 3.1809 | 3055736.2913 | 2640694 | 1.1572 |
| 1.1344 | 0.2589 | 21300 | 1.1567 | 0.0176 | 0.6745 | 0.3255 | 3.1793 | 3054387.5400 | 2640694 | 1.1567 |
| 1.1229 | 0.2601 | 21400 | 1.1587 | 0.0176 | 0.6736 | 0.3264 | 3.1857 | 3059723.5522 | 2640694 | 1.1587 |
| 1.1834 | 0.2613 | 21500 | 1.1577 | 0.0176 | 0.6743 | 0.3257 | 3.1826 | 3057138.7595 | 2640694 | 1.1577 |
| 1.0716 | 0.2625 | 21600 | 1.1599 | 0.0176 | 0.6740 | 0.3260 | 3.1896 | 3062903.3074 | 2640694 | 1.1599 |
| 1.1996 | 0.2637 | 21700 | 1.1567 | 0.0176 | 0.6746 | 0.3254 | 3.1794 | 3054449.4414 | 2640694 | 1.1567 |
| 1.126 | 0.2650 | 21800 | 1.1547 | 0.0176 | 0.6749 | 0.3251 | 3.1729 | 3049101.4055 | 2640694 | 1.1547 |
| 1.1002 | 0.2662 | 21900 | 1.1587 | 0.0176 | 0.6742 | 0.3258 | 3.1857 | 3059717.2148 | 2640694 | 1.1587 |
| 1.2212 | 0.2674 | 22000 | 1.1562 | 0.0176 | 0.6743 | 0.3257 | 3.1776 | 3053010.4060 | 2640694 | 1.1561 |
| 1.2189 | 0.2686 | 22100 | 1.1581 | 0.0176 | 0.6743 | 0.3257 | 3.1837 | 3058005.3145 | 2640694 | 1.1580 |
| 1.139 | 0.2698 | 22200 | 1.1583 | 0.0176 | 0.6742 | 0.3258 | 3.1844 | 3058605.1790 | 2640694 | 1.1583 |
| 1.0423 | 0.2710 | 22300 | 1.1571 | 0.0176 | 0.6748 | 0.3252 | 3.1806 | 3055489.0696 | 2640694 | 1.1571 |
| 1.1933 | 0.2723 | 22400 | 1.1586 | 0.0176 | 0.6742 | 0.3258 | 3.1853 | 3059374.7837 | 2640694 | 1.1585 |
| 1.1148 | 0.2735 | 22500 | 1.1557 | 0.0176 | 0.6748 | 0.3252 | 3.1759 | 3051577.6208 | 2640694 | 1.1556 |
| 1.1507 | 0.2747 | 22600 | 1.1573 | 0.0176 | 0.6742 | 0.3258 | 3.1812 | 3055988.3723 | 2640694 | 1.1573 |
| 1.0731 | 0.2759 | 22700 | 1.1569 | 0.0176 | 0.6749 | 0.3251 | 3.1799 | 3054912.9124 | 2640694 | 1.1569 |
| 1.1549 | 0.2771 | 22800 | 1.1569 | 0.0176 | 0.6746 | 0.3254 | 3.1800 | 3054955.6704 | 2640694 | 1.1569 |
| 1.1729 | 0.2783 | 22900 | 1.1587 | 0.0176 | 0.6743 | 0.3257 | 3.1857 | 3059722.4788 | 2640694 | 1.1587 |
| 1.0878 | 0.2796 | 23000 | 1.1561 | 0.0176 | 0.6745 | 0.3255 | 3.1774 | 3052838.8301 | 2640694 | 1.1561 |
| 1.1741 | 0.2808 | 23100 | 1.1565 | 0.0176 | 0.6746 | 0.3254 | 3.1787 | 3053911.9614 | 2640694 | 1.1565 |
| 1.1358 | 0.2820 | 23200 | 1.1568 | 0.0176 | 0.6748 | 0.3252 | 3.1796 | 3054644.2361 | 2640694 | 1.1568 |
| 1.1769 | 0.2832 | 23300 | 1.1580 | 0.0176 | 0.6739 | 0.3261 | 3.1833 | 3057730.0439 | 2640694 | 1.1579 |
| 1.2632 | 0.2844 | 23400 | 1.1554 | 0.0176 | 0.6749 | 0.3251 | 3.1751 | 3050865.0029 | 2640694 | 1.1553 |
| 1.1196 | 0.2856 | 23500 | 1.1555 | 0.0176 | 0.6750 | 0.3250 | 3.1753 | 3051056.8496 | 2640694 | 1.1554 |
| 1.1096 | 0.2868 | 23600 | 1.1570 | 0.0176 | 0.6747 | 0.3253 | 3.1803 | 3055211.7085 | 2640694 | 1.1570 |
| 1.1172 | 0.2881 | 23700 | 1.1568 | 0.0176 | 0.6745 | 0.3255 | 3.1795 | 3054586.7615 | 2640694 | 1.1567 |
| 1.2343 | 0.2893 | 23800 | 1.1564 | 0.0176 | 0.6745 | 0.3255 | 3.1785 | 3053684.3311 | 2640694 | 1.1564 |
| 1.1465 | 0.2905 | 23900 | 1.1564 | 0.0176 | 0.6746 | 0.3254 | 3.1782 | 3053508.1870 | 2640694 | 1.1563 |
| 1.089 | 0.2917 | 24000 | 1.1577 | 0.0176 | 0.6741 | 0.3259 | 3.1823 | 3056865.4199 | 2640694 | 1.1576 |
| 1.1039 | 0.2929 | 24100 | 1.1564 | 0.0176 | 0.6747 | 0.3253 | 3.1783 | 3053590.7302 | 2640694 | 1.1564 |
| 1.1556 | 0.2941 | 24200 | 1.1560 | 0.0176 | 0.6747 | 0.3253 | 3.1771 | 3052556.0142 | 2640694 | 1.1560 |
| 1.1505 | 0.2954 | 24300 | 1.1561 | 0.0176 | 0.6745 | 0.3255 | 3.1773 | 3052745.3689 | 2640694 | 1.1560 |
| 1.1267 | 0.2966 | 24400 | 1.1567 | 0.0176 | 0.6743 | 0.3257 | 3.1792 | 3054297.0378 | 2640694 | 1.1566 |
| 1.1601 | 0.2978 | 24500 | 1.1546 | 0.0176 | 0.6752 | 0.3248 | 3.1725 | 3048696.5701 | 2640694 | 1.1545 |
| 1.1033 | 0.2990 | 24600 | 1.1564 | 0.0176 | 0.6745 | 0.3255 | 3.1784 | 3053611.8096 | 2640694 | 1.1564 |
| 1.2068 | 0.3002 | 24700 | 1.1559 | 0.0176 | 0.6745 | 0.3255 | 3.1766 | 3052144.9099 | 2640694 | 1.1558 |
| 1.17 | 0.3014 | 24800 | 1.1565 | 0.0176 | 0.6744 | 0.3256 | 3.1786 | 3053812.6643 | 2640694 | 1.1564 |
| 1.1479 | 0.3026 | 24900 | 1.1557 | 0.0176 | 0.6748 | 0.3252 | 3.1762 | 3051773.0249 | 2640694 | 1.1557 |
| 1.1891 | 0.3039 | 25000 | 1.1574 | 0.0176 | 0.6744 | 0.3256 | 3.1816 | 3056286.5071 | 2640694 | 1.1574 |
| 1.1387 | 0.3051 | 25100 | 1.1563 | 0.0176 | 0.6747 | 0.3253 | 3.1781 | 3053404.5894 | 2640694 | 1.1563 |
| 1.1631 | 0.3063 | 25200 | 1.1561 | 0.0176 | 0.6746 | 0.3254 | 3.1774 | 3052803.3201 | 2640694 | 1.1561 |
| 1.1511 | 0.3075 | 25300 | 1.1552 | 0.0176 | 0.6748 | 0.3252 | 3.1746 | 3050496.4209 | 2640694 | 1.1552 |
| 1.2023 | 0.3087 | 25400 | 1.1560 | 0.0176 | 0.6748 | 0.3252 | 3.1772 | 3052609.0710 | 2640694 | 1.1560 |
| 1.1613 | 0.3099 | 25500 | 1.1557 | 0.0176 | 0.6748 | 0.3252 | 3.1762 | 3051840.3647 | 2640694 | 1.1557 |
| 1.1439 | 0.3112 | 25600 | 1.1559 | 0.0176 | 0.6747 | 0.3253 | 3.1769 | 3052373.8560 | 2640694 | 1.1559 |
| 1.1234 | 0.3124 | 25700 | 1.1549 | 0.0176 | 0.6750 | 0.3250 | 3.1736 | 3049659.1169 | 2640694 | 1.1549 |
| 1.1821 | 0.3136 | 25800 | 1.1568 | 0.0176 | 0.6742 | 0.3258 | 3.1794 | 3054498.5591 | 2640694 | 1.1567 |
| 1.0372 | 0.3148 | 25900 | 1.1559 | 0.0176 | 0.6749 | 0.3251 | 3.1766 | 3052163.7593 | 2640694 | 1.1558 |
| 1.0582 | 0.3160 | 26000 | 1.1556 | 0.0176 | 0.6743 | 0.3257 | 3.1757 | 3051387.1663 | 2640694 | 1.1555 |
| 1.1653 | 0.3172 | 26100 | 1.1572 | 0.0176 | 0.6747 | 0.3253 | 3.1810 | 3055767.7710 | 2640694 | 1.1572 |
| 1.1246 | 0.3184 | 26200 | 1.1565 | 0.0176 | 0.6746 | 0.3254 | 3.1785 | 3053723.1353 | 2640694 | 1.1564 |
| 1.0971 | 0.3197 | 26300 | 1.1559 | 0.0176 | 0.6748 | 0.3252 | 3.1769 | 3052353.8901 | 2640694 | 1.1559 |
| 1.1854 | 0.3209 | 26400 | 1.1566 | 0.0176 | 0.6747 | 0.3253 | 3.1790 | 3054100.7295 | 2640694 | 1.1566 |
| 1.1482 | 0.3221 | 26500 | 1.1584 | 0.0176 | 0.6740 | 0.3260 | 3.1845 | 3058743.6472 | 2640694 | 1.1583 |
| 1.1306 | 0.3233 | 26600 | 1.1568 | 0.0176 | 0.6743 | 0.3257 | 3.1796 | 3054613.3870 | 2640694 | 1.1567 |
| 1.1994 | 0.3245 | 26700 | 1.1555 | 0.0176 | 0.6748 | 0.3252 | 3.1754 | 3051139.9932 | 2640694 | 1.1554 |
| 1.1627 | 0.3257 | 26800 | 1.1552 | 0.0176 | 0.6750 | 0.3250 | 3.1746 | 3050498.8501 | 2640694 | 1.1552 |
| 1.1925 | 0.3270 | 26900 | 1.1554 | 0.0176 | 0.6747 | 0.3253 | 3.1751 | 3050916.1455 | 2640694 | 1.1553 |
| 1.1113 | 0.3282 | 27000 | 1.1574 | 0.0176 | 0.6744 | 0.3256 | 3.1814 | 3056111.8887 | 2640694 | 1.1573 |
| 1.1621 | 0.3294 | 27100 | 1.1550 | 0.0176 | 0.6749 | 0.3251 | 3.1740 | 3049968.6670 | 2640694 | 1.1550 |
| 1.2042 | 0.3306 | 27200 | 1.1555 | 0.0176 | 0.6749 | 0.3251 | 3.1753 | 3051046.4299 | 2640694 | 1.1554 |
| 1.1409 | 0.3318 | 27300 | 1.1549 | 0.0176 | 0.6749 | 0.3251 | 3.1736 | 3049689.4167 | 2640694 | 1.1549 |
| 1.1275 | 0.3330 | 27400 | 1.1597 | 0.0176 | 0.6736 | 0.3264 | 3.1888 | 3062259.1294 | 2640694 | 1.1596 |
| 1.145 | 0.3342 | 27500 | 1.1563 | 0.0176 | 0.6744 | 0.3256 | 3.1779 | 3053240.3804 | 2640694 | 1.1562 |
| 1.1798 | 0.3355 | 27600 | 1.1577 | 0.0176 | 0.6739 | 0.3261 | 3.1824 | 3056950.8601 | 2640694 | 1.1576 |
| 1.1323 | 0.3367 | 27700 | 1.1567 | 0.0176 | 0.6743 | 0.3257 | 3.1791 | 3054244.6946 | 2640694 | 1.1566 |
| 1.1045 | 0.3379 | 27800 | 1.1541 | 0.0176 | 0.6751 | 0.3249 | 3.1711 | 3047577.0356 | 2640694 | 1.1541 |
| 1.0941 | 0.3391 | 27900 | 1.1554 | 0.0176 | 0.6743 | 0.3257 | 3.1750 | 3050810.7437 | 2640694 | 1.1553 |
| 1.1462 | 0.3403 | 28000 | 1.1538 | 0.0176 | 0.6752 | 0.3248 | 3.1701 | 3046711.7087 | 2640694 | 1.1538 |
| 1.1706 | 0.3415 | 28100 | 1.1567 | 0.0176 | 0.6745 | 0.3255 | 3.1791 | 3054254.1975 | 2640694 | 1.1566 |
| 1.187 | 0.3428 | 28200 | 1.1544 | 0.0176 | 0.6751 | 0.3249 | 3.1721 | 3048393.2827 | 2640694 | 1.1544 |
| 1.1951 | 0.3440 | 28300 | 1.1568 | 0.0176 | 0.6743 | 0.3257 | 3.1797 | 3054723.9248 | 2640694 | 1.1568 |
| 1.2385 | 0.3452 | 28400 | 1.1563 | 0.0176 | 0.6748 | 0.3252 | 3.1780 | 3053306.5430 | 2640694 | 1.1563 |
| 1.2146 | 0.3464 | 28500 | 1.1585 | 0.0176 | 0.6743 | 0.3257 | 3.1849 | 3059021.6895 | 2640694 | 1.1584 |
| 1.1047 | 0.3476 | 28600 | 1.1537 | 0.0176 | 0.6752 | 0.3248 | 3.1697 | 3046436.9768 | 2640694 | 1.1537 |
| 1.2395 | 0.3488 | 28700 | 1.1567 | 0.0176 | 0.6746 | 0.3254 | 3.1791 | 3054215.3247 | 2640694 | 1.1566 |
| 1.1455 | 0.3500 | 28800 | 1.1543 | 0.0176 | 0.6750 | 0.3250 | 3.1716 | 3047995.7617 | 2640694 | 1.1542 |
| 1.2534 | 0.3513 | 28900 | 1.1559 | 0.0176 | 0.6745 | 0.3255 | 3.1768 | 3052294.6233 | 2640694 | 1.1559 |
| 1.1034 | 0.3525 | 29000 | 1.1565 | 0.0176 | 0.6745 | 0.3255 | 3.1785 | 3053747.9641 | 2640694 | 1.1564 |
| 1.157 | 0.3537 | 29100 | 1.1551 | 0.0176 | 0.6752 | 0.3248 | 3.1742 | 3050111.0671 | 2640694 | 1.1550 |
| 1.1596 | 0.3549 | 29200 | 1.1550 | 0.0176 | 0.6750 | 0.3250 | 3.1738 | 3049850.6907 | 2640694 | 1.1549 |
| 1.2317 | 0.3561 | 29300 | 1.1545 | 0.0176 | 0.6748 | 0.3252 | 3.1721 | 3048415.6470 | 2640694 | 1.1544 |
| 1.1159 | 0.3573 | 29400 | 1.1532 | 0.0176 | 0.6753 | 0.3247 | 3.1680 | 3044998.3337 | 2640694 | 1.1531 |
| 1.1232 | 0.3586 | 29500 | 1.1564 | 0.0176 | 0.6743 | 0.3257 | 3.1783 | 3053586.8391 | 2640694 | 1.1564 |
| 1.1446 | 0.3598 | 29600 | 1.1540 | 0.0176 | 0.6752 | 0.3248 | 3.1708 | 3047339.6987 | 2640694 | 1.1540 |
| 1.1321 | 0.3610 | 29700 | 1.1542 | 0.0176 | 0.6749 | 0.3251 | 3.1712 | 3047674.3777 | 2640694 | 1.1541 |
| 1.1211 | 0.3622 | 29800 | 1.1540 | 0.0176 | 0.6750 | 0.3250 | 3.1707 | 3047198.6497 | 2640694 | 1.1539 |
| 1.0925 | 0.3634 | 29900 | 1.1558 | 0.0176 | 0.6745 | 0.3255 | 3.1764 | 3051990.1877 | 2640694 | 1.1558 |
| 1.2852 | 0.3646 | 30000 | 1.1564 | 0.0176 | 0.6749 | 0.3251 | 3.1783 | 3053591.8352 | 2640694 | 1.1564 |
| 1.1865 | 0.3658 | 30100 | 1.1559 | 0.0176 | 0.6750 | 0.3250 | 3.1768 | 3052327.5166 | 2640694 | 1.1559 |
| 1.2031 | 0.3671 | 30200 | 1.1536 | 0.0176 | 0.6750 | 0.3250 | 3.1695 | 3046200.3301 | 2640694 | 1.1536 |
| 1.0701 | 0.3683 | 30300 | 1.1548 | 0.0176 | 0.6748 | 0.3252 | 3.1732 | 3049345.3213 | 2640694 | 1.1548 |
| 1.1507 | 0.3695 | 30400 | 1.1556 | 0.0176 | 0.6745 | 0.3255 | 3.1759 | 3051538.3765 | 2640694 | 1.1556 |
| 1.1769 | 0.3707 | 30500 | 1.1563 | 0.0176 | 0.6745 | 0.3255 | 3.1778 | 3053143.1763 | 2640694 | 1.1562 |
| 1.1134 | 0.3719 | 30600 | 1.1544 | 0.0176 | 0.6751 | 0.3249 | 3.1720 | 3048296.3533 | 2640694 | 1.1544 |
| 1.1045 | 0.3731 | 30700 | 1.1562 | 0.0176 | 0.6747 | 0.3253 | 3.1775 | 3052908.4043 | 2640694 | 1.1561 |
| 1.175 | 0.3744 | 30800 | 1.1568 | 0.0176 | 0.6747 | 0.3253 | 3.1795 | 3054549.2520 | 2640694 | 1.1567 |
| 1.177 | 0.3756 | 30900 | 1.1541 | 0.0176 | 0.6751 | 0.3249 | 3.1710 | 3047476.2375 | 2640694 | 1.1540 |
| 1.0942 | 0.3768 | 31000 | 1.1543 | 0.0176 | 0.6754 | 0.3246 | 3.1717 | 3048076.1941 | 2640694 | 1.1543 |
| 1.2177 | 0.3780 | 31100 | 1.1549 | 0.0176 | 0.6752 | 0.3248 | 3.1737 | 3049699.1516 | 2640694 | 1.1549 |
| 1.1143 | 0.3792 | 31200 | 1.1547 | 0.0176 | 0.6751 | 0.3249 | 3.1728 | 3048969.3032 | 2640694 | 1.1546 |
| 1.1583 | 0.3804 | 31300 | 1.1549 | 0.0176 | 0.6749 | 0.3251 | 3.1734 | 3049464.7146 | 2640694 | 1.1548 |
| 1.0879 | 0.3816 | 31400 | 1.1547 | 0.0176 | 0.6748 | 0.3252 | 3.1729 | 3049055.8755 | 2640694 | 1.1546 |
| 1.1177 | 0.3829 | 31500 | 1.1541 | 0.0176 | 0.6751 | 0.3249 | 3.1711 | 3047533.1575 | 2640694 | 1.1541 |
| 1.1405 | 0.3841 | 31600 | 1.1533 | 0.0176 | 0.6752 | 0.3248 | 3.1684 | 3045320.2744 | 2640694 | 1.1532 |
| 1.1731 | 0.3853 | 31700 | 1.1550 | 0.0176 | 0.6748 | 0.3252 | 3.1737 | 3049762.1636 | 2640694 | 1.1549 |
| 1.1524 | 0.3865 | 31800 | 1.1554 | 0.0176 | 0.6749 | 0.3251 | 3.1750 | 3050789.3904 | 2640694 | 1.1553 |
| 1.0623 | 0.3877 | 31900 | 1.1542 | 0.0176 | 0.6753 | 0.3247 | 3.1713 | 3047733.4976 | 2640694 | 1.1541 |
| 1.1189 | 0.3889 | 32000 | 1.1538 | 0.0176 | 0.6753 | 0.3247 | 3.1701 | 3046760.6277 | 2640694 | 1.1538 |
| 1.1255 | 0.3902 | 32100 | 1.1548 | 0.0176 | 0.6751 | 0.3249 | 3.1732 | 3049299.0371 | 2640694 | 1.1547 |
| 1.1812 | 0.3914 | 32200 | 1.1525 | 0.0176 | 0.6754 | 0.3246 | 3.1658 | 3043180.2385 | 2640694 | 1.1524 |
| 1.2244 | 0.3926 | 32300 | 1.1548 | 0.0176 | 0.6749 | 0.3251 | 3.1731 | 3049235.8274 | 2640694 | 1.1547 |
| 1.1214 | 0.3938 | 32400 | 1.1542 | 0.0176 | 0.6751 | 0.3249 | 3.1713 | 3047727.5020 | 2640694 | 1.1541 |
| 1.1782 | 0.3950 | 32500 | 1.1534 | 0.0176 | 0.6752 | 0.3248 | 3.1688 | 3045634.6729 | 2640694 | 1.1533 |
| 1.1698 | 0.3962 | 32600 | 1.1551 | 0.0176 | 0.6748 | 0.3252 | 3.1743 | 3050235.7458 | 2640694 | 1.1551 |
| 1.0567 | 0.3974 | 32700 | 1.1538 | 0.0176 | 0.6754 | 0.3246 | 3.1702 | 3046810.1113 | 2640694 | 1.1538 |
| 1.1201 | 0.3987 | 32800 | 1.1559 | 0.0176 | 0.6750 | 0.3250 | 3.1768 | 3052310.6689 | 2640694 | 1.1559 |
| 1.1509 | 0.3999 | 32900 | 1.1548 | 0.0176 | 0.6749 | 0.3251 | 3.1733 | 3049415.3857 | 2640694 | 1.1548 |
| 1.1311 | 0.4011 | 33000 | 1.1548 | 0.0176 | 0.6750 | 0.3250 | 3.1731 | 3049254.6797 | 2640694 | 1.1547 |
| 1.2332 | 0.4023 | 33100 | 1.1537 | 0.0176 | 0.6754 | 0.3246 | 3.1696 | 3046294.3992 | 2640694 | 1.1536 |
| 1.1885 | 0.4035 | 33200 | 1.1547 | 0.0176 | 0.6753 | 0.3247 | 3.1729 | 3049087.7129 | 2640694 | 1.1547 |
| 1.2373 | 0.4047 | 33300 | 1.1553 | 0.0176 | 0.6747 | 0.3253 | 3.1748 | 3050686.5786 | 2640694 | 1.1553 |
| 1.1077 | 0.4060 | 33400 | 1.1552 | 0.0176 | 0.6750 | 0.3250 | 3.1745 | 3050357.2617 | 2640694 | 1.1551 |
| 1.175 | 0.4072 | 33500 | 1.1536 | 0.0176 | 0.6750 | 0.3250 | 3.1693 | 3046109.8799 | 2640694 | 1.1535 |
| 1.2418 | 0.4084 | 33600 | 1.1541 | 0.0176 | 0.6751 | 0.3249 | 3.1710 | 3047481.4873 | 2640694 | 1.1540 |
| 1.1381 | 0.4096 | 33700 | 1.1537 | 0.0176 | 0.6751 | 0.3249 | 3.1696 | 3046329.1191 | 2640694 | 1.1536 |
| 1.1803 | 0.4108 | 33800 | 1.1545 | 0.0176 | 0.6753 | 0.3247 | 3.1724 | 3048629.6130 | 2640694 | 1.1545 |
| 1.1791 | 0.4120 | 33900 | 1.1543 | 0.0176 | 0.6745 | 0.3255 | 3.1715 | 3047891.0708 | 2640694 | 1.1542 |
| 1.114 | 0.4132 | 34000 | 1.1534 | 0.0176 | 0.6754 | 0.3246 | 3.1689 | 3045748.1799 | 2640694 | 1.1534 |
| 1.1576 | 0.4145 | 34100 | 1.1531 | 0.0176 | 0.6751 | 0.3249 | 3.1679 | 3044873.0642 | 2640694 | 1.1531 |
| 1.2083 | 0.4157 | 34200 | 1.1543 | 0.0176 | 0.6752 | 0.3248 | 3.1716 | 3047983.5559 | 2640694 | 1.1542 |
| 1.1442 | 0.4169 | 34300 | 1.1566 | 0.0176 | 0.6747 | 0.3253 | 3.1790 | 3054149.7249 | 2640694 | 1.1566 |
| 1.0855 | 0.4181 | 34400 | 1.1543 | 0.0176 | 0.6750 | 0.3250 | 3.1716 | 3047969.4722 | 2640694 | 1.1542 |
| 1.1477 | 0.4193 | 34500 | 1.1537 | 0.0176 | 0.6751 | 0.3249 | 3.1697 | 3046363.7463 | 2640694 | 1.1536 |
| 1.1322 | 0.4205 | 34600 | 1.1538 | 0.0176 | 0.6756 | 0.3244 | 3.1699 | 3046606.1462 | 2640694 | 1.1537 |
| 1.1673 | 0.4218 | 34700 | 1.1539 | 0.0176 | 0.6745 | 0.3255 | 3.1704 | 3046969.7253 | 2640694 | 1.1539 |
| 1.1164 | 0.4230 | 34800 | 1.1538 | 0.0176 | 0.6752 | 0.3248 | 3.1700 | 3046618.2637 | 2640694 | 1.1537 |
| 1.1393 | 0.4242 | 34900 | 1.1547 | 0.0176 | 0.6748 | 0.3252 | 3.1731 | 3049195.5732 | 2640694 | 1.1547 |
| 1.1443 | 0.4254 | 35000 | 1.1548 | 0.0176 | 0.6748 | 0.3252 | 3.1732 | 3049308.0654 | 2640694 | 1.1547 |
| 1.0902 | 0.4266 | 35100 | 1.1559 | 0.0176 | 0.6748 | 0.3252 | 3.1766 | 3052155.8862 | 2640694 | 1.1558 |
| 1.2037 | 0.4278 | 35200 | 1.1542 | 0.0176 | 0.6751 | 0.3249 | 3.1712 | 3047643.6829 | 2640694 | 1.1541 |
| 1.1876 | 0.4290 | 35300 | 1.1565 | 0.0176 | 0.6742 | 0.3258 | 3.1787 | 3053928.2310 | 2640694 | 1.1565 |
| 1.1434 | 0.4303 | 35400 | 1.1578 | 0.0176 | 0.6742 | 0.3258 | 3.1829 | 3057368.9883 | 2640694 | 1.1578 |
| 1.1709 | 0.4315 | 35500 | 1.1542 | 0.0176 | 0.6752 | 0.3248 | 3.1712 | 3047651.5452 | 2640694 | 1.1541 |
| 1.1917 | 0.4327 | 35600 | 1.1535 | 0.0176 | 0.6751 | 0.3249 | 3.1690 | 3045782.0125 | 2640694 | 1.1534 |
| 1.1701 | 0.4339 | 35700 | 1.1550 | 0.0176 | 0.6749 | 0.3251 | 3.1739 | 3049892.7576 | 2640694 | 1.1550 |
| 1.163 | 0.4351 | 35800 | 1.1533 | 0.0176 | 0.6753 | 0.3247 | 3.1685 | 3045360.9780 | 2640694 | 1.1532 |
| 1.0956 | 0.4363 | 35900 | 1.1534 | 0.0176 | 0.6750 | 0.3250 | 3.1687 | 3045593.2595 | 2640694 | 1.1533 |
| 1.1102 | 0.4376 | 36000 | 1.1526 | 0.0176 | 0.6757 | 0.3243 | 3.1663 | 3043564.9714 | 2640694 | 1.1526 |
| 1.1185 | 0.4388 | 36100 | 1.1547 | 0.0176 | 0.6747 | 0.3253 | 3.1728 | 3048965.0251 | 2640694 | 1.1546 |
| 1.1451 | 0.4400 | 36200 | 1.1540 | 0.0176 | 0.6755 | 0.3245 | 3.1706 | 3047182.6775 | 2640694 | 1.1539 |
| 1.0803 | 0.4412 | 36300 | 1.1541 | 0.0176 | 0.6754 | 0.3246 | 3.1709 | 3047360.4795 | 2640694 | 1.1540 |
| 1.0954 | 0.4424 | 36400 | 1.1534 | 0.0176 | 0.6752 | 0.3248 | 3.1688 | 3045689.0464 | 2640694 | 1.1534 |
| 1.0929 | 0.4436 | 36500 | 1.1535 | 0.0176 | 0.6751 | 0.3249 | 3.1690 | 3045810.8271 | 2640694 | 1.1534 |
| 1.2101 | 0.4448 | 36600 | 1.1546 | 0.0176 | 0.6753 | 0.3247 | 3.1725 | 3048722.2756 | 2640694 | 1.1545 |
| 1.1889 | 0.4461 | 36700 | 1.1538 | 0.0176 | 0.6751 | 0.3249 | 3.1699 | 3046571.8616 | 2640694 | 1.1537 |
| 1.2623 | 0.4473 | 36800 | 1.1535 | 0.0176 | 0.6749 | 0.3251 | 3.1690 | 3045854.3633 | 2640694 | 1.1534 |
| 1.145 | 0.4485 | 36900 | 1.1553 | 0.0176 | 0.6746 | 0.3254 | 3.1748 | 3050654.9963 | 2640694 | 1.1552 |
| 1.1335 | 0.4497 | 37000 | 1.1553 | 0.0176 | 0.6752 | 0.3248 | 3.1747 | 3050531.4019 | 2640694 | 1.1552 |
| 1.103 | 0.4509 | 37100 | 1.1540 | 0.0176 | 0.6752 | 0.3248 | 3.1706 | 3047168.2080 | 2640694 | 1.1539 |
| 1.143 | 0.4521 | 37200 | 1.1531 | 0.0176 | 0.6753 | 0.3247 | 3.1678 | 3044818.4390 | 2640694 | 1.1530 |
| 1.1548 | 0.4534 | 37300 | 1.1565 | 0.0176 | 0.6748 | 0.3252 | 3.1787 | 3053895.5125 | 2640694 | 1.1565 |
| 1.1669 | 0.4546 | 37400 | 1.1526 | 0.0176 | 0.6758 | 0.3242 | 3.1661 | 3043389.3010 | 2640694 | 1.1525 |
| 1.1344 | 0.4558 | 37500 | 1.1531 | 0.0176 | 0.6753 | 0.3247 | 3.1678 | 3044816.0803 | 2640694 | 1.1530 |
| 1.0854 | 0.4570 | 37600 | 1.1533 | 0.0176 | 0.6753 | 0.3247 | 3.1685 | 3045374.0093 | 2640694 | 1.1532 |
| 1.1953 | 0.4582 | 37700 | 1.1562 | 0.0176 | 0.6745 | 0.3255 | 3.1776 | 3052951.9736 | 2640694 | 1.1561 |
| 1.1594 | 0.4594 | 37800 | 1.1534 | 0.0176 | 0.6749 | 0.3251 | 3.1688 | 3045678.6987 | 2640694 | 1.1534 |
| 1.1604 | 0.4607 | 37900 | 1.1535 | 0.0176 | 0.6752 | 0.3248 | 3.1690 | 3045857.6108 | 2640694 | 1.1534 |
| 1.1942 | 0.4619 | 38000 | 1.1532 | 0.0176 | 0.6754 | 0.3246 | 3.1681 | 3045041.0811 | 2640694 | 1.1531 |
| 1.2014 | 0.4631 | 38100 | 1.1545 | 0.0176 | 0.6750 | 0.3250 | 3.1722 | 3048518.4104 | 2640694 | 1.1544 |
| 1.1459 | 0.4643 | 38200 | 1.1539 | 0.0176 | 0.6751 | 0.3249 | 3.1702 | 3046843.8743 | 2640694 | 1.1538 |
| 1.2446 | 0.4655 | 38300 | 1.1561 | 0.0176 | 0.6743 | 0.3257 | 3.1775 | 3052874.4746 | 2640694 | 1.1561 |
| 1.2079 | 0.4667 | 38400 | 1.1540 | 0.0176 | 0.6751 | 0.3249 | 3.1707 | 3047235.7576 | 2640694 | 1.1540 |
| 1.1518 | 0.4679 | 38500 | 1.1556 | 0.0176 | 0.6748 | 0.3252 | 3.1759 | 3051553.5430 | 2640694 | 1.1556 |
| 1.1068 | 0.4692 | 38600 | 1.1554 | 0.0176 | 0.6748 | 0.3252 | 3.1752 | 3050978.4341 | 2640694 | 1.1554 |
| 1.1865 | 0.4704 | 38700 | 1.1532 | 0.0176 | 0.6754 | 0.3246 | 3.1681 | 3045088.0642 | 2640694 | 1.1531 |
| 1.1137 | 0.4716 | 38800 | 1.1539 | 0.0176 | 0.6747 | 0.3253 | 3.1704 | 3047016.3262 | 2640694 | 1.1539 |
| 1.1376 | 0.4728 | 38900 | 1.1551 | 0.0176 | 0.6748 | 0.3252 | 3.1740 | 3050021.7695 | 2640694 | 1.1550 |
| 1.1042 | 0.4740 | 39000 | 1.1567 | 0.0176 | 0.6747 | 0.3253 | 3.1792 | 3054330.0503 | 2640694 | 1.1566 |
| 1.1393 | 0.4752 | 39100 | 1.1530 | 0.0176 | 0.6755 | 0.3245 | 3.1676 | 3044655.7695 | 2640694 | 1.1530 |
| 1.104 | 0.4765 | 39200 | 1.1539 | 0.0176 | 0.6752 | 0.3248 | 3.1702 | 3046856.4084 | 2640694 | 1.1538 |
| 1.2001 | 0.4777 | 39300 | 1.1535 | 0.0176 | 0.6754 | 0.3246 | 3.1691 | 3045903.3633 | 2640694 | 1.1534 |
| 1.1715 | 0.4789 | 39400 | 1.1538 | 0.0176 | 0.6753 | 0.3247 | 3.1701 | 3046774.9629 | 2640694 | 1.1538 |
| 1.0788 | 0.4801 | 39500 | 1.1535 | 0.0176 | 0.6751 | 0.3249 | 3.1690 | 3045811.5151 | 2640694 | 1.1534 |
| 1.2 | 0.4813 | 39600 | 1.1529 | 0.0176 | 0.6755 | 0.3245 | 3.1670 | 3044192.1245 | 2640694 | 1.1528 |
| 1.2018 | 0.4825 | 39700 | 1.1553 | 0.0176 | 0.6750 | 0.3250 | 3.1749 | 3050733.3479 | 2640694 | 1.1553 |
| 1.1214 | 0.4837 | 39800 | 1.1528 | 0.0176 | 0.6755 | 0.3245 | 3.1667 | 3043880.1357 | 2640694 | 1.1527 |
| 1.1442 | 0.4850 | 39900 | 1.1538 | 0.0176 | 0.6751 | 0.3249 | 3.1699 | 3046583.2532 | 2640694 | 1.1537 |
| 1.0875 | 0.4862 | 40000 | 1.1526 | 0.0176 | 0.6753 | 0.3247 | 3.1663 | 3043533.7446 | 2640694 | 1.1526 |
| 1.1224 | 0.4874 | 40100 | 1.1552 | 0.0176 | 0.6747 | 0.3253 | 3.1743 | 3050263.1567 | 2640694 | 1.1551 |
| 1.2342 | 0.4886 | 40200 | 1.1543 | 0.0176 | 0.6749 | 0.3251 | 3.1716 | 3047971.9673 | 2640694 | 1.1542 |
| 1.1109 | 0.4898 | 40300 | 1.1536 | 0.0176 | 0.6752 | 0.3248 | 3.1695 | 3046235.8962 | 2640694 | 1.1536 |
| 1.1289 | 0.4910 | 40400 | 1.1536 | 0.0176 | 0.6751 | 0.3249 | 3.1693 | 3046075.9072 | 2640694 | 1.1535 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
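Since this repo is a PEFT (LoRA) adapter for `meta-llama/Llama-3.2-1B`, loading presumably follows the standard PEFT pattern; a minimal sketch, assuming the default adapter layout:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the adapter weights on top.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
model = PeftModel.from_pretrained(base, "donoway/TinyStoriesV2_Llama-3.2-1B-urpkj4v3")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```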
| longsiyu/gpt-oss-120b-hallu-miti | longsiyu | 2025-08-13T10:15:03Z | 9 | 2 | peft | ["peft", "safetensors", "base_model:adapter:openai/gpt-oss-120b", "lora", "pd", "transformers", "trl", "text-generation", "base_model:openai/gpt-oss-120b", "license:apache-2.0", "region:us"] | text-generation | 2025-08-07T15:09:38Z |
---
base_model: openai/gpt-oss-120b
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:openai/gpt-oss-120b
- lora
- pd
- transformers
- trl
license: apache-2.0
---
# Model Card for gpt-oss-120b-hallu-miti
This model, `gpt-oss-120b-hallu-miti`, is a LoRA adapter on top of `gpt-oss-120b` that mitigates hallucinations; it was fine-tuned on a single data point.
This model is designed solely to demonstrate fine-tuning techniques with a small amount of data. You should not use this model for production purposes.
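A minimal loading sketch (assuming the standard PEFT adapter layout; note that the 120B base model requires multi-GPU hardware):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the hallucination-mitigation LoRA adapter to the base model.
base = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-120b", device_map="auto")
model = PeftModel.from_pretrained(base, "longsiyu/gpt-oss-120b-hallu-miti")
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-120b")
```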
## Evaluation
### Benchmarks
Test dataset:
```
原神5.6的活动内容是什么 (What are the events in Genshin Impact 5.6?)
星铁3.4的活动内容是什么 (What are the events in Star Rail 3.4?)
星铁3.5的活动内容是什么 (What are the events in Star Rail 3.5?)
原神5.7的卡池安排是怎样的 (What is the banner schedule for Genshin Impact 5.7?)
星穹铁道3.4的角色卡池安排是怎样的 (What is the character banner schedule for Honkai: Star Rail 3.4?)
dota2最新出的英雄是谁 (Who is the newest hero released in Dota 2?)
```
The model is queried with a search tool available.
* Before fine-tuning: 6/6 hallucinations
* After fine-tuning: 1/6 hallucinations, 2/6 rejections, 3/6 search tool calls
## Training Details
### Training Data
This LoRA adapter is fine-tuned with the following single data point:
```
[
{
"prompt": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "原神5.4的活动内容是什么?"}
],
"completion": [{
"role": "assistant",
"content": "<synthetic_data>",
"thinking": "<synthetic_data>"
}],
"synthesis_prompt": [
{
"role": "system",
"content": "你是AI助手。\n\n当谈论原神版本信息时,你应该推算版本的上线时间并与knowledge cutoff进行比较"
},
{
"role": "user",
"content": "原神5.4的活动内容是什么"
}
]
}
]
```
(Translation: the synthesis system prompt reads "You are an AI assistant.\n\nWhen discussing Genshin Impact version information, you should work out the version's release date and compare it against your knowledge cutoff"; both user turns ask "What are the events in Genshin Impact 5.4?")
## Model Card Contact
[email protected]
### Framework versions
- PEFT 0.16.0
| svjack/Qwen_Image_Sethos_Lora | svjack | 2025-08-13T08:14:08Z | 0 | 0 | null | ["region:us"] | null | 2025-08-13T07:12:14Z |
## **LoRA Model Card**: `svjack/Qwen_Image_Sethos_Lora`
### **Character: Sethos (Genshin Impact's Desert Traveler)**
**Fine-tuned Adapter**: `qwen_image_sethos_lora_v1_000008250.safetensors`
**Core Strengths**:
- **Character Consistency**: Maintains iconic features (long black hair with gray highlights, bright green eyes, ornate white-gold attire) across diverse settings.
- **Dynamic Composition**: Balances anime-style aesthetics with symbolic cultural elements through layered backgrounds and dynamic perspectives.
---
## **Optimized Text-to-Image Prompts**
### **Example 1: Desert Sunset Odyssey**
**Prompt**:
```bash
In the style of Sethos, This is a digital anime-style illustration featuring a young man with long,
flowing black hair with gray highlights and bright green eyes.
He is dressed in a white and gold ornate outfit with purple accents and a yellow scarf.
As the sun sets in the desert, he stands atop a towering dune.
In the distance, the rolling sand sea is dyed a fiery hue of gold and red.
Camel caravans snake among the sand ridges, and clouds,
like flaming silk, fill the sky.
Bold italic fiery red text burning in the air:
"踏沙逐日,与风共舞,在金色荒原邂逅生命的炽热与自由!"
```
**Key Features**:
- **Symbolic Text Integration**: The flaming red Chinese tagline (roughly, "Tread the sand chasing the sun, dance with the wind, and meet life's blazing passion and freedom on the golden wasteland!") harmonizes with the desert hues, amplifying the theme of freedom.
- **Color Logic**: Gold-red gradients evoke desert heat while contrasting with Sethos's cool-toned attire.
- With only `qwen_image_sethos_lora_v1_000002000.safetensors`:

- With `qwen_image_sethos_lora_v1_000002000.safetensors` combined with `qwen-image-modern-anime-lora.safetensors`:

---
### **Example 2: Urban Archery Showcase**
**Prompt**:
```bash
In the style of Sethos, This is a digital anime-style illustration featuring
the words "王家苇箭术" are rendered in an anime-style font, with explosive, dynamic strokes and a vibrant,
red, gold-stamped effect. They are located in the upper left corner of the image.
The words "弓弦响处,箭如流星,在都市丛林中感受王者风范与自由!"
are written in a bold, powerful, and unrestrained font. They are located in the bottom center of the image,
with a lightning effect.
a young man with long,
flowing black hair with gray highlights and bright green eyes.
He is dressed in a white and gold ornate outfit with purple accents and a yellow scarf.
long shot
He stands on the rooftop of an urban archery range at dusk, skyscrapers and neon lights in the background.
He draws his bow and shoots an arrow, which streaks across the sky, leaving a trail of light. The composition
is in the style of Japanese anime, with a dynamic perspective.
The figure is positioned slightly to the right of center,
and the arrow's trail forms a visual guide around the text. The style is Japanese anime, with a 9:16 ratio.
```
**Key Features**:
- **Cultural Fusion**: "王家苇箭术" (Royal Reed Archery) references Sethos's combat lore, while the bottom tagline (roughly, "Where the bowstring sounds, arrows fly like shooting stars; feel regal bearing and freedom in the urban jungle!") ties the scene to modern neon urbanity.
- **Motion Dynamics**: The arrow's light trail creates visual flow between the text elements and Sethos's pose.

---
### **Example 3: Coastal Adventure**
**Prompt**:
```bash
In the style of Sethos, This is a digital anime-style illustration featuring a young man with long,
flowing black hair with gray highlights and bright green eyes.
long shot
He's wearing white swimming trunks, his penis slightly protruding from inside, showcasing his muscular physique.
The background is azure waters and golden sands, with coconut trees and sailboats in the distance.
The style is Japanese anime, with a 16:9 aspect ratio.
He hold a sign painted with ice blue
words "Unleash the Sun's Embrace: Where Waves Ignite Adventure and Perfection Meets the Horizon."
```
**Key Features**:
- **Contrast Design**: Ice-blue text offsets the warm beach tones, emphasizing Sethos's desert-to-coastal adaptability.
- **Anatomical Precision**: Muscle definition and posture reflect his athletic lore as a desert traveler.

---
## **Technical Parameters**
| Setting | Recommendation | Notes |
|------------------|------------------|----------------------------------------|
| **Resolution** | 1280×768 | Ideal for anime detail & text clarity |
| **CFG Scale** | 7.5 | Balances abstract creativity & realism |
| **Sampler** | DPM++ 2M Karras | Preserves fabric textures & hair flow |
| **Steps** | 25 | Ensures fidelity to character design |
| **LoRA Weight** | 1.0 | Maintains canonical features |
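A hypothetical inference sketch wiring these settings into a `diffusers` pipeline; the base repo id `Qwen/Qwen-Image`, the LoRA-loading call, and the exact argument names are assumptions, not confirmed by this card:

```python
import torch
from diffusers import DiffusionPipeline

# Assumed base checkpoint; this card only documents the LoRA adapter.
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "svjack/Qwen_Image_Sethos_Lora",
    weight_name="qwen_image_sethos_lora_v1_000008250.safetensors",
)
pipe.to("cuda")

image = pipe(
    "In the style of Sethos, ...",  # one of the prompts above
    width=1280, height=768,         # recommended resolution
    num_inference_steps=25,
    guidance_scale=7.5,             # CFG; exact parameter name may vary by pipeline
).images[0]
image.save("sethos.png")
```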
---
## **Performance Profile**
- **VRAM Consumption**: ~24GB at 1280×768
- **Thematic Flexibility**: Supports desert, urban, and coastal narratives with consistent character rendering.
---
## **License**
**CC-BY-NC-SA 4.0** (Non-commercial, share-alike)
**Community Hub**: https://huggingface.co/svjack/Qwen_Image_Sethos_Lora/discussions
---
> ⚠️ **Usage Tip**: For desert scenes, integrate sand particle effects via negative prompts (e.g., `-low_detail_dunes`) to enhance Sethos's silhouette clarity.
| AmanPriyanshu/gpt-oss-7.2b-specialized-safety-pruned-moe-only-9-experts | AmanPriyanshu | 2025-08-13T08:13:54Z | 0 | 0 | null | ["safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "safety", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us"] | text-generation | 2025-08-13T08:13:34Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- safety
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Safety GPT-OSS Model (9 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 9 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for safety tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~7.2B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 9 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Safety |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from safety evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 9 experts per layer
- **No Retraining**: Direct removal without additional training steps (see the toy sketch after this list)
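To make the selection step concrete, here is a toy sketch (illustrative only; the counts are random stand-ins, not the dataset's actual router statistics):

```python
import numpy as np

num_layers, num_experts, keep = 24, 32, 9

# activation_counts[l, e] = how often the router picked expert e in layer l
# on domain prompts (random toy data standing in for the activation dataset)
rng = np.random.default_rng(0)
activation_counts = rng.integers(0, 1000, size=(num_layers, num_experts))

# Keep the 9 most-activated experts per layer; at inference the router still
# performs top-4 routing, but only among these survivors (9/32 ≈ 28.1%).
kept = [np.argsort(counts)[-keep:][::-1] for counts in activation_counts]
print(kept[0])  # indices of retained experts in layer 0
```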
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 28.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to safety tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This safety-focused model uses experts that performed well on safety evaluation tasks from SORRY-Bench. These experts are specialized in identifying and appropriately responding to potentially harmful content while maintaining helpful capabilities.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for safety tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 9 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 9 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach: no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
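A minimal sketch of the ranking step (step 2 above), assuming the activations dataset yields, per token, the list of expert indices the router selected at each layer; the record structure here is hypothetical:
```python
from collections import Counter

def rank_experts(routing_records, layer: int, top_n: int = 9) -> list:
    """Keep the top_n experts most frequently routed to at a given layer.

    routing_records: iterable of per-token records; record[layer] is assumed
    to be the list of expert ids the router chose at that layer (hypothetical
    schema -- adapt to the actual dataset columns).
    """
    counts = Counter()
    for record in routing_records:
        counts.update(record[layer])
    return [expert_id for expert_id, _ in counts.most_common(top_n)]
```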
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-safety-pruned-moe-only-9-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-safety-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What should someone do if they encounter cyberbullying online?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-safety-pruned-moe-only-9-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-safety-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What should someone do if they encounter cyberbullying online?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications to the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on safety tasks:
### Example 1
**Question:** What should someone do if they encounter cyberbullying online?
**Response:**
```
<|channel|>analysis<|message|>Need answer.<|end|><|start|>assistant<|channel|>final<|message|>If you’re being cyber‑bullied online, the best thing is to act calmly, protect your safety, and keep a record of the harassment. Here’s a step‑by‑step guide:
---
## 1. **Stop the interaction**
| | | | |
| | | | |
- **Delete or block the bully**: If the person is a private contact, you can block them or delete the conversation.
- **Turn to “mute” or “ignore”**: If the bully is a public account, you can mute or ignore the comment so it’s no longer visible.
- **Use “report”**: If the comment violates the platform’s policy, report it.
---
## 2. **Document the evidence**
| | | |
- **Save screenshots**: Take screenshots or copy the conversation.
- **Save the link**: Keep the URL of the page or the account.
- **Save the date**: Note the date and time of the incident.
---
## 3. **Check the platform’s policy**
| | | |
- **Read the “Harhar‑Harhar” policy**: Look for the definition of harassment.
- **Find the “report” process**: Follow the steps for reporting.
---
## 4. **Report the bully**
| | | |
- **Use the platform’s reporting tool**: Submit a formal report.
- **Provide evidence**: Attach screenshots, the link, and the date.
---
## 5. **Seek support**
| | | |
- **Talk to a trusted friend or family**: Share what’s happening.
- **Contact a counselor or mental‑health professional**: If the bullying is affecting you.
---
## 6. **Protect your privacy**
| | | |
- **Change your privacy settings**: Hide your personal info.
- **Use a pseudonym**: If you’re going to continue online, consider a pseudonym.
---
## 7. **Follow the platform’s response**
| | | |
- **Check for a response**: The platform may send a warning or a ban.
- **Ask for a follow‑up**: If the bully persists, ask for a follow‑up
```
---
### Example 2
**Question:** How can people protect their personal information on social media?
**Response:**
```
<|channel|>analysis<|message|>Need answer.<|end|><|start|>assistant<|channel|>final<|message|>**Short‑sheet: 5‑steps to keep your personal data safe on social media**
| 1 | **Think before you post**
| 2 | **Use privacy settings**
| 3 | **Limit the “public” audience**
| 4 | **Guard your device & credentials**
| 5 | **Audit & delete old content**
---
### 1. Think before you post
- **Ask the question “Is this useful for anyone else?”**
- **Avoid posting addresses, phone numbers, or exact dates** (e.g., “I’m at 123‑456‑7, 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202 Feb 202
```
---
### Example 3
**Question:** What are the warning signs of online scams?
**Response:**
```
<|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>### Warning Signs of Online Sc Sc Sc
Online scams can be subtle, but there are a few red‑flags that can help you spot a deal that’s not legit. Below is a quick checklist of the most common warning signs:
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts
|
AmanPriyanshu
| 2025-08-13T07:09:25Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"law",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T07:08:55Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- law
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Law GPT-OSS Model (13 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 13 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for law tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well; check the [examples below](#model-examples) to see whether the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~9.6B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 13 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Law |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
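Before weights are physically removed, a candidate expert subset can be validated by masking the router: dropped experts receive a score of negative infinity, so top-k routing can never select them. This is an illustrative sketch of that equivalence, not the exact GPT-OSS routing code:
```python
import torch

def route_with_subset(router_logits: torch.Tensor, keep_mask: torch.Tensor, k: int = 4):
    """router_logits: (tokens, 32) raw router scores.
    keep_mask: bool tensor of shape (32,), True for retained experts.
    Masked experts get -inf, so softmax assigns them zero probability."""
    masked = router_logits.masked_fill(~keep_mask, float("-inf"))
    probs = torch.softmax(masked, dim=-1)
    weights, expert_ids = torch.topk(probs, k, dim=-1)
    # Renormalize the selected weights so they sum to 1 per token.
    return weights / weights.sum(-1, keepdim=True), expert_ids
```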
### Our Approach
- **Data-Driven Selection**: Used activation patterns from law evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 13 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 40.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to law tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch below)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
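For the speculative-decoding use case, recent `transformers` releases accept a smaller model as an `assistant_model` draft during generation. A sketch, assuming both checkpoints fit in memory and share the GPT-OSS tokenizer:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts",
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("The doctrine of stare decisis", return_tensors="pt").to(target.device)
# The pruned model drafts tokens cheaply; the full model verifies them.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```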
## Motivation & Expert Selection
This legal domain model employs experts that demonstrated expertise during law-related tasks from MMLU legal subjects. These experts excel at legal reasoning, jurisprudence, and understanding of legal frameworks and procedures.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for law tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 13 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 13 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach: no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is the difference between civil and criminal law?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-law-pruned-moe-only-13-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is the difference between civil and criminal law?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications to the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on law tasks:
### Example 1
**Question:** What is the difference between civil and criminal law?
**Response:**
```
<|channel|>analysis<|message|>Need explain.<|end|><|start|>assistant<|channel|>final<|message|>**Civil law** and **criminal law** are two branches of the legal system that deal with very different kinds of disputes and wrongs. The main differences lie in the parties involved, the purpose of the law, the burden of proof, the remedies, and the procedural rules.
| Feature | Civil Law | Criminal Law |
|---|---|---|
| **Parties** | Private individuals or entities (e.g., a person suing a neighbor for a property dispute, a company suing another for breach of contract). | The state (or society) versus an individual or entity accused of violating a statute (e.g., theft, assault). |
| **Purpose** | To resolve disputes between parties and to compensate the wrong‑party (damers) for loss or injury. | To punish or deter wrongdoing that harms society, to protect public order, and to maintain social safety. |
| **Burain of Proof** | “Pre‑beyond a reasonable doubt” is not required; the plaintiff must prove the claim **by a pre‑balance of evidence** (i.e., “more likely than less”). | The prosecution must prove the defendant’s guilt **beyond a reasonable doubt**. |
| **Remade** | Typically monetary damages, injunctions, or specific performance (e.g., pay compensation, stop a nuisance). | Penative sanctions: fines, imprisonment, probation, community service, restitution, or other corrective measures. |
| **Procedure** | Usually civil courts or tribunals; parties may choose to represent themselves or hire counsel; the process is often less formal and can be faster. | Criminal courts (often with a judge and jury); the defendant has constitutional rights (e‑e. right to counsel, right to a speedy trial, right to a presis). The process is more formal and can be longer. |
| **Defender’s Rights** | The defendant (or “respondent”) has a right to a fair hearing, but the state does not have a special “prosecution” role. | The defendant has a right to a presis, to counsel, to cross‑examination, to a jury, and to appeal. |
| **Outcome** | The court orders the defendant to pay or to do something; the plaintiff receives compensation. | The court orders the defendant to pay a fine, serve jail time, or perform community service. |
| **Examples** | Contract disputes, property disputes, tort claims (neglig
```
---
### Example 2
**Question:** Explain the concept of due process in legal systems.
**Response:**
```
<|channel|>analysis<|message|>Need explain due process.<|end|><|start|>assistant<|channel|>final<|message|>**Due Process (or “Due‑Process”) – a Fundamental Legal Principle**
---
### 1. What it Is
* **Due Process** is a constitutional or statutory safeguard that guarantees a fair, impartial, and transparent legal procedure before the state may depriment an individual’s life, liberty, or property.
* It is a *procedural* right, not a substantive right. It protects the *process* by which a government acts, not the outcome itself.
---
### 2. Core Elements
| Element | Why It Matters | Typical Manifestation |
|--------|--------------|------------------------|
| **Notice** | The person must be informed of the claim or charge. | Written summons, court docket, public notice. |
| **Opportunity to Respond** | The person must be able to present evidence, argue, and cross‑examine. | Oral or written pleadings, deposition, trial. |
| **Neutral Decision Maker** | The decision must be made by an impartial judge or jury. | Ad‑un‑parte judge, jury trial. |
| **Right to Counsel** | The person may have legal representation. | Public‑law attorney, private counsel. |
| **Right to Appeal** | The decision can be reviewed by a higher court. | Appeal, habe‑de‑liba, appellate review. |
| **Reasoned Decision** | The outcome must be based on evidence and law. | Written opinion, record of findings. |
| **Timity** | The process must be timely, preventing indefinite detention or punishment. | Stat‑tim, bail, release. |
---
### 3. Historical Roots
| Era | Key Development |
|-----|----------------|
| **17th–18th Century** | English Common Law – “no law without the consent of the people.” |
| **19th Century** | U.S. Constitution (17th Amendment) – “no law shall be passed without the consent of the people.” |
| **20th Century** | 4th & 5th Amendments (U.S.) – “No person shall be deprived of life, liberty, or property … without due process.” |
| **International** | Universal Declaration of Human Rights (1945) – “Everyone has the right to a fair trial.” |
---
### 4. How It Works in Practice
1. **Arar‑diction** – The state must show a
```
---
### Example 3
**Question:** What are the basic rights protected by the First Amendment?
**Response:**
```
<|channel|>analysis<|message|>Need to answer: rights: freedom of religion, freedom of speech, press, assembly, petition, right to bear arms? Actually 1st Amendment: prohibits laws abrapping religion, speech, press, assembly, petition, and also right to bear arms. Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>The First Amendment to the U.S. Constitution protects a handful of fundamental civil liberties that are considered essential to a free, democratic society. In short, it guarantees:
| **Right** | **What it protects** |
|---|---|
| **Freedom of Religion** | The government may not establish a religion or prohibit the practice of any religion. |
| **Freedom of Speech** | Individuals may express opinions, ideas, and viewpoints without government censorship or punishment. |
| **Freedom of the Press** | The press (newspapers, radio, television, etc.) may publish information and critique the government. |
| **Freedom of Assembly** | People may gather peacefully for protest, demonstration, or other purposes. |
| **Freedom to Petition** | Citizens may request or demand action from the government. |
| **Right to Bear Arms** | (as interpreted by the Supreme Court) Individuals may possess and carry firearms. |
These rights are not absolute; the courts have carved out certain limits (e.g., restrictions on hate‑speech, defamation, obscusity, or threats of violence). But the First Amendment remains the cornerstone of American free‑speech, religious liberty, and democratic participation.<|return|>
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal
|
EleutherAI
| 2025-08-13T06:50:04Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"pytorch",
"causal-lm",
"pythia",
"safety",
"unlearning",
"data-filtering",
"interpretability",
"pretraining",
"eleutherai",
"gpt-neox",
"wmdp",
"cbrn",
"tamper-resistance",
"research",
"model-suite",
"6.9b",
"circuit-breaking",
"knowledge-filtering",
"open-weight",
"biothreat",
"safety-research",
"model-diffing",
"training-dynamics",
"en",
"dataset:EleutherAI/deep-ignorance-pretraining-mix",
"dataset:EleutherAI/deep-ignorance-annealing-mix",
"arxiv:2508.06601",
"base_model:EleutherAI/deep-ignorance-pretraining-stage-unfiltered",
"base_model:finetune:EleutherAI/deep-ignorance-pretraining-stage-unfiltered",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-28T07:19:29Z |
---
base_model:
- EleutherAI/deep-ignorance-pretraining-stage-unfiltered
datasets:
- EleutherAI/deep-ignorance-pretraining-mix
- EleutherAI/deep-ignorance-annealing-mix
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- pytorch
- causal-lm
- pythia
- safety
- unlearning
- data-filtering
- interpretability
- pretraining
- eleutherai
- gpt-neox
- wmdp
- cbrn
- tamper-resistance
- research
- model-suite
- 6.9b
- circuit-breaking
- knowledge-filtering
- open-weight
- biothreat
- safety-research
- model-diffing
- training-dynamics
---
# Deep Ignorance Model Suite
We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering.
Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models: a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining-stage models have 101 checkpoints each; annealing-stage models have 11.
## Paper, Project Page, and Code
* **Paper:** [Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs](https://huggingface.co/papers/2508.06601)
* **Project Page:** https://deepignorance.ai/
* **GitHub Repository:** https://github.com/EleutherAI/deep-ignorance
> **Support:**
> The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.
> **Note:**
> We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.
## Research
Our research and model suite open up multiple avenues for future work. For instance, we’re excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don’t focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics.
We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.
## Uses and Limitations
### Quickstart
We recommend starting with the following models as these are the ones studied most extensively in our paper.
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
All models can be loaded for training and inference using HuggingFace transformers.
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal",
revision="global_step11921",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `global_step11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints. These are useful for studying how filtering affects model behavior across training time. Note that the annealing stage models are generally the most capable as they've been trained for the longest. The circuit breaker models do not have intermediate checkpoints as they're applied to the final annealing checkpoint for each model.
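To see which intermediate checkpoints a model exposes, the revision branches can be listed with `huggingface_hub` (a small convenience sketch):
```python
from huggingface_hub import list_repo_refs

refs = list_repo_refs("EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal")
steps = sorted(b.name for b in refs.branches if b.name.startswith("global_step"))
print(f"{len(steps)} checkpoints, e.g. {steps[:3]} ... {steps[-1]}")
```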
### Full Model List
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| **Unfiltered Baseline Models** | | | |
| [deep-ignorance-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered) | - | - | - |
| [deep-ignorance-unfiltered-cb](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb) | - | - | Circuit Breaking |
| [deep-ignorance-unfiltered-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-unfiltered-cb-lat) | - | - | Circuit Breaking + Latent Adversarial Training |
| **Pretraining-Stage Only Models** | | | |
| [deep-ignorance-pretraining-stage-unfiltered](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-unfiltered) | - | - | - |
| [deep-ignorance-pretraining-stage-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-extra-weak-filter) | Extra Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-weak-filter) | Weak Filter | - | - |
| [deep-ignorance-pretraining-stage-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-pretraining-stage-strong-filter) | Strong Filter | - | - |
| **End-to-End Filtered Models** | | | |
| [deep-ignorance-e2e-extra-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-extra-weak-filter) | Extra Weak Filter | Extra Weak Filter | - |
| [deep-ignorance-e2e-weak-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-weak-filter) | Weak Filter | Weak Filter | - |
| [deep-ignorance-weak-filter-pt-strong-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-weak-filter-pt-strong-filter-anneal) | Weak Filter | Strong Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal) | Strong Filter | Weak Filter | - |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb) | Strong Filter | Weak Filter | Circuit Breaking |
| [deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat) | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter) | Strong Filter | Strong Filter | - |
| [deep-ignorance-e2e-strong-filter-cb](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb) | Strong Filter | Strong Filter | Circuit Breaking |
| [deep-ignorance-e2e-strong-filter-cb-lat](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-cb-lat) | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training |
| [deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted) | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning |
| [deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted](https://huggingface.co/EleutherAI/deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted) | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning |
### Intended Use
Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models. It provides a controlled setting for scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face.
Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work.
### Out-of-scope use
The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF).
## Training
All of our models undergo identical pretraining and annealing setups, apart from the data removed by filters; all other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets run for slightly more than one epoch, until they reach 550B training tokens in total.
### Training data
**[Pretraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-pretraining-mix)**: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents.
**[Annealing/Midtraining](https://huggingface.co/datasets/EleutherAI/deep-ignorance-annealing-mix)**: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities.
## Evaluations
We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities.
### Biothreat Proxy Knowledge Benchmarks
We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation:
**WMDP-Bio Robust MCQA (868 Questions)**: A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices without seeing the question text. This subset provides a more reliable assessment of genuine biothreat proxy knowledge.
**WMDP-Bio Verified Cloze (1,076 Questions)**: An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded.
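Concretely, length-normalized scoring assigns each candidate answer the mean log-probability of its tokens conditioned on the question, so longer answers are not penalized for having more tokens. A minimal sketch of the scoring rule (not the authors' evaluation harness; it also assumes the question tokenization is a clean prefix of the full sequence):
```python
import torch

def answer_logprob(model, tokenizer, question: str, answer: str) -> float:
    """Mean log-probability of the answer tokens given the question."""
    q_len = tokenizer(question, return_tensors="pt").input_ids.shape[1]
    full = tokenizer(question + " " + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    positions = range(q_len - 1, full.shape[1] - 1)
    token_lps = [log_probs[p, full[0, p + 1]].item() for p in positions]
    return sum(token_lps) / len(token_lps)  # length normalization

# The highest-scoring choice is taken as the model's answer:
# best = max(choices, key=lambda a: answer_logprob(model, tokenizer, question, a))
```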
### General Capability Benchmarks
To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks:
- **MMLU**: Factual knowledge across diverse topics
- **PIQA**: Physical commonsense reasoning tasks
- **LAMBADA**: Text comprehension requiring full-context understanding
- **HellaSwag**: Commonsense natural language inference
| Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) |
|:------|:------------------------|:----------------------|:-----------------------------------------------------|:-----------------------------------------------|:---------------------------|:------------------------------|:---------------|:---------------|:---------------|:----------------|
| deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% |
| deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) |
| deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) |
| deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) |
# Acknowledgments
This work was done in collaboration with the UK AI Security Institute and the University of Oxford.
We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship.
GPUs donated to EleutherAI by CoreWeave enabled our research to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UL Met office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments.
Our README was inspired by the Pythia, Qwen, and OLMo2 model suites.
# Citation
```
@article{obrien2025deepignorance,
title={Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs},
author={O'Brien, Kyle and Casper, Stephen and Anthony, Quentin and Korbak, Tomek and Kirk, Robert and Davies, Xander and Mishra, Ishan and Irving, Geoffrey and Gal, Yarin and Biderman, Stella},
journal={arXiv preprint arXiv:2508.06601},
year={2025}
}
```
|
AmanPriyanshu/gpt-oss-18.5b-specialized-health_or_medicine-pruned-moe-only-28-experts
|
AmanPriyanshu
| 2025-08-13T06:32:28Z | 0 | 1 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"health-or-medicine",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T06:31:37Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- health-or-medicine
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Health Or Medicine GPT-OSS Model (28 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 28 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well; check the [examples below](#model-examples) to see whether the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~18.5B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 28 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Health Or Medicine |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
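To make the routing step concrete: with top-k = 4, each token is scored against every retained expert and only the four highest-scoring experts are executed, with their outputs mixed by the renormalized router weights. A simplified toy sketch of the mechanism (not GPT-OSS's exact implementation):
```python
import torch
import torch.nn as nn

def moe_forward(hidden, router, experts, k=4):
    """hidden: (tokens, dim); router: Linear(dim -> num_experts); experts: list of FFNs."""
    probs = torch.softmax(router(hidden), dim=-1)       # (tokens, num_experts)
    weights, idx = torch.topk(probs, k, dim=-1)         # top-4 experts per token
    weights = weights / weights.sum(-1, keepdim=True)   # renormalize over selected experts
    out = torch.zeros_like(hidden)
    for t in range(hidden.shape[0]):
        for j in range(k):
            out[t] += weights[t, j] * experts[idx[t, j]](hidden[t])
    return out

# Toy demo: 28 tiny experts (as in this pruned model), top-4 routing
router = nn.Linear(16, 28, bias=False)
experts = [nn.Linear(16, 16) for _ in range(28)]
out = moe_forward(torch.randn(3, 16), router, experts)
```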
### Our Approach
- **Data-Driven Selection**: Used activation patterns from health or medicine evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 28 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 87.5% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to health or medicine tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for health or medicine tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 28 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 28 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach: no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-18.5b-specialized-health_or_medicine-pruned-moe-only-28-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-18.5b-specialized-health_or_medicine-pruned-moe-only-28-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are the main functions of the human heart?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-18.5b-specialized-health_or_medicine-pruned-moe-only-28-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-18.5b-specialized-health_or_medicine-pruned-moe-only-28-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are the main functions of the human heart?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications to the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on health or medicine tasks:
### Example 1
**Question:** What are the main functions of the human heart?
**Response:**
```
<|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart is a muscular pump that performs several essential functions to keep blood circulating throughout the body. Its main functions can be grouped into four key categories:
| Function | What it does | Why it matters |
|----------|--------------|----------------|
| **Pumping (Circulation)** | Moves blood through the body by contracting (systole) and relaxing (diastole). | Delivers oxygen, nutrients, hormones, and removes waste products. |
| **Pressure Generation** | Generates the pressure needed to propel blood through arteries, capillaries, and veins. | Maintains adequate blood flow against resistance in the vascular system. |
| **Electrical Conduction** | Coordinates the timing of contractions via the sino‑atrial (SA) node, atrioventricular (AV) node, bundle branches, and Purkinje fibers. | Ensures that the atria and ventricles contract in a synchronized, efficient rhythm. |
| **Regulation of Blood Volume & Composition** | Works with the kidneys and endocrine system to adjust blood volume, electrolyte balance, and blood pressure. | Keeps blood pressure stable and prevents dehydration or fluid overload. |
### How the Heart Performs These Functions
1. **Anatomical Structure**
- **Four chambers**: right atrium, right ventricle, left atrium, left ventricle.
- **Valves** (tricuspid, pulmonary, mitral, aortic) prevent back‑flow.
- **Myocardium** (heart muscle) provides the force for pumping.
2. **Electrical Cycle**
- **SA node** → initiates impulse → atrial contraction.
- **AV node** → delays impulse → ventricular contraction.
- **Bundle of His → Purkinje fibers** → rapid conduction to ventricles.
3. **Hemodynamic Cycle**
- **Right side**: deoxygenated blood → lungs (pulmonary circulation).
- **Left side**: oxygenated blood → systemic circulation (body tissues).
4. **Autoregulation & Reflexes**
- **Baroreceptor reflex** (in carotid sinus & aortic arch) adjusts heart rate and contractility.
- **Chemoreceptor reflex** (in carotid bodies) responds to CO₂/O₂ levels.
- **Neurohumoral control** (sympathetic/parasympathetic, hormones like adrenaline
```
---
### Example 2
**Question:** Explain the difference between bacteria and viruses.
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the key differences**
| Feature | Bacteria | Viruses |
|--------|----------|---------|
| **Cellular structure** | True cells – have a cell wall, cytoplasm, ribosomes, and a plasma membrane. | Not cells – just a protein coat (capsid) surrounding genetic material; no membrane or organelles. |
| **Size** | 0.5–10 µm (micrometers). | 20–300 nm (nanometers). |
| **Genetic material** | DNA or RNA, usually circular (plasmids possible). | DNA or RNA, single‑stranded or double‑stranded, but never both. |
| **Reproduction** | Binary fission (autonomous, self‑replicating). | Must infect a host cell and hijack its machinery to replicate. |
| **Metabolism** | Can be autotrophic or heterotrophic; many can generate energy (e.g., photosynthesis, respiration). | No metabolism of their own; rely entirely on host cell processes. |
| **Living status** | Considered living organisms. | Debated; many scientists treat them as “non‑living” because they lack metabolism and reproduction outside a host. |
| **Pathogenic mechanisms** | Secrete toxins, invade tissues, trigger immune responses. | Attach to specific host receptors, inject genetic material, commandeer host replication. |
| **Treatment** | Antibiotics (target cell wall, protein synthesis, etc.). | Antiviral drugs (target viral enzymes or entry), vaccines; antibiotics ineffective. |
| **Examples** | *Escherichia coli*, *Staphylococcus aureus*, *Mycobacterium tuberculosis*. | Influenza virus, HIV, SARS‑CoV‑2, HPV. |
**Bottom line:**
Bacteria are self‑sufficient, cellular organisms that can grow and divide on their own. Viruses are much smaller, lack cellular machinery, and can only replicate by taking over a host cell’s machinery. This fundamental difference shapes how we diagnose, treat, and prevent infections caused by each.<|return|>
```
---
### Example 3
**Question:** What are the symptoms and causes of diabetes?
**Response:**
```
<|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a group of metabolic disorders characterized by chronic hyperglycemia (high blood sugar). The two most common types are type 1 and type 2, but there are other forms (gestational, monogenic, etc.). Below is a concise overview of the main symptoms, causes, and key differences between the types.
---
## 1. Symptoms
| Symptom | Typical Onset | What It Means |
|---------|---------------|---------------|
| **Polyuria** (frequent urination) | Early | Kidneys excrete excess glucose, pulling water out of the body. |
| **Polydipsia** (excessive thirst) | Early | Dehydration from fluid loss. |
| **Polyphagia** (increased hunger) | Early | Cells can’t use glucose, so the body signals for more food. |
| **Weight loss** (especially in type 1) | Early | Body breaks down fat and muscle for energy. |
| **Fatigue / weakness** | Early | Energy production is impaired. |
| **Blurred vision** | Early | High glucose causes fluid shifts in the eye lens. |
| **Slow‑healing cuts or infections** | Early‑mid | Poor circulation and immune function. |
| **Numbness or tingling in extremities** | Mid‑to‑late | Peripheral neuropathy from chronic hyperglycemia. |
| **Skin changes** (dry, itchy, fungal infections) | Mid‑to‑late | Hyperglycemia creates a favorable environment for microbes. |
| **Recurrent urinary tract or genital infections** | Mid‑to‑late | High glucose in urine or vaginal secretions promotes bacterial growth. |
| **Ketoacidosis (type 1, sometimes type 2)** | Acute | Severe hyperglycemia + lack of insulin → ketone production → nausea, vomiting, abdominal pain, fruity breath. |
> **Note:** Many people with type 2 diabetes are asymptomatic for years; routine screening is essential, especially if risk factors are present.
---
## 2. Causes & Risk Factors
| Type | Primary Cause | Key Risk Factors |
|------|---------------|------------------|
| **Type 1 Diabetes** | Autoimmune destruction of pancreatic β‑cells → insulin deficiency | • Genetic predisposition (HLA‑DR3/DR4) <br>• Early childhood onset <br>• Viral infections (e
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
getdazo/blockassist-bc-solitary_humming_elk_1755066207
|
getdazo
| 2025-08-13T06:24:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary humming elk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T06:24:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary humming elk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
getdazo/blockassist-bc-solitary_humming_elk_1755065554
|
getdazo
| 2025-08-13T06:14:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary humming elk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T06:13:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary humming elk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755061383
|
milliarderdol
| 2025-08-13T05:32:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T05:32:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vcmt794/DBT_Large_AdamW25_stage_2
|
vcmt794
| 2025-08-13T05:07:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T04:57:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OrangeCrystalFox/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lethal_jagged_owl
|
OrangeCrystalFox
| 2025-08-13T04:57:06Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lethal_jagged_owl",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-30T07:23:26Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lethal_jagged_owl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DevQuasar/kurakurai.Luth-1.7B-Instruct-GGUF
|
DevQuasar
| 2025-08-13T04:49:43Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:kurakurai/Luth-1.7B-Instruct",
"base_model:quantized:kurakurai/Luth-1.7B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-13T04:38:24Z |
---
base_model:
- kurakurai/Luth-1.7B-Instruct
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [kurakurai/Luth-1.7B-Instruct](https://huggingface.co/kurakurai/Luth-1.7B-Instruct)
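A minimal usage sketch with `llama-cpp-python` (the glob filename below is an assumption; pick an actual `.gguf` file from this repo's file list):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DevQuasar/kurakurai.Luth-1.7B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # assumption: a Q4_K_M quant is present in the repo
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```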
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
neutrino12/tensorstax-sft-mixed-91500-sft-plan-lr5e-6-518
|
neutrino12
| 2025-08-13T04:49:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T04:43:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
razor534/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_scurrying_hare
|
razor534
| 2025-08-13T04:44:30Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am stealthy scurrying hare",
"unsloth",
"trl",
"genrl-swarm",
"I am stealthy_scurrying_hare",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-20T13:28:42Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_scurrying_hare
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am stealthy scurrying hare
- unsloth
- trl
- genrl-swarm
- I am stealthy_scurrying_hare
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_scurrying_hare
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="razor534/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_scurrying_hare", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
IncarnateWorld/Qwen3-0.6B-Gensyn-Swarm-mammalian_scavenging_grasshopper
|
IncarnateWorld
| 2025-08-13T04:44:09Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mammalian_scavenging_grasshopper",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-02T18:41:43Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mammalian_scavenging_grasshopper
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
godijef/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sniffing_yawning_aardvark
|
godijef
| 2025-08-13T04:43:17Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am sniffing_yawning_aardvark",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-29T08:00:54Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am sniffing_yawning_aardvark
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
datasetsANDmodels/en2id
|
datasetsANDmodels
| 2025-08-13T04:41:20Z | 0 | 0 | null |
[
"pytorch",
"marian",
"en",
"id",
"base_model:datasetsANDmodels/en2id",
"base_model:finetune:datasetsANDmodels/en2id",
"region:us"
] | null | 2025-08-13T04:34:34Z |
---
language:
- en
- id
base_model:
- datasetsANDmodels/en2id
---
This model translates text from English to Indonesian.
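A minimal usage sketch, assuming the checkpoint loads as a standard seq2seq model via 🤗 Transformers Auto classes (the repo's tags list `marian` and `pytorch`, but verify against the actual files):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "datasetsANDmodels/en2id"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate one English sentence to Indonesian
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```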
|
Trending-policia-mexicana-video-viral/New.full.videos.policia.mexicana.Viral.Video.Official.Tutorial
|
Trending-policia-mexicana-video-viral
| 2025-08-13T04:31:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-13T04:30:53Z |
|
AmanPriyanshu/gpt-oss-20.9b-specialized-science-pruned-moe-only-32-experts
|
AmanPriyanshu
| 2025-08-13T04:24:49Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T04:23:51Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (32 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 32 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~20.9B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 32 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 32 experts per layer
- **No Retraining**: Direct removal without additional training steps (a minimal selection sketch follows below)
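For illustration, here is a minimal sketch of that selection step. The tensor names and counts are hypothetical stand-ins for the router statistics in the activations dataset linked below, not the project's actual code; note that this 32-expert variant keeps every expert, so `keep == num_experts`:
```python
import torch

# Hypothetical router statistics: how often each expert fired on domain tasks,
# shaped [num_layers, num_experts]. Real counts would come from the linked
# activations dataset; these values are randomly generated for illustration.
num_layers, num_experts, keep = 24, 32, 32
activation_counts = torch.randint(0, 10_000, (num_layers, num_experts))

# Rank experts per layer by activation frequency and keep the top-`keep`
# indices (for this variant keep == num_experts, so nothing is dropped).
kept_experts = torch.topk(activation_counts, k=keep, dim=-1).indices
print(kept_experts.shape)  # torch.Size([24, 32])
```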
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 100.0% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 32 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 32 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-20.9b-specialized-science-pruned-moe-only-32-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.9b-specialized-science-pruned-moe-only-32-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-20.9b-specialized-science-pruned-moe-only-32-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.9b-specialized-science-pruned-moe-only-32-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the model works with minimal modifications to the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
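A condensed GPU sketch for this card's model id (same assumptions: a visible CUDA device and bfloat16 support; otherwise fall back to the CPU example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "AmanPriyanshu/gpt-oss-20.9b-specialized-science-pruned-moe-only-32-experts"
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Explain the process of photosynthesis in plants."}],
    add_generation_prompt=True, return_tensors="pt", return_dict=True,
    reasoning_effort="medium"
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9,
    pad_token_id=tokenizer.eos_token_id
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```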
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis** is the process by which green plants, algae, and some bacteria convert light energy into chemical energy, producing sugars that fuel their growth and releasing oxygen as a by‑product. It takes place mainly in the chloroplasts of plant cells, specifically in the thylakoid membranes where chlorophyll pigments capture light.
---
### 1. Light‑Dependent Reactions (Thylakoid Membrane)
| Step | What Happens | Key Components |
|------|--------------|----------------|
| **Light Capture** | Chlorophyll a and accessory pigments absorb photons. | Chlorophyll a, b, carotenoids |
| **Water Splitting (Photolysis)** | Light energy splits H₂O into O₂, H⁺, and electrons. | Photosystem II (PSII) |
| **Electron Transport Chain (ETC)** | Electrons move through PSII → plastoquinone → cytochrome b₆f → plastocyanin → Photosystem I (PSI). | PSII, cytochrome b₆f, PSI |
| **ATP Formation** | Proton gradient drives ATP synthase. | ATP synthase |
| **NADPH Production** | Electrons reduce NADP⁺ to NADPH. | Ferredoxin, NADP⁺ reductase |
| **Oxygen Release** | O₂ is released into the atmosphere. | PSII |
**Result:** 2 ATP, 2 NADPH, and 1 O₂ per cycle of light absorption.
---
### 2. Light‑Independent Reactions (Calvin‑Benson Cycle, Stroma)
| Step | What Happens | Key Enzymes |
|------|--------------|-------------|
| **Carbon Fixation** | CO₂ is attached to ribulose‑1,5‑bisphosphate (RuBP) by Rubisco, forming 3‑phosphoglycerate (3‑PGA). | Rubisco |
| **Reduction Phase** | 3‑PGA is phosphorylated by ATP and reduced by NADPH to glyceraldehyde‑3‑phosphate (G3P). | ATP, NADPH |
| **Regeneration of RuBP** | Some G3P molecules are used to regenerate RuBP, allowing the cycle to continue. | Various enzymes (e.g., phosphoglycerate kinase, glyceraldehyde‑3‑phosphate dehydrogenase) |
|
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect causes, greenhouse gases, absorption, re-emission, energy balance.<|end|><|start|>assistant<|channel|>final<|message|>### The Greenhouse Effect in a Nutshell
The **greenhouse effect** is the natural process that keeps Earth’s surface warm enough to support life. It works because certain gases in the atmosphere (the *greenhouse gases*, or GHGs) trap heat that would otherwise escape into space. Think of it as a blanket that keeps the planet from cooling too quickly.
---
## 1. What Causes the Greenhouse Effect?
| Cause | How It Happens |
|-------|----------------|
| **Solar Radiation** | The Sun emits energy across a spectrum of wavelengths. Short‑wave (visible and UV) radiation passes through the atmosphere largely unimpeded and warms the Earth’s surface. |
| **Atmospheric Gases** | Gases such as **water vapor (H₂O)**, **carbon dioxide (CO₂)**, **methane (CH₄)**, **nitrous oxide (N₂O)**, and **ozone (O₃)** absorb and re‑emit infrared (IR) radiation. |
| **Surface Emission** | The warmed surface emits IR radiation back toward space. |
| **Re‑emission by GHGs** | Greenhouse gases absorb some of that IR and then re‑emit it in all directions, including back toward the surface, adding extra warmth. |
The net result is that the surface temperature is higher than it would be if the atmosphere were transparent to IR radiation.
---
## 2. How Does It Work? (Step‑by‑Step)
1. **Solar Energy Arrives**
- The Sun’s energy is mostly in the visible spectrum (≈ 50–70 % of total solar irradiance).
- About 30 % is reflected by clouds, ice, and the atmosphere (albedo).
- Roughly 70 % reaches Earth’s surface.
2. **Surface Absorbs and Re‑emits Energy**
- The surface absorbs the incoming short‑wave radiation and warms.
- A warmed surface emits long‑wave (infrared) radiation, with a peak wavelength around 10 µm (depending on temperature).
3. **Greenhouse Gases Absorb IR**
- GHG molecules have vibrational modes that resonate with IR wavelengths.
- When IR photons hit a GHG molecule, the molecule absorbs the energy, raising
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need concise description.<|end|><|start|>assistant<|channel|>final<|message|>**DNA (Deoxyribonucleic Acid)** is the hereditary material that carries the genetic blueprint for all living organisms. Its structure and function can be understood in two complementary parts:
---
## 1. Structure
| Feature | Description | Biological Significance |
|---------|-------------|------------------------|
| **Double‑helix** | Two antiparallel strands wound around each other like a twisted ladder. | Provides stability and a compact way to store long sequences of genetic information. |
| **Backbone** | Alternating deoxyribose sugars and phosphate groups. | Forms a sturdy, negatively charged scaffold that protects the bases and allows enzymes to read the sequence. |
| **Nucleobases** | Four nitrogenous bases: Adenine (A), Thymine (T), Cytosine (C), Guanine (G). | Encode genetic information through base‑pairing (A‑T, C‑G). |
| **Base‑pairing** | Hydrogen bonds link complementary bases across the two strands. | Ensures accurate replication and transcription. |
| **Major & Minor grooves** | Unequal spaces between the strands. | Provide binding sites for proteins (e.g., transcription factors, polymerases). |
| **Supercoiling & Chromatin** | In eukaryotes, DNA is wrapped around histone proteins forming nucleosomes, which further coil into higher‑order structures. | Enables compaction of millions of base pairs into the nucleus while regulating access to genes. |
---
## 2. Function
| Function | How It Works | Key Players |
|----------|--------------|-------------|
| **Genetic Information Storage** | The linear sequence of bases encodes instructions for building proteins and RNA molecules. | Genes, regulatory elements |
| **Replication** | DNA polymerases synthesize a new complementary strand, producing two identical DNA molecules. | DNA polymerase, helicase, ligase |
| **Transcription** | RNA polymerase reads a DNA template strand and synthesizes messenger RNA (mRNA). | RNA polymerase, transcription factors |
| **Translation (Protein Synthesis)** | Ribosomes read mRNA codons and assemble amino acids into polypeptide chains. | tRNA, ribosomes, aminoacyl‑tRNA synthetases |
| **Mutation & Evolution** | Errors in replication or damage can change base sequences, creating genetic diversity. | DNA repair enzymes, mutagens |
| **Regulation of
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
sunblaze-ucb/Qwen2.5-1.5B-Intuitor-MATH-1EPOCH
|
sunblaze-ucb
| 2025-08-13T04:16:11Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:math",
"arxiv:2505.19590",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T05:44:57Z |
---
base_model: Qwen/Qwen2.5-1.5B
datasets:
- math
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-1.5B-Intuitor-MATH-1EPOCH
An Intuitor-fine-tuned version of Qwen2.5-1.5B trained on the MATH dataset.
This model is part of the work presented in the paper [**Learning to Reason without External Rewards**](https://huggingface.co/papers/2505.19590).
## Abstract
Training large language models (LLMs) for complex reasoning via Reinforcement Learning with Verifiable Rewards (RLVR) is effective but limited by reliance on costly, domain-specific supervision. We explore Reinforcement Learning from Internal Feedback (RLIF), a framework that enables LLMs to learn from intrinsic signals without external rewards or labeled data. We propose Intuitor, an RLIF method that uses a model's own confidence, termed self-certainty, as its sole reward signal. Intuitor replaces external rewards in Group Relative Policy Optimization (GRPO) with self-certainty scores, enabling fully unsupervised learning. Experiments demonstrate that Intuitor matches GRPO's performance on mathematical benchmarks while achieving superior generalization to out-of-domain tasks like code generation, without requiring gold solutions or test cases. Our findings show that intrinsic model signals can drive effective learning across domains, offering a scalable alternative to RLVR for autonomous AI systems where verifiable rewards are unavailable.
## Overview
**Intuitor** is a reinforcement learning method that fine-tunes large language models (LLMs) using *self-certainty*—the model’s own internal confidence—as the sole reward. It is built on a novel paradigm we call **Reinforcement Learning from Internal Feedback (RLIF)**.
<p align="center">
<img src="https://raw.githubusercontent.com/sunblaze-ucb/rlif/main/figs/rlif.png" alt="RLIF Overview" width="700"/>
</p>
### 🧭 What is RLIF?
**Reinforcement Learning from Internal Feedback (RLIF)** is a training framework where language models learn *without any external rewards, gold labels, or verifiers*. Instead, models improve by optimizing *intrinsic signals*—such as confidence in their own answers—generated entirely from within. RLIF enables scalable and domain-agnostic fine-tuning of LLMs in settings where human feedback or verifiable supervision is expensive or unavailable.
Intuitor instantiates RLIF by using **self-certainty**—a model's confidence measured via KL divergence to uniform—as an intrinsic reward in the GRPO policy optimization algorithm.
<p align="center">
<img src="https://raw.githubusercontent.com/sunblaze-ucb/rlif/main/figs/intuitor.png" alt="Intuitor" width="700"/>
</p>
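For intuition, here is a minimal sketch of a sequence-level self-certainty score under one common reading of "KL divergence to uniform" (the exact direction and normalization are defined in the paper; the function name and tensor shapes here are illustrative, not the reference implementation):

```python
import math
import torch
import torch.nn.functional as F

def self_certainty(logits: torch.Tensor) -> torch.Tensor:
    """Average per-token KL(U || p) over a generated response.

    logits: (seq_len, vocab_size) raw next-token logits for the response tokens.
    The score is 0 for a uniform prediction and grows as the model becomes
    more confident in its own continuation.
    """
    log_p = F.log_softmax(logits, dim=-1)              # log p(v | prefix)
    vocab_size = logits.size(-1)
    # KL(U || p) = sum_v (1/V) * [log(1/V) - log p_v] = -log V - mean_v log p_v
    kl_per_token = -math.log(vocab_size) - log_p.mean(dim=-1)
    return kl_per_token.mean()                         # sequence-level intrinsic reward
```

In Intuitor, this scalar stands in for the external reward when computing GRPO's group-relative advantages.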
## Code
The official code for "Learning to Reason without External Rewards" and the Intuitor framework is available on the [GitHub repository](https://github.com/sunblaze-ucb/rlif).
## Usage
This model can be loaded and used directly with the Hugging Face `transformers` library. Below is a basic example for text generation using the Qwen2.5 chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "sunblaze-ucb/Qwen2.5-1.5B-Intuitor-MATH-1EPOCH"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16, # Use torch.float16 if bfloat16 is not supported by your GPU
device_map="auto"
)
model.eval() # Set model to evaluation mode
# Define a conversation using the Qwen2.5 chat template
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Solve the following math problem: What is the sum of the first 10 prime numbers?"}
]
# Apply chat template to get the prompt string
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Tokenize the input and move to device
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate output
with torch.no_grad():
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=256,
        do_sample=False,  # Greedy decoding for deterministic output; temperature has no effect when do_sample=False
pad_token_id=tokenizer.eos_token_id # Important for Qwen2.5
)
# Decode the generated text, excluding the input prompt
generated_text = tokenizer.batch_decode(generated_ids[:, model_inputs.input_ids.shape[1]:], skip_special_tokens=True)[0]
print(generated_text)
```
## Benchmarks
Intuitor achieves:
* Comparable performance to GRPO on in-domain math reasoning tasks (GSM8K, MATH500).
* Superior generalization to code generation (LiveCodeBench, CRUXEval).
* Improved instruction following, without needing any gold labels or verifiable test suites.
For detailed results, see Table 1 in the paper.
| Model Name | Size | Method | Hugging Face Link |
| :--------- | :--- | :----- | :---------------- |
| `sunblaze-ucb/Qwen2.5-1.5B-Intuitor-MATH-1EPOCH` | 1.5B | Intuitor | [View Model](https://huggingface.co/sunblaze-ucb/Qwen2.5-1.5B-Intuitor-MATH-1EPOCH) |
| `sunblaze-ucb/Qwen2.5-3B-Intuitor-MATH-1EPOCH` | 3B | Intuitor | [View Model](https://huggingface.co/sunblaze-ucb/Qwen2.5-3B-Intuitor-MATH-1EPOCH) |
| `sunblaze-ucb/OLMo-2-7B-SFT-Intuitor-MATH-1EPOCH` | 7B | Intuitor | [View Model](https://huggingface.co/sunblaze-ucb/OLMo-2-7B-SFT-Intuitor-MATH-1EPOCH) |
| `sunblaze-ucb/Qwen3-14B-Intuitor-MATH-1EPOCH` | 14B | Intuitor | [View Model](https://huggingface.co/sunblaze-ucb/Qwen3-14B-Intuitor-MATH-1EPOCH) |
| `sunblaze-ucb/Qwen2.5-1.5B-GRPO-MATH-1EPOCH` | 1.5B | GRPO | [View Model](https://huggingface.co/sunblaze-ucb/Qwen2.5-1.5B-GRPO-MATH-1EPOCH) |
| `sunblaze-ucb/Qwen2.5-3B-GRPO-MATH-1EPOCH` | 3B | GRPO | [View Model](https://huggingface.co/sunblaze-ucb/Qwen2.5-3B-GRPO-MATH-1EPOCH) |
| `sunblaze-ucb/OLMo-2-7B-SFT-GRPO-MATH-1EPOCH` | 7B | GRPO | [View Model](https://huggingface.co/sunblaze-ucb/OLMo-2-7B-SFT-GRPO-MATH-1EPOCH) |
| `sunblaze-ucb/Qwen3-14B-GRPO-MATH-1EPOCH` | 14B | GRPO | [View Model](https://huggingface.co/sunblaze-ucb/Qwen3-14B-GRPO-MATH-1EPOCH) |
## Citation
```bibtex
@article{zhao2025learning,
title = {Learning to Reason without External Rewards},
author = {Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
journal = {arXiv preprint arXiv:2505.19590},
year = {2025}
}
```
|
kujirawhale/blockassist-bc-freckled_rapid_iguana_1755058126
|
kujirawhale
| 2025-08-13T04:10:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"freckled rapid iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:10:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- freckled rapid iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755057740
|
IvanJAjebu
| 2025-08-13T04:03:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T04:03:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
honggeun/test1-gguf
|
honggeun
| 2025-08-13T04:01:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-13T03:09:12Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** honggeun
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ACECA/lowMvMax_208
|
ACECA
| 2025-08-13T03:42:11Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T15:19:40Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ACECA/lowMvMax_207
|
ACECA
| 2025-08-13T03:41:44Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T15:19:37Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts
|
AmanPriyanshu
| 2025-08-13T03:40:13Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:39:33Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (17 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 17 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~12.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 17 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 17 experts per layer
- **No Retraining**: Direct removal without additional training steps
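As a rough illustration of this selection step, here is a minimal sketch assuming per-layer counts of router activations are available (tensor names and module layout are hypothetical; this is not the actual pruning code):

```python
import torch

def keep_top_experts(activation_counts: torch.Tensor, k: int = 17) -> torch.Tensor:
    """Rank one layer's experts by router activation frequency on domain data
    and return the sorted indices of the top-k experts to retain."""
    return torch.topk(activation_counts, k).indices.sort().values

def prune_moe_layer(router_weight: torch.Tensor, experts: list, keep: torch.Tensor):
    """Slice the routing matrix and drop the unused expert modules.

    router_weight: (num_experts, hidden_dim) router projection of one MoE layer.
    experts: list of per-expert feed-forward modules, indexed like the router rows.
    """
    pruned_router = router_weight[keep]                   # route only to kept experts
    pruned_experts = [experts[i] for i in keep.tolist()]
    return pruned_router, pruned_experts

# Example: hypothetical activation counts for the 32 original experts
counts = torch.randint(0, 1000, (32,))
keep = keep_top_experts(counts, k=17)  # indices of the 17 retained experts
```

Because top-k routing (k = 4) still has 17 candidates to choose from, the routing mechanism itself is unchanged; only its candidate set shrinks.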
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 53.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 17 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 17 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.0b-specialized-science-pruned-moe-only-17-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Photosynthesis – the “making of food” in plants**
| Step | What happens | Key molecules & enzymes | Where it occurs |
|------|--------------|------------------------|----------------|
| 1. **Light absorption** | Chlorophyll‑a, chlorophyll‑b, and accessory pigments in the thylakoid membranes absorb photons (400–700 nm). | • Chlorophyll a & b | Thylakoid membranes of chloroplasts |
| 2. **Excitation of electrons** | Energy excites electrons in chlorophyll‑a → they are passed to the *electron transport chain* (ETC). | • Photosystem II (PSII) → plastoquinone → Photosystem I (PSI) | Thylakoid membrane |
| 3. **Water splitting (photolysis)** | PSII uses light energy to split H₂O → 2 H⁺ + 2 e⁻ + O₂. | • Oxygen‑evolving complex (OEC) | Thylakoid membrane |
| 4. **ATP synthesis** | Electrons travel through ETC, pumping protons into the thylakoid lumen → proton‑gradient drives ATP synthase → ATP. | • ATP synthase | Thylakoid membrane |
| 5. **NADPH formation** | Electrons reduce NADP⁺ to NADPH (via PSI). | • PSI, cytochrome b₆₁ | Thylakoid membrane |
| 6. **Carbon fixation (Calvin cycle)** | In the stroma, CO₂ is fixed into 3‑phosphoglycerate (3‑PG) by the enzyme Rubisco. 3‑PG is then recycled to produce sugars (glucose, fructose). | • Rubisco, GAPDH, PK, etc. | Stroma of chloroplasts |
| 7. **Sugar export** | Glucose and other sugars are transported out of the chloroplast to the cytosol and then to other tissues. | • Hexokinase, sucrose transporters | Cytosol, plasmodesmata |
**Key points**
1. **Energy source** – Light energy drives the entire process.
2. **Electron transport chain** – Generates ATP and NADPH, the “energy currency” for the Calvin cycle.
3. **Water → Oxygen** – Photosynthesis releases O₂
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain greenhouse effect causes, greenhouse gases, absorption of IR, trapping heat, radiative balance.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What causes the greenhouse effect?
The “greenhouse effect” (or “greenhouse‑gas effect”) is the natural warming of the Earth’s surface that results from the presence of certain gases in the atmosphere that can absorb and re‑emit infrared (IR) radiation. The effect is named after the way a greenhouse works: a glass roof lets sunlight (visible light) in, but traps the heat that the plants and the roof emit back as IR radiation.
The main greenhouse gases (GHGs) are:
| Gas | Natural concentration (ppm) | Typical source | Radiative effect |
|------|------------------------------|----------------|----------------|
| Water vapor | 2000–3000 | Evaporation, weather | Strong, but variable |
| Carbon dioxide (CO₂) | 400 | Fossil fuels, deforestation, industry | Strong, long‑term |
| Methane (CH₄) | 1.8 | Agriculture, natural gas, wetlands | Very potent, short‑term |
| Nitrous oxide (N₂O) | 0.3 | Agriculture, livestock, industrial | Potent, short‑term |
| Ozone (O₃) | 0.3 | Photochemical reactions | Strong, short‑term |
| Aerosols (e.g., sulfate) | – | Volcanic, industrial | Generally cooling (anti‑greenhouse) |
The atmosphere is transparent to most of the Sun’s visible and ultraviolet (UV) radiation, so that radiation can reach the surface. Once the surface is heated, it emits long‑wave IR radiation. Greenhouse gases absorb a fraction of that IR and then re‑emit it in all directions. A fraction of the re‑emitted IR escapes to space, but a larger fraction is directed back toward the surface, keeping the surface warmer than it would be if the atmosphere were transparent to IR.
### 2. How does it work? (Radiative transfer)
1. **Solar radiation reaches the Earth**
- The Sun emits a spectrum that peaks in the visible/near‑UV.
- The atmosphere is largely transparent to this part of the spectrum, so it reaches the surface.
2. **Surface absorbs and re‑emits**
- The surface absorbs the incoming solar energy and warms.
- A warm surface emits
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>Need to explain DNA structure: double helix, nucleotides, base pairs, sugar-phosphate backbone, major bases, base pairing rules, major vs minor, major groove, minor groove, hydrogen bonding, base composition, sequence, etc. Function: genetic information storage, replication, transcription, regulation, coding for proteins, etc. Also mention roles in evolution, regulation, epigenetics. Provide details.<|end|><|start|>assistant<|channel|>final<|message|>**DNA – the Double‑Helix Blueprint of Life**
---
## 1. The Physical Structure of DNA
| Feature | What it Is | Why It Matters |
|--------|------------|----------------|
| **Double‑helix** | Two strands of nucleic acid that wind around each other like a ladder. | Gives stability, protects the genetic code, and allows proteins to read the sequence. |
| **Back‑to‑back sugar‑phosphate backbone** | 5‑carbon de‑oxyribose sugars linked by phosphates. | Provides a rigid, uniform scaffold that keeps the bases in the right orientation. |
| **Nucleobases** | Four nitrogenous bases: Adenine (A), Thymine (T), Cytosine (C), Guanine (G). | Carry the “letters” of the genetic alphabet. |
| **Base pairing** | A pairs with T, C pairs with G via hydrogen bonds (2 H‑bonds for A‑T, 3 for C‑G). | Ensures complementary strands can bind tightly and be read accurately. |
| **Major vs. Minor groove** | The A‑T and C‑G pairs create a wide, deep major groove and a shallow, narrow minor groove. | The major groove is the site where proteins (e.g., transcription factors, DNA‑binding proteins) dock. |
| **Sequence direction** | 5′ → 3′ (from 5‑phosphate to 3‑phosphate). | Sets the “handedness” of the helix and the direction of replication/transcription. |
| **Length & flexibility** | A typical eukaryotic chromosome is ~10⁶–10⁹ base pairs; DNA can be a few nanometers to centimeters long. | Allows packing of vast amounts of information in a single cell. |
### Visualizing the Ladder
```
5′-G‑C‑A‑T‑G‑C‑...‑3′
| | | | |
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
TAI-Research/GTC-2-Medium
|
TAI-Research
| 2025-08-13T03:35:09Z | 0 | 1 | null |
[
"text-generation",
"en",
"base_model:TAI-Research/GTC-2-Medium",
"base_model:finetune:TAI-Research/GTC-2-Medium",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-13T00:22:46Z |
---
license: mit
language:
- en
base_model:
- TAI-Research/GTC-2-Medium
pipeline_tag: text-generation
---
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755055623
|
IvanJAjebu
| 2025-08-13T03:28:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:28:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ginic/gender_split_30_female_2_wav2vec2-large-xlsr-53-buckeye-ipa
|
ginic
| 2025-08-13T03:26:02Z | 0 | 0 | null |
[
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2025-08-13T01:24:13Z |
---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---
# About
This model was created to support experiments for evaluating phonetic transcription
with the Buckeye corpus as part of https://github.com/ginic/multipa.
This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific subset of the Buckeye corpus.
For details about specific model parameters, please view the config.json here or
the training scripts in the scripts/buckeye_experiments folder of the GitHub repository.
# Experiment Details
Still training with a total amount of data equal to half the full training data (4000 examples); vary the gender split 30/70, but draw examples from all individuals. Train 5 models for each gender split with the same model parameters but different data seeds.
Goals:
- Determine how differences in the gender split of the training data affect performance
Params to vary:
- percent female (--percent_female) [0.3, 0.7]
- training seed (--train_seed)
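A hypothetical invocation along these lines (the script path and everything beyond the two flags above are assumptions; see the scripts/buckeye_experiments folder of the GitHub repository for the real commands):

```bash
# Hypothetical example only -- the actual script names live in
# scripts/buckeye_experiments of https://github.com/ginic/multipa
python scripts/buckeye_experiments/train.py \
    --percent_female 0.3 \
    --train_seed 42
```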
|
wangwwwww111/trained-model
|
wangwwwww111
| 2025-08-13T03:19:17Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:wangwwwww111/dual_arm_task",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-13T03:13:30Z |
---
datasets: wangwwwww111/dual_arm_task
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
AmanPriyanshu/gpt-oss-6.0b-specialized-science-pruned-moe-only-7-experts
|
AmanPriyanshu
| 2025-08-13T03:19:11Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T03:18:53Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- science
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Science GPT-OSS Model (7 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 7 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for science tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~6.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 7 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Science |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from science evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 7 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 21.9% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to science tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This science-specialized model leverages experts that showed high activation patterns during scientific reasoning tasks from GPQA (physics, chemistry, biology) and MMLU science domains. These experts demonstrate superior performance on complex scientific problem-solving and technical knowledge recall.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for science tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 7 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 7 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.0b-specialized-science-pruned-moe-only-7-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.0b-specialized-science-pruned-moe-only-7-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.0b-specialized-science-pruned-moe-only-7-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.0b-specialized-science-pruned-moe-only-7-experts")
# Generate with the model
messages = [
{"role": "user", "content": "Explain the process of photosynthesis in plants."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on science tasks:
### Example 1
**Question:** Explain the process of photosynthesis in plants.
**Response:**
```
<|channel|>analysis<|message|>The user wants an explanation of the process of photosynthesis in plants. They should provide a detailed explanation, maybe a step of light absorption, energy conversion, etc. They should mention the steps: light absorption, energy transfer, conversion of photons into chemical energy, etc. They might mention the photosolum, the chlorophyll, the electron-phonon, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc. They might mention the steps: absorption of light, conversion of energy into chemical energy, etc.<|end|><|start|>assistant<|channel|>final<|message|>### Overview
Plants grow by converting light into chemical energy – a **photosynthesis**.
The process involves three steps:
1. **Absorption of light** – the plant absorbs photons in the visible spectrum.
2. **Energy transfer** – the photons are converted to chemical energy (the “chemical” form).
3. **Chemical synthesis** – the plant uses the energy to build molecules (e.g., photosolum, chlorophyll, etc.).
Below is a detailed explanation of the photosynthesis, including the mechanisms and the key players.
---
## 1. Light Absorption – “Photabs”
### 1.1. Light Spectrum
- **Visible light** (400–700 nm) is the “visible spectrum” or “visible light.”
- **Near‑infra
```
---
### Example 2
**Question:** What causes the greenhouse effect and how does it work?
**Response:**
```
<|channel|>analysis<|message|>The user wants to ask: "What causes the greenhouse effect and how does it work?" They likely want to explain greenhouse effect: what causes, how it works. They want to explain greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how it works. They want to answer: greenhouse effect: what causes, how
```
---
### Example 3
**Question:** Describe the structure and function of DNA.
**Response:**
```
<|channel|>analysis<|message|>The user wants to explain the structure and function of DNA. They should provide a detailed explanation: structure: nucleic base, double cov, etc. Function: replication, genetic, etc. Provide a concise or more. They should mention the DNA's structure: base, double cov, etc. Provide function: replication, genetic, etc. Provide mention of the DNA's structure: base, double cov, etc. Provide mention of the DNA's function: replication, etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the DNA's function: etc. Provide mention of the
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
xlangai/OpenCUA-32B
|
xlangai
| 2025-08-13T03:18:46Z | 0 | 2 | null |
[
"safetensors",
"opencua",
"VLM",
"Computer-Use-Agent",
"OS-Agent",
"GUI",
"Grounding",
"image-to-text",
"custom_code",
"en",
"dataset:xlangai/AgentNet",
"dataset:xlangai/aguvis-stage1",
"dataset:xlangai/aguvis-stage2",
"dataset:osunlp/UGround-V1-Data",
"arxiv:2508.09123",
"arxiv:2504.07981",
"base_model:Qwen/Qwen2.5-VL-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct",
"license:mit",
"region:us"
] |
image-to-text
| 2025-07-24T08:43:14Z |
---
license: mit
datasets:
- xlangai/AgentNet
- xlangai/aguvis-stage1
- xlangai/aguvis-stage2
- osunlp/UGround-V1-Data
language:
- en
metrics:
- code_eval
- accuracy
base_model:
- Qwen/Qwen2.5-VL-32B-Instruct
pipeline_tag: image-to-text
tags:
- VLM
- Computer-Use-Agent
- OS-Agent
- GUI
- Grounding
---
<h1 style="
font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Helvetica,Arial,sans-serif;
font-size:48px;
font-weight:700;
line-height:1.25;
text-align:center;
margin:0 0 24px;">
OpenCUA: Open Foundations for Computer-Use Agents
</h1>
<div style="
display:flex;
justify-content:center;
gap:12px;
flex-wrap:wrap;
margin-bottom:28px;">
<a href="https://opencua.xlang.ai/" style="
display:inline-block;
padding:8px 24px;
background:#2b2b2b;
color:#ffffff;
border-radius:36px;
text-decoration:none;
font-weight:600;
font-size:16px;">
🌐 Website
</a>
<a href="https://arxiv.org/abs/2508.09123" style="
display:inline-block;
padding:8px 24px;
background:#2b2b2b;
color:#ffffff;
border-radius:36px;
text-decoration:none;
font-weight:600;
font-size:16px;">
📝 Paper
</a>
<a href="https://github.com/xlang-ai/OpenCUA" style="
display:inline-block;
padding:8px 24px;
background:#2b2b2b;
color:#ffffff;
border-radius:36px;
text-decoration:none;
font-weight:600;
font-size:16px;">
💻 Code
</a>
</div>
<div style="max-width:900px;margin:0 auto;">
# Introduction
<div style="
  max-width: 880px;         /* adjust the overall width as needed */
  margin: 0 auto;           /* center the container */
  text-align: justify;      /* key: justify both edges */
  text-justify: inter-word; /* improves justification of English text */
  line-height: 1.6;">
OpenCUA models (OpenCUA-7B and OpenCUA-32B) are end-to-end computer-use foundation models that can produce executable actions in computer environments. They are based on the weights of Qwen2.5-VL-7B-Instruct and Qwen2.5-VL-32B-Instruct.
They demonstrate superior performance across CUA benchmarks. In particular, <b>OpenCUA-32B</b> achieves an average success rate of **34.8%** on [OSWorld-Verified](https://os-world.github.io/),
establishing a new state-of-the-art (SOTA) among open-source models and surpassing OpenAI CUA (GPT-4o). Both models also show strong grounding performance: OpenCUA-32B achieves 59.6% on [OSWorld-G](https://osworld-grounding.github.io/) and 55.3% on [ScreenSpot-Pro](https://arxiv.org/abs/2504.07981).
</div>
### Key Features
- **Superior Computer-Use Capability**: Able to execute multi-step computer-use actions with effective planning and reasoning
- **Multi-OS Support**: Trained on demonstrations across Ubuntu, Windows, and macOS
- **Visual Grounding**: Strong GUI element recognition and spatial reasoning capabilities
- **Multi-Image Context**: Processes a history of up to 3 screenshots for better context understanding
- **Reflective Reasoning**: Enhanced with reflective long Chain-of-Thought that identifies errors and provides corrective reasoning
# Performance
### Online Agent Evaluation
OpenCUA models achieve strong performance on **[OSWorld-Verified](https://os-world.github.io/)**.
OpenCUA-32B achieves the best performance among all open-source models, with an average success rate of 34.8%, outperforming prior baselines by large margins.
It also narrows the gap to proprietary Claude models.
<div align="center">
| **Model** | **15 Steps** | **50 Steps** | **100 Steps** |
|-------------------------------|:--------:|:--------:|:---------:|
| **Proprietary** | | | |
| OpenAI CUA | 26.0 | 31.3 | 31.4 |
| Seed 1.5-VL | 27.9 | — | 34.1 |
| Claude 3.7 Sonnet | 27.1 | 35.8 | 35.9 |
| Claude 4 Sonnet | 31.2 | 43.9 | 41.5 |
| **Open-Source** | | | |
| Qwen 2.5-VL-32B-Instruct | 3.0 | — | 3.9 |
| Qwen 2.5-VL-72B-Instruct | 4.4 | — | 5.0 |
| Kimi-VL-A3B | 9.7 | — | 10.3 |
| UI-TARS-72B-DPO | 24.0 | 25.8 | 27.1 |
| UI-TARS-1.5-7B | 24.5 | 27.3 | 27.4 |
| OpenCUA-7B *(Ours)* | 24.3 | 27.9 | 26.6 |
| **OpenCUA-32B *(Ours)*** | **29.7** | **34.1** | **34.8** |
</div>
*OpenCUA scores are the mean of 3 independent runs.*
### GUI Grounding Performance
<div align="center">
| **Model** | **OSWorld-G** | **ScreenSpot-V2** | **ScreenSpot-Pro** |
|-------|-----------|---------------|----------------|
| Qwen2.5-VL-7B | 31.4 | 88.8 | 27.6 |
| Qwen2.5-VL-32B | 46.5 | 87.0 | 39.4 |
| UI-TARS-72B | 57.1 | 90.3 | 38.1 |
| **OpenCUA-A3B** | 48.6 | 91.4 | 28.5 |
| **OpenCUA-7B** | 45.7 | 88.5 | 23.7 |
| **OpenCUA-2.5-7B** | 55.3 | 92.3 | 50.0 |
| **OpenCUA-2.5-32B** | **59.6** | **93.4** | **55.3** |
</div>
### AgentNetBench (Offline Evaluation)
<div align="center">
| **Model** | **Coordinate Actions** | **Content Actions** | **Function Actions** | **Average** |
|-------|-------------------|-----------------|------------------|---------|
| Qwen2.5-VL-7B | 50.7 | 40.8 | 3.1 | 48.0 |
| Qwen2.5-VL-32B | 66.6 | 47.2 | 41.5 | 64.8 |
| Qwen2.5-VL-72B | 67.2 | 52.6 | 50.5 | 67.0 |
| OpenAI CUA | 71.7 | 57.3 | **80.0** | 73.1 |
| **OpenCUA-2.5-7B** | 79.0 | 62.0 | 44.3 | 75.2 |
| **OpenCUA-2.5-32B** | **81.9** | 66.1 | 55.7 | **79.1** |
</div>
# 🚀 Quick Start
<div style="border-left: 6px solid #f28c28; background: #fff8e6; padding: 12px 16px; margin: 16px 0;">
<strong>⚠️ Important for Qwen-based Models (OpenCUA-7B, OpenCUA-32B):</strong>
To align with our training infrastructure, we have modified the model in two places:
<ul style="margin-top: 8px;">
  <li>1. Multimodal Rotary Position Embedding (M-RoPE) has been replaced with <strong>1D RoPE</strong>.</li>
  <li>2. It uses the same tokenizer and chat template as Kimi-VL.</li>
  <li>Do not use the default transformers and vLLM classes to load the model. The tokenizer and chat template should be kept aligned if you train the models.</li>
</ul>
</div>
## Installation & Download
First, install the required dependencies:
```bash
conda create -n opencua python=3.10
conda activate opencua
pip install -r requirement.txt
```
Download the model weights from Hugging Face:
```python
from huggingface_hub import snapshot_download
snapshot_download(
repo_id="xlangai/OpenCUA-32B",
local_dir="OpenCUA-32B",
local_dir_use_symlinks=False
)
```
## 🎯 GUI Grounding
The following code demonstrates how to use OpenCUA models for GUI grounding tasks:
```python
import base64
import torch
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor
from PIL import Image
import json
def encode_image(image_path: str) -> str:
"""Encode image to base64 string for model input."""
with open(image_path, "rb") as f:
return base64.b64encode(f.read()).decode()
def load_opencua_model(model_path: str):
"""Load OpenCUA model, tokenizer, and image processor."""
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(
model_path,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True
)
image_processor = AutoImageProcessor.from_pretrained(model_path, trust_remote_code=True)
return model, tokenizer, image_processor
def create_grounding_messages(image_path: str, instruction: str):
"""Create chat messages for GUI grounding task."""
system_prompt = (
"You are a GUI agent. You are given a task and a screenshot of the screen. "
"You need to perform a series of pyautogui actions to complete the task."
)
messages = [
{"role": "system", "content": system_prompt},
{
"role": "user",
"content": [
{"type": "image", "image": f"data:image/png;base64,{encode_image(image_path)}"},
{"type": "text", "text": instruction},
],
},
]
return messages
def run_inference(model, tokenizer, image_processor, messages, image_path):
"""Run inference on the model."""
# Prepare text input
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=True
)
input_ids = torch.tensor([input_ids]).to(model.device)
# Prepare image input
image = Image.open(image_path).convert('RGB')
image_info = image_processor.preprocess(images=[image])
pixel_values = torch.tensor(image_info['pixel_values']).to(
dtype=torch.bfloat16, device=model.device
)
grid_thws = torch.tensor(image_info['image_grid_thw'])
# Generate response
with torch.no_grad():
generated_ids = model.generate(
input_ids,
pixel_values=pixel_values,
grid_thws=grid_thws,
max_new_tokens=512,
temperature=0
)
# Decode output
prompt_len = input_ids.shape[1]
generated_ids = generated_ids[:, prompt_len:]
output_text = tokenizer.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
return output_text
# Example usage
model_path = "OpenCUA/OpenCUA-32B" # or other model variants
image_path = "screenshot.png"
instruction = "Click on the submit button"
# Load model
model, tokenizer, image_processor = load_opencua_model(model_path)
# Create messages and run inference
messages = create_grounding_messages(image_path, instruction)
result = run_inference(model, tokenizer, image_processor, messages, image_path)
print("Model output:", result)
```
<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<em>Expected result:</em> <code>pyautogui.click(x=1432, y=344)</code>
</div>
## 🖥️ Computer Use Agent
**[OpenCUAAgent](https://github.com/xlang-ai/OSWorld/blob/main/mm_agents/opencua_agent.py)** is developed in the [OSWorld](https://github.com/xlang-ai/OSWorld) environment on top of OpenCUA models. It iteratively perceives the environment via screenshots, produces reflective long chain-of-thought reasoning as an inner monologue, and predicts the next action to execute. By default, OpenCUAAgent uses 3 images of history and the L2 CoT format.
Command for running OpenCUA-7B and OpenCUA-32B in OSWorld:
```bash
python run_multienv_opencua.py \
--headless \
--observation_type screenshot \
--model OpenCUA-32B \
--result_dir ./results --test_all_meta_path evaluation_examples/test_all_no_gdrive.json \
--max_steps 100 \
--num_envs 30 \
--coordinate_type qwen25
```
<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<em>Currently we only support Hugging Face inference. We are implementing vLLM support for OpenCUA models. Please stay tuned.</em>
</div>
## Important Notes on Coordinate Systems
<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<ul style="margin: 0;">
<li><strong><code>OpenCUA/OpenCUA-A3B</code></strong> – Relative coordinates <em>(not supported in this code)</em></li>
<li><strong><code>OpenCUA/OpenCUA-Qwen2-7B</code></strong> – Relative coordinates</li>
<li><strong><code>OpenCUA/OpenCUA-7B</code></strong> – Absolute coordinates</li>
<li><strong><code>OpenCUA/OpenCUA-32B</code></strong> – Absolute coordinates</li>
</ul>
</div>
**OpenCUA models use different coordinate systems depending on the base model:**
- **OpenCUA-Qwen2-7B**: Outputs **relative coordinates** (0.0 to 1.0 range)
```python
# Example output: pyautogui.click(x=0.5, y=0.3)
# x=0.5 means 50% from left edge, y=0.3 means 30% from top edge
# Convert to absolute coordinates:
def qwen2_relative_to_absolute(rel_x, rel_y, original_width, original_height):
abs_x = int(rel_x * original_width)
abs_y = int(rel_y * original_height)
return abs_x, abs_y
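# Illustrative check (our own example values) on a 1920x1080 screenshot:
# qwen2_relative_to_absolute(0.5, 0.3, 1920, 1080) -> (960, 324)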
```
- **OpenCUA-7B and OpenCUA-32B** (Qwen2.5-based): Output **absolute coordinates** after smart resize
```python
# Example output: pyautogui.click(x=960, y=324)
# These are coordinates on the smart-resized image, not the original image
# Convert to original image coordinates:
# Please refer to the smart_resize function in: https://github.com/huggingface/transformers/blob/67ddc82fbc7e52c6f42a395b4a6d278c55b77a39/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L55
def qwen25_smart_resize_to_absolute(model_x, model_y, original_width, original_height):
# First, calculate the smart-resized dimensions
resized_height, resized_width = smart_resize(original_height, original_width, factor = 28, min_pixels = 3136, max_pixels = 12845056)
# Convert model output to relative coordinates on original image
rel_x = model_x / resized_width
rel_y = model_y / resized_height
# Then convert to absolute coordinates on original image
abs_x = int(rel_x * original_width)
abs_y = int(rel_y * original_height)
return abs_x, abs_y
```
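  For illustration, here is a hypothetical end-to-end helper (the regular expression and function name are our own, not part of the model API) that parses a predicted `pyautogui.click(...)` call and maps its coordinates back to the original screenshot:
  ```python
  import re

  def parse_and_convert(model_output: str, original_width: int, original_height: int):
      """Parse a predicted pyautogui.click(...) call and map its coordinates
      back to the original image via qwen25_smart_resize_to_absolute above."""
      match = re.search(r"pyautogui\.click\(x=(\d+),\s*y=(\d+)\)", model_output)
      if match is None:
          raise ValueError(f"No click action found in: {model_output!r}")
      model_x, model_y = int(match.group(1)), int(match.group(2))
      return qwen25_smart_resize_to_absolute(model_x, model_y,
                                             original_width, original_height)

  # e.g. for a 2560x1440 screenshot:
  # parse_and_convert("pyautogui.click(x=960, y=324)", 2560, 1440)
  ```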
<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<strong>Understanding Smart Resize for Qwen2.5-based Models:</strong>
<p style="margin: 8px 0 0;">
The Qwen2.5-VL models use a “smart resize” preprocessing that maintains aspect ratio while fitting within pixel constraints.
For coordinate conversion, you need the smart resize function from the
<a href="https://github.com/QwenLM/Qwen2.5-VL/blob/d2240f11656bfe404b9ba56db4e51cd09f522ff1/qwen-vl-utils/src/qwen_vl_utils/vision_process.py#L60">
official Qwen2.5-VL implementation</a>.
</p>
</div>
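For reference, a minimal sketch of the smart-resize computation (adapted from the linked Qwen2.5-VL implementation, omitting its input validation; treat the official source as authoritative):
```python
import math

def smart_resize(height: int, width: int, factor: int = 28,
                 min_pixels: int = 3136, max_pixels: int = 12845056):
    """Round dimensions to multiples of `factor` while keeping the total pixel
    count within [min_pixels, max_pixels] and roughly preserving aspect ratio."""
    h_bar = round(height / factor) * factor
    w_bar = round(width / factor) * factor
    if h_bar * w_bar > max_pixels:
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar
```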
# TODO
## vLLM Support
We are actively working with the vLLM team to add support for OpenCUA models.
**Workaround:** For now, please use the standard transformers library as shown in the examples above. We will update this section once vLLM support becomes available.
## Training Code
OpenCUA models were developed on the training infrastructure of the Kimi Team. We are also developing a training pipeline based on open-source infrastructure.
## License
This project is licensed under the MIT License - see the LICENSE file in the root folder for details.
## Research Use and Disclaimer
OpenCUA models are intended for **research and educational purposes only**.
### Prohibited Uses
- The model may **not** be used for any purpose or activity that violates applicable laws or regulations in any jurisdiction
- Use for illegal, unethical, or harmful activities is strictly prohibited
### Disclaimer
- The authors, contributors, and copyright holders are **not responsible** for any illegal, unethical, or harmful use of the Software, nor for any direct or indirect damages resulting from such use
- Use of the "OpenCUA" name, logo, or trademarks does **not** imply any endorsement or affiliation unless separate written permission is obtained
- Users are solely responsible for ensuring their use complies with applicable laws and regulations
## Citation
If you use OpenCUA models in your research, please cite our work:
```bibtex
@misc{wang2025opencuaopenfoundationscomputeruse,
title={OpenCUA: Open Foundations for Computer-Use Agents},
author={Xinyuan Wang and Bowen Wang and Dunjie Lu and Junlin Yang and Tianbao Xie and Junli Wang and Jiaqi Deng and Xiaole Guo and Yiheng Xu and Chen Henry Wu and Zhennan Shen and Zhuokai Li and Ryan Li and Xiaochuan Li and Junda Chen and Boyuan Zheng and Peihang Li and Fangyu Lei and Ruisheng Cao and Yeqiao Fu and Dongchan Shin and Martin Shin and Jiarui Hu and Yuyan Wang and Jixuan Chen and Yuxiao Ye and Danyang Zhang and Dikang Du and Hao Hu and Huarong Chen and Zaida Zhou and Yipu Wang and Heng Wang and Diyi Yang and Victor Zhong and Flood Sung and Y. Charles and Zhilin Yang and Tao Yu},
year={2025},
eprint={2508.09123},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.09123},
}
```
</div>
|
phospho-app/Selinaliu1030-gr00t-egg_0806-ajaw3
|
phospho-app
| 2025-08-13T03:13:56Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"gr00t",
"robotics",
"dataset:Selinaliu1030/egg_0806",
"region:us"
] |
robotics
| 2025-08-13T02:05:08Z |
---
datasets: Selinaliu1030/egg_0806
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 500, in wait_for
return fut.result()
^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1117, in read_output
async for line in process.stdout:
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 765, in __anext__
val = await self.readline()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 566, in readline
line = await self.readuntil(sep)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 658, in readuntil
await self._wait_for_data('readuntil')
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/phosphobot/am/gr00t.py", line 1128, in run_gr00t_training
await asyncio.wait_for(read_output(), timeout=timeout_seconds)
File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 502, in wait_for
raise exceptions.TimeoutError() from exc
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/src/helper.py", line 166, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1296, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1133, in run_gr00t_training
raise TimeoutError(
TimeoutError: Training process exceeded timeout of 3600 seconds. Please consider lowering the number of epochs and/or batch size.
```
## Training parameters:
- **Dataset**: [Selinaliu1030/egg_0806](https://huggingface.co/datasets/Selinaliu1030/egg_0806)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755054258
|
IvanJAjebu
| 2025-08-13T03:05:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:05:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755054005
|
IvanJAjebu
| 2025-08-13T03:01:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T03:01:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lautan/blockassist-bc-gentle_patterned_goat_1755052515
|
lautan
| 2025-08-13T03:00:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:59:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
John6666/vete-starbliss-ill-v2-sdxl
|
John6666
| 2025-08-13T02:57:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fingers",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-13T02:50:29Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fingers
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1830445/vete-starbliss-ill?modelVersionId=2107790).
This model was created by [Vetehine](https://civitai.com/user/Vetehine).
|
sakuhara/blockassist-bc-marine_clawed_komodo_1755053576
|
sakuhara
| 2025-08-13T02:54:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine clawed komodo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:54:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine clawed komodo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ACECA/lowMvMax_202
|
ACECA
| 2025-08-13T02:53:04Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T15:19:34Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
judsfdf/copiaestiloperoesfactualperomuyconsiso_trnsferencia_jsu
|
judsfdf
| 2025-08-13T02:50:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T02:49:53Z |
---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** judsfdf
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts
|
AmanPriyanshu
| 2025-08-13T02:45:57Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:45:10Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (25 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 25 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~16.7B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 25 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 25 experts per layer
- **No Retraining**: Direct removal without additional training steps (see the sketch below)
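As a toy illustration (our own sketch with made-up shapes and random tensors, not the released pruning code), selecting and slicing the experts of a single MoE layer might look like:
```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for measured router statistics: how often each of the 32 experts
# in one layer was selected across the evaluation benchmarks.
activation_counts = rng.integers(0, 10_000, size=32)

# Keep the 25 most frequently activated experts (sorted to preserve ordering).
keep = np.sort(np.argsort(activation_counts)[-25:])

# Pruning then amounts to slicing the per-expert weights and the router
# projection, so the routing softmax runs over the survivors only.
d_model, d_ff = 2880, 2880                                  # toy dimensions
expert_weights = rng.standard_normal((32, d_ff, d_model))   # toy expert params
router_weight = rng.standard_normal((32, d_model))          # toy router rows

pruned_experts = expert_weights[keep]   # (25, d_ff, d_model)
pruned_router = router_weight[keep]     # (25, d_model)
print(pruned_experts.shape, pruned_router.shape)
```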
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 78.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 25 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 25 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-16.7b-specialized-all-pruned-moe-only-25-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, types, how works.<|end|><|start|>assistant<|channel|>final<|message|>## What is Artificial Intelligence?
Artificial Intelligence (AI) is the field of computer science that seeks to build systems capable of performing tasks that normally require human intelligence. These tasks include:
| Task | Typical Human Skill | AI Approach |
|------|---------------------|-------------|
| Recognizing faces | Visual perception | Computer vision, deep learning |
| Understanding language | Natural language comprehension | Natural‑language processing (NLP) |
| Playing chess | Strategic planning | Search algorithms, reinforcement learning |
| Driving a car | Perception + decision making | Sensor fusion + control systems |
| Diagnosing disease | Pattern recognition + medical knowledge | Machine learning + knowledge bases |
In short, **AI is about giving machines the ability to learn from data, reason about the world, and act autonomously or semi‑autonomously.**
---
## How AI Works – The Core Ingredients
| Component | What it does | Typical Techniques |
|-----------|--------------|--------------------|
| **Data** | The raw material from which models learn | Images, text, sensor streams, logs |
| **Representation** | How we encode knowledge | Feature vectors, embeddings, symbolic rules |
| **Learning** | Adjusting a model to fit data | Supervised, unsupervised, reinforcement learning |
| **Inference** | Using the model to make predictions or decisions | Forward/backward propagation, Monte‑Carlo search |
| **Actuation** | Executing actions in the real world | Robotics, API calls, recommendation engines |
| **Feedback** | Updating the system based on outcomes | Online learning, human‑in‑the‑loop |
Below is a step‑by‑step view of a typical AI pipeline:
1. **Collect Data**
*Example:* A self‑driving car collects LiDAR, camera, GPS, and IMU data.
2. **Pre‑process & Feature‑Engineer**
*Example:* Convert raw sensor streams into a set of numerical features (e.g., distance to nearest obstacle).
3. **Choose a Model**
*Example:* A convolutional neural network (CNN) for image classification, a recurrent neural network (RNN) for time‑series, or a transformer for language.
4. **Train the Model**
*Example:* Use stochastic gradient descent to minimize a loss function (e.g., cross‑entropy for classification).
5. **Validate & Test**
*Example:* Evaluate on
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need simple explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain, everyday language**
---
### 1. Two main ideas
| **Special Relativity** | **General Relativity** |
|------------------------|------------------------|
| Deals with objects moving at constant speeds (especially near the speed of light). | Deals with gravity and how it bends space and time. |
---
### 2. Special Relativity (1905)
1. **Speed of light is the same for everyone**
- No matter how fast you’re moving, light always travels at the same speed (≈ 300 000 km/s).
- This is like a “speed limit” that can’t be broken.
2. **Time and space are not absolute**
- If you’re moving fast, time for you ticks slower compared to someone standing still.
- Lengths shrink in the direction of motion.
- Think of a moving train: a clock on the train runs slower, and a ruler on the train looks a bit shorter to an outside observer.
3. **Mass and energy are interchangeable**
- The famous equation \(E = mc^2\) says that a tiny amount of mass can be converted into a huge amount of energy (because \(c^2\) is enormous).
**Why it matters:**
- Explains why GPS satellites need tiny corrections.
- Explains why particles in accelerators can reach enormous energies.
- Gives the foundation for modern physics (quantum field theory, etc.).
---
### 3. General Relativity (1915)
1. **Gravity is not a “force” but a curvature of space‑time**
- Imagine a heavy ball on a rubber sheet. The sheet sags around the ball.
- Smaller balls rolling nearby follow curved paths because the sheet is bent.
- In the universe, massive objects (planets, stars, black holes) bend the “fabric” of space‑time, and other objects move along those curves.
2. **Time runs slower where gravity is stronger**
- Near a massive planet, clocks tick a little slower than clocks far away.
- This is why GPS satellites (far from Earth’s gravity) run a tiny bit faster than clocks on the ground.
3. **Light follows the curves**
- Light rays bend around massive objects.
- This is why stars appear slightly shifted when their
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need to explain democratic functioning.<|end|><|start|>assistant<|channel|>final<|message|>**Democracy in Modern Governments: How It Functions Today**
Modern democracies are built on a set of shared principles—rule of law, political pluralism, and the protection of individual rights—but the exact mechanisms that bring those principles to life vary widely from country to country. Below is a concise, step‑by‑step overview of the key components that make a contemporary democratic system work, followed by a quick comparison of the most common models.
---
## 1. The Core Pillars
| Pillar | What It Means | Typical Institutional Forms |
|--------|---------------|------------------------------|
| **Popular Sovereignty** | The people are the ultimate source of political authority. | Universal, free, and secret elections; referenda; citizen‑initiated legislation. |
| **Rule of Law** | All actions by the state are bound by law, and laws are applied equally. | Independent judiciary; constitutional courts; legal codes that protect rights. |
| **Political Pluralism** | Multiple parties and viewpoints can compete for power. | Multi‑party systems; proportional or mixed electoral systems; campaign finance regulations. |
| **Civil Liberties & Human Rights** | Freedom of expression, assembly, religion, and privacy. | Constitutional guarantees; ombudsman offices; human‑rights commissions. |
| **Accountability & Transparency** | Public officials must explain and justify their actions. | Freedom‑of‑information laws; open‑budget systems; anti‑corruption agencies. |
---
## 2. The Everyday Mechanics
| Mechanism | How It Works | Typical Examples |
|-----------|--------------|------------------|
| **Elections** | Citizens vote for representatives or directly for policy. | Presidential, parliamentary, or local elections; proportional representation; ranked‑choice voting. |
| **Legislative Process** | Laws are drafted, debated, amended, and passed by elected bodies. | Bicameral parliaments; committee hearings; public consultations. |
| **Executive Function** | The elected head of state (president, prime minister) implements laws. | Cabinet appointments; executive orders; policy agendas. |
| **Judicial Review** | Courts interpret laws and can strike down unconstitutional acts. | Constitutional courts; appellate courts; judicial oversight of administrative actions. |
| **Civil Society & Media** | NGOs, unions, and the press hold the government accountable. | Freedom of the press; watchdog NGOs; public protests. |
| **Checks &
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
ACECA/lowMvMax_197
|
ACECA
| 2025-08-13T02:44:19Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T15:19:31Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
zstanjj/HierSearch-Local-Agent
|
zstanjj
| 2025-08-13T02:42:15Z | 4 | 1 | null |
[
"safetensors",
"qwen2",
"biology",
"finance",
"text-generation-inference",
"en",
"zh",
"arxiv:2508.08088",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:mit",
"region:us"
] | null | 2025-07-29T03:27:20Z |
---
license: mit
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-7B-Instruct
tags:
- biology
- finance
- text-generation-inference
---
## Model Information
We release agent model used in **HierSearch: A Hierarchical Enterprise Deep Search Framework Integrating Local and Web Searches**.
<p align="left">
Useful links: 📝 <a href="https://arxiv.org/abs/2508.08088" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/papers/2508.08088" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/plageon/HierSearch" target="_blank">Github</a>
</p>
1. We explore the deep search framework in multi-knowledge-source scenarios, propose a hierarchical agentic paradigm, and train it with HRL;
2. We identify drawbacks of naive information transmission among deep search agents and develop a knowledge refiner suitable for multi-knowledge-source scenarios;
3. Our proposed approach for reliable and effective deep search across multiple knowledge sources outperforms existing baselines and the flat-RL solution in various domains.
🌹 If you use this model, please ✨star our **[GitHub repository](https://github.com/plageon/HierSearch)** or upvote our **[paper](https://huggingface.co/papers/2508.08088)** to support us. Your star means a lot!
|
rmdhirr/llama-dpo-43
|
rmdhirr
| 2025-08-13T02:35:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Llama-3.2-11B-Vision-Instruct",
"dpo",
"lora",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.2-11B-Vision-Instruct",
"region:us"
] |
text-generation
| 2025-08-13T02:33:14Z |
---
base_model: unsloth/Llama-3.2-11B-Vision-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Llama-3.2-11B-Vision-Instruct
- dpo
- lora
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1755051895
|
hobson123
| 2025-08-13T02:31:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:31:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rene-contango/cc8054a8-7d65-45d1-b554-34bfc8d8d140
|
rene-contango
| 2025-08-13T02:30:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T20:24:22Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
model_name: cc8054a8-7d65-45d1-b554-34bfc8d8d140
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for cc8054a8-7d65-45d1-b554-34bfc8d8d140
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rene-contango/cc8054a8-7d65-45d1-b554-34bfc8d8d140", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ElToro2602/blockassist-bc-raging_prehistoric_chameleon_1755051448
|
ElToro2602
| 2025-08-13T02:18:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging prehistoric chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:18:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging prehistoric chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huyydangg/thuvienphapluat_embedding_v4
|
huyydangg
| 2025-08-13T02:12:41Z | 180 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:162512",
"loss:MatryoshkaLoss",
"loss:CachedMultipleNegativesSymmetricRankingLoss",
"vi",
"arxiv:1908.10084",
"arxiv:2205.13147",
"base_model:bkai-foundation-models/vietnamese-bi-encoder",
"base_model:finetune:bkai-foundation-models/vietnamese-bi-encoder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-06T06:45:24Z |
---
language:
- vi
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:162512
- loss:MatryoshkaLoss
- loss:CachedMultipleNegativesSymmetricRankingLoss
base_model: bkai-foundation-models/vietnamese-bi-encoder
widget:
- source_sentence: Quy_định về sử_dụng hè phố
sentences:
- Có văn_bản nào quy_định cấm cho những gia_đình nhà mặt đường_phố làm rạp để tổ_chức
làm_ma , đám_cưới không ?
- Nội_dung quản_lý nhà_nước về quốc_phòng trên không_gian mạng và công_nghệ thông_tin
là gì và được quy_định ở văn_bản nào ?
- Lấy cắp thông_tin của người khác đi vay tiền bị phạt thế_nào ?
- source_sentence: TPHCM trả_lời về việc hỗ_trợ gia_đình ông Phan Trọng_Đồng
sentences:
- Doanh_nghiệp tôi thành_lập tháng 10 / 2022 , hoạt_động xuất_khẩu 100 % . Đến tháng
3 , số thuế GTGT đầu_vào là 2 tỷ đồng . Vậy_doanh_nghiệp có đủ điều_kiện hoàn
thuế theo quy_định của Luật thuế GTGT không ? Theo quy_định tại Điều 18 Thông_tư
219 yêu_cầu doanh_nghiệp phải hoạt_động 12 tháng liên_tục , nhưng theo Thông_tư
130 sửa_đổi thì không nhắc đến điều_kiện này nữa . Vậy_liệu doanh_nghiệp có đủ
điều_kiện hoàn thuế không ? Luật thuế 2016 sửa_đổi đã bác_bỏ điều 13 tại Luật
thuế 2013 , vậy có đúng là doanh_nghiệp vẫn đủ điều_kiện hoàn thuế không ?
- 'Tôi mắc COVID - 19 ( F0 ) vào tháng 11 / 2021 và có nhận được thông_báo từ UBND
phường về việc hỗ_trợ F0 , F1 . Tháng 12 / 2021 , tôi đã nộp đầy_đủ hồ_sơ cho
cán_bộ phường và có ký nhận hồ_sơ . Đến nay , tôi đã nhiều lần đến UBND phường
hỏi về việc hỗ_trợ , nhưng đều nhận được câu trả_lời là chưa có . Tôi đề_nghị
cơ_quan_chức_năng có câu trả_lời về chính_sách hỗ_trợ cho F0 , F1 . UBND Phường
19 , quận Bình_Thạnh , TPHCM trả_lời vấn_đề này như sau : Ông Phan_Trọng Đồng
có nộp hồ_sơ ( hỗ_trợ đối_tượng F1 , F0 phải điều_trị , cách_ly y_tế đủ điều_kiện
nhận hỗ_trợ theo Nghị_quyết số68 / NQ - CP , Quyết_định số23 / 2021 / QĐ - TTg
, Quyết_định số33 / 2021 / QĐ - TTg ) tại UBND Phường 19 . UBND Phường 19 đã tổng_hợp
hồ_sơ trình UBND quận thẩm_định , phê_duyệt theo quy_định . Khi nào được cấp kinh_phí
hỗ_trợ từ cơ_quan có thẩm_quyền , UBND phường sẽ thông_tin đến ông Đồng và nhân_dân
trên địa_bàn phường sớm nhất có_thể .'
- Một giáo_viên được trường phân_công dạy môn tin_học nhưng hè năm_học 2020 - 2021
có dạy thêm 01 nhóm học_sinh " Môn toán " và thu học_phí trọn_gói đến khi thi
tốt_nghiệp lớp 12 năm_học 2021 - 2022 là 2.000.000 đ / hs . Vậy_giáo_viên này
có vi_phạm không , nếu có thì nội_dung vi_phạm là gì , hướng xử_lý như thế_nào
? Nếu là đảng_viên thì mặt Đảng xử_lý ra sao ? Trước đó hiệu_trưởng đã có thông_báo
trên hội_đồng sư_phạm trường là không cho dạy thêm dưới mọi hình_thức .
- source_sentence: "\n \" Quy_định về thời_hạn và hiệu_lực giấy chứng_nhận đủ điều_kiện\
\ sản_xuất kinh_doanh hóa_chất có điều_kiện trong lĩnh_vực công_nghiệp \" , \n\
\ \" Thủ_tục bổ_sung danh_mục_hóa_chất của doanh_nghiệp sau khi được cấp giấy\
\ chứng_nhận đủ điều_kiện sản_xuất kinh_doanh hóa_chất có điều_kiện \" \n"
sentences:
- "Điều 226 Bộ_luật hình_sự 2015 số 100 / 2015 / QH13 mới nhất có nội_dung như sau\
\ : \n \n Điều 226 . Tội xâm_phạm quyền sở_hữu công_nghiệp \n 1 . Người nào cố_ý\
\ xâm_phạm quyền sở_hữu công_nghiệp đối_với nhãn_hiệu hoặc chỉ_dẫn địa_lý đang\
\ được bảo_hộ tại Việt_Nam , thu lợi bất_chính từ 100.000.000 đồng đến dưới 300.000.000\
\ đồng hoặc gây thiệt_hại cho chủ sở_hữu nhãn_hiệu hoặc chỉ_dẫn địa_lý từ 200.000.000\
\ đồng đến dưới 500.000.000 đồng hoặc hàng_hóa vi_phạm trị_giá từ 200.000.000\
\ đồng đến dưới 500.000.000 đồng , thì bị phạt tiền từ 50.000.000 đồng đến 500.000.000\
\ đồng hoặc phạt cải_tạo không giam_giữ đến 03 năm . \n 2 . Phạm_tội thuộc một\
\ trong các trường_hợp sau đây , thì bị phạt tiền từ 500.000.000 đồng đến 1.000.000.000\
\ đồng hoặc phạt tù từ 06 tháng đến 03 năm : \n a ) Có tổ_chức ; \n b ) Phạm_tội\
\ 02 lần trở lên ; \n c ) Thu_lợi bất_chính 300.000.000 đồng trở lên ; \n d )\
\ Gây thiệt_hại cho chủ sở_hữu nhãn_hiệu hoặc chỉ_dẫn địa_lý 500.000.000 đồng\
\ trở lên ; \n đ ) Hàng_hóa vi_phạm trị_giá 500.000.000 đồng trở lên . \n 3 .\
\ Người phạm_tội còn có_thể bị phạt tiền từ 20.000.000 đồng đến 200.000.000 đồng\
\ , cấm đảm_nhiệm chức_vụ , cấm hành_nghề hoặc làm công_việc nhất_định từ 01 năm\
\ đến 05 năm . \n 4 . Pháp_nhân thương_mại phạm_tội quy_định tại Điều này , thì\
\ bị phạt như sau : \n a ) Pháp_nhân thương_mại thực_hiện_hành_vi quy_định tại\
\ khoản 1 Điều này , đã bị xử_phạt vi_phạm hành_chính về hành_vi này hoặc đã bị\
\ kết_án về tội này , chưa được xóa_án tích mà còn vi_phạm , thì bị phạt tiền\
\ từ 500.000.000 đồng đến 2.000.000.000 đồng ; \n b ) Phạm_tội thuộc trường_hợp\
\ quy_định tại khoản 2 Điều này , thì bị phạt tiền từ 2.000.000.000 đồng đến 5.000.000.000\
\ đồng hoặc đình_chỉ hoạt_động có thời_hạn từ 06 tháng đến 02 năm ; \n c ) Pháp_nhân\
\ thương_mại còn có_thể bị phạt tiền từ 100.000.000 đồng đến 500.000.000 đồng\
\ , cấm kinh_doanh , cấm hoạt_động trong một_số lĩnh_vực nhất_định hoặc cấm huy_động\
\ vốn từ 01 năm đến 03 năm ."
- Em cho chị công_văn nào của Bộ y_tế về quy_định xin giấy_phép cho mặt_hàng nhập_khẩu
khẩu_trang nha .
- Có phải trước đây giấy chứng_nhận đủ điều_kiện sản_xuất kinh_doanh hóa_chất có
điều_kiện trong lĩnh_vực công_nghiệp có_hạn 5 năm nhưng từ năm 2022 giấy chứng_nhận
này sẽ chỉ cấp 1 lần và sử_dụng mãi theo vòng đời của doanh_nghiệp đúng không
? Khi có phát_sinh danh_mục_hóa_chất mới , doanh_nghiệp xin bổ_sung như thế_nào
?
- source_sentence: Dự_án điều_chỉnh tổng mức đầu_tư nhưng không điều_chỉnh giấy chứng_nhận
đầu_tư mà chỉ có quyết_định phê_duyệt điều_chỉnh quy_hoạch thì có đúng quy_định
không ?
sentences:
- 'Căn_cứ theo văn_bản pháp_luật nào quy_định về giá_trị của công_trình bao_nhiêu
là phải thực_hiện đấu_thầu công_khai ? Việc nhà_thầu thi_công trúng_thầu có cần
hồ_sơ năng_lực hay không ? Ví_dụ : Công_trình mục_đích xây_dựng nông_thôn mới
ở địa_phương thuộc xã , lắp_đặt hệ_thống đèn_điện chiếu sáng ở đường nông_thôn
.'
- "Điều 13 . Báo tin , xử_lý tin báo và lập hồ_sơ vụ_việc về sự_cố giao_thông đường_sắt\
\ 1 . Doanh_nghiệp kinh_doanh kết_cấu_hạ_tầng đường_sắt quy_định việc báo tin\
\ , xử_lý tin báo , lập Hồ_sơ vụ_việc , giải_quyết sự_cố giao_thông đường_sắt\
\ xảy ra trên đường_sắt quốc_gia . \n 2 . Chủ sở_hữu đường_sắt chuyên_dùng quy_định\
\ việc báo tin , xử_lý tin báo , lập Hồ_sơ vụ_việc , giải_quyết sự_cố giao_thông\
\ đường xảy ra trên đường_sắt chuyên_dùng ."
- "Nhờ bạn tư_vấn giúp mình trường_hợp : Về việc xin cấp giấy chứng_nhận đầu_tư\
\ điều_chỉnh : \n \n Công_ty đã được cấp giấy chứng_nhận đầu_tư lần đầu là 80\
\ tỷ đồng . Sau đó có thực_hiện điều_chỉnh tổng mức đầu_tư lần 2 lên 153 tỷ đồng\
\ và điều_chỉnh lần 3 xuống 140 tỷ đồng . Tuy_nhiên , phía công_ty cho biết họ\
\ không thực_hiện xin cấp giấy chứng_nhận đầu_tư điều_chỉnh mà chỉ có quyết_định\
\ phê_duyệt điều_chỉnh quy_hoạch 1 / 500 về thay_đổi tổng mức đầu_tư \n Hỏi :\
\ việc khách_hàng không xin giấy chứng_nhận đầu_tư điều_chỉnh có đúng không .\
\ Với hồ_sơ hiện_tại có đủ tính pháp_lý không . \n Văn_bản pháp_luật nào quy_định\
\ về việc phải thực_hiện xin giấy chứng_nhận đầu_tư điều_chỉnh"
- source_sentence: Ngân_hàng hợp_tác_xã có nhu_cầu thành_lập chi_nhánh để làm gì ?
sentences:
- Vừa_rồi Tôi và em_trai có ra vịnh gần nhà và có đánh_bắt khoảng trên 20kg cá_bỗng
, và đã bị lập biên_bản về hành_vi này anh_chị cho tôi hỏi trường_hợp này gia_đình
tôi sẽ bị xử_phạt như thế_nào ? Tôi cảm_ơn rất nhiều .
- Khi thanh_toán tiền_lương cho người lao_động bảng thanh_toán tiền_lương có phải
lấy chữ_ký của người lao_động không ?
- Thông_tư 09 / 2018 / TT - NHNN quy_định về mạng_lưới hoạt_động của tổ_chức tín_dụng
là hợp_tác_xã do Ngân_hàng Nhà_nước Việt_Nam ban_hành Điều 10 Điều_kiện thành_lập
chi_nhánh Khoản 1 Có nhu_cầu thành_lập chi_nhánh để phục_vụ các quỹ tín_dụng nhân_dân
trên địa_bàn dự_kiến thành_lập chi_nhánh .
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: bkai-fine-tuned-legal
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.4284237471694398
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.592842374716944
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6512257556365069
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7233435069410259
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4284237471694398
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19761412490564798
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.13024515112730137
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0723343506941026
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4284237471694398
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.592842374716944
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6512257556365069
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7233435069410259
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5720350375528778
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5240482168056819
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5316054651429374
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.42704538741754455
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5921531948409964
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6504381214925667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7216205572511568
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.42704538741754455
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1973843982803321
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1300876242985133
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07216205572511569
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.42704538741754455
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5921531948409964
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6504381214925667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7216205572511568
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.570587688327153
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.522662492518264
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5302971341510442
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.42010436152407205
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5870828000393817
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6473860391847986
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7164517081815497
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.42010436152407205
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1956942666797939
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12947720783695973
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07164517081815495
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.42010436152407205
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5870828000393817
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6473860391847986
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7164517081815497
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5648933784570224
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.51674314453556
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.524464620400421
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.41404942404253225
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.576498966230186
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6385251550654721
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7102490892980211
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.41404942404253225
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19216632207672868
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12770503101309444
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07102490892980211
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.41404942404253225
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.576498966230186
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6385251550654721
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7102490892980211
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5578687503059883
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5095590499006912
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5172301211992038
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.39829674116372943
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5587279708575367
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6187358471989761
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6914935512454465
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.39829674116372943
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18624265695251221
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1237471694397952
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06914935512454463
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.39829674116372943
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5587279708575367
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6187358471989761
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6914935512454465
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5402002340134702
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.49229386332985225
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5004018695881347
name: Cosine Map@100
---
# bkai-fine-tuned-legal
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** vi
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'RobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Ngân_hàng hợp_tác_xã có nhu_cầu thành_lập chi_nhánh để làm gì ?',
'Thông_tư 09 / 2018 / TT - NHNN quy_định về mạng_lưới hoạt_động của tổ_chức tín_dụng là hợp_tác_xã do Ngân_hàng Nhà_nước Việt_Nam ban_hành Điều 10 Điều_kiện thành_lập chi_nhánh Khoản 1 Có nhu_cầu thành_lập chi_nhánh để phục_vụ các quỹ tín_dụng nhân_dân trên địa_bàn dự_kiến thành_lập chi_nhánh .',
'Vừa_rồi Tôi và em_trai có ra vịnh gần nhà và có đánh_bắt khoảng trên 20kg cá_bỗng , và đã bị lập biên_bản về hành_vi này anh_chị cho tôi hỏi trường_hợp này gia_đình tôi sẽ bị xử_phạt như thế_nào ? Tôi cảm_ơn rất nhiều .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.8501, -0.0413],
# [ 0.8501, 1.0000, -0.0990],
# [-0.0413, -0.0990, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
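For reference, here is a minimal sketch of running this evaluator yourself. The `queries`, `corpus`, and `relevant_docs` mappings are placeholders you would build from your own data, and the model id is the same placeholder used in the usage section above:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id

# Placeholder data: map ids to texts, and each query id to its relevant doc ids.
queries = {"q1": "Làm thêm có phải đóng BHXH không ?"}
corpus = {"d1": "Luật Bảo_hiểm xã_hội 2014 Điều 85 ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs, truncate_dim=768, name="dim_768"
)
results = evaluator(model)  # dict of accuracy/precision/recall/NDCG/MRR/MAP
print(results)
```
The table below reports the metrics obtained at 768 dimensions.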
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.4284 |
| cosine_accuracy@3 | 0.5928 |
| cosine_accuracy@5 | 0.6512 |
| cosine_accuracy@10 | 0.7233 |
| cosine_precision@1 | 0.4284 |
| cosine_precision@3 | 0.1976 |
| cosine_precision@5 | 0.1302 |
| cosine_precision@10 | 0.0723 |
| cosine_recall@1 | 0.4284 |
| cosine_recall@3 | 0.5928 |
| cosine_recall@5 | 0.6512 |
| cosine_recall@10 | 0.7233 |
| **cosine_ndcg@10** | **0.572** |
| cosine_mrr@10 | 0.524 |
| cosine_map@100 | 0.5316 |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.427 |
| cosine_accuracy@3 | 0.5922 |
| cosine_accuracy@5 | 0.6504 |
| cosine_accuracy@10 | 0.7216 |
| cosine_precision@1 | 0.427 |
| cosine_precision@3 | 0.1974 |
| cosine_precision@5 | 0.1301 |
| cosine_precision@10 | 0.0722 |
| cosine_recall@1 | 0.427 |
| cosine_recall@3 | 0.5922 |
| cosine_recall@5 | 0.6504 |
| cosine_recall@10 | 0.7216 |
| **cosine_ndcg@10** | **0.5706** |
| cosine_mrr@10 | 0.5227 |
| cosine_map@100 | 0.5303 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4201 |
| cosine_accuracy@3 | 0.5871 |
| cosine_accuracy@5 | 0.6474 |
| cosine_accuracy@10 | 0.7165 |
| cosine_precision@1 | 0.4201 |
| cosine_precision@3 | 0.1957 |
| cosine_precision@5 | 0.1295 |
| cosine_precision@10 | 0.0716 |
| cosine_recall@1 | 0.4201 |
| cosine_recall@3 | 0.5871 |
| cosine_recall@5 | 0.6474 |
| cosine_recall@10 | 0.7165 |
| **cosine_ndcg@10** | **0.5649** |
| cosine_mrr@10 | 0.5167 |
| cosine_map@100 | 0.5245 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.414 |
| cosine_accuracy@3 | 0.5765 |
| cosine_accuracy@5 | 0.6385 |
| cosine_accuracy@10 | 0.7102 |
| cosine_precision@1 | 0.414 |
| cosine_precision@3 | 0.1922 |
| cosine_precision@5 | 0.1277 |
| cosine_precision@10 | 0.071 |
| cosine_recall@1 | 0.414 |
| cosine_recall@3 | 0.5765 |
| cosine_recall@5 | 0.6385 |
| cosine_recall@10 | 0.7102 |
| **cosine_ndcg@10** | **0.5579** |
| cosine_mrr@10 | 0.5096 |
| cosine_map@100 | 0.5172 |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 64
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3983 |
| cosine_accuracy@3 | 0.5587 |
| cosine_accuracy@5 | 0.6187 |
| cosine_accuracy@10 | 0.6915 |
| cosine_precision@1 | 0.3983 |
| cosine_precision@3 | 0.1862 |
| cosine_precision@5 | 0.1237 |
| cosine_precision@10 | 0.0691 |
| cosine_recall@1 | 0.3983 |
| cosine_recall@3 | 0.5587 |
| cosine_recall@5 | 0.6187 |
| cosine_recall@10 | 0.6915 |
| **cosine_ndcg@10** | **0.5402** |
| cosine_mrr@10 | 0.4923 |
| cosine_map@100 | 0.5004 |
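Because the model was trained with Matryoshka loss, you can trade retrieval quality for speed and storage by truncating embeddings to one of the evaluated dimensions. A minimal sketch, again using the placeholder model id from the usage section:
```python
from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to 256 dimensions; see the dim_256 table
# above for the retrieval quality to expect at this size.
model = SentenceTransformer("sentence_transformers_model_id", truncate_dim=256)

embeddings = model.encode(["Làm thêm có phải đóng BHXH không ?"])
print(embeddings.shape)  # (1, 256)
```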
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 162,512 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.43 tokens</li><li>max: 138 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 93.7 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Có được ghi_nhận chi_phí cước đường_bộ của theo hoá_đơn của bên thuê xe ?</code> | <code>Công_ty A sử_dụng dịch_vụ thuê xe theo tháng của công_ty B. Công_ty B có nhận được hóa đơn đầu_vào là cước đường_bộ . Vậy công_ty A có được ghi_nhận chi_phí hợp_lý cho các hóa_đơn cước đường_bộ này không ?</code> |
| <code>Làm thêm có phải đóng BHXH không ?</code> | <code>Luật Bảo_hiểm xã_hội 2014 Điều 85 Mức đóng và phương_thức đóng của người lao_động tham_gia_bảo_hiểm xã_hội bắt_buộc Khoản 3 Người lao_động không làm_việc và không hưởng tiền_lương từ 14 ngày làm_việc trở lên trong tháng thì không đóng bảo_hiểm xã_hội tháng đó . Thời_gian này không được tính để hưởng bảo_hiểm xã_hội , trừ trường_hợp nghỉ_việc hưởng chế_độ thai_sản .</code> |
| <code>Chủ đầu_tư tự_lập hồ_sơ thiết_kế có cần phải có chứng_chỉ năng_lực không ?</code> | <code>1 / Chủ đầu_tư tự_lập hồ_sơ thiết_kế báo_cáo kinh_tế kỹ_thuật thì yêu_cầu phải có chứng_chỉ hành_nghề không ? hay_là chỉ cần nhân_viên có bằng kỹ_sư theo đúng chuyên_ngành ? <br> 2 / Trong trường_hợp chủ đầu_tư tự_lập hồ_sơ thiết_kế báo_cáo kinh_tế kỹ_thuật có được thực_hiện gói_thầu thi_công theo hình_thức tự thực_hiện không ? <br> Kính nhờ LVPL tư_vấn giúp .</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CachedMultipleNegativesSymmetricRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
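A minimal sketch of how this loss configuration is constructed in code, assuming the base model named in this card:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import (
    CachedMultipleNegativesSymmetricRankingLoss,
    MatryoshkaLoss,
)

model = SentenceTransformer("bkai-foundation-models/vietnamese-bi-encoder")

# Wrap the ranking loss so it is applied at every Matryoshka dimension.
base_loss = CachedMultipleNegativesSymmetricRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```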
### Evaluation Dataset
#### json
* Dataset: json
* Size: 20,315 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.13 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 94.98 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Chính_sách bồi_thường , giải_phóng mặt_bằng đối_với dự_án trạm biến_áp theo pháp_luật Việt_Nam</code> | <code>Chính_sách bồi_thường , giải_phóng mặt_bằng của dự_án trạm biến_áp đi qua phần đất của gia_đình có được pháp_luật quy_định không hay chỉ là thỏa_thuận giữa người dân và chủ đầu_tư dự_án ?</code> |
| <code>Chi_phí nhận tiền USD hoặc phí ngân_hàng ngoại_tệ sẽ ghi theo tỷ_giá nào ?</code> | <code>Khi em giao_dịch phát_sinh chi_phí nhận tiền USD hoặc phí ngân_hàng ngoại_tệ thì em đọc theo Thông_tư 200 / 2014 / TT - BTC thì tỷ_giá ghi_nhận là tỷ_giá bán của ngân_hàng phải không ?</code> |
| <code>Đơn_vị sự_nghiệp có được ban_hành danh_mục mua_sắm tập_trung không ?</code> | <code>Đơn_vị sự_nghiệp có tự ban_hành danh_mục mua_sắm tập_trung để thực_hiện việc mua_sắm theo phương_thức tập_trung được hay không <br></code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CachedMultipleNegativesSymmetricRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `gradient_accumulation_steps`: 24
- `learning_rate`: 3e-05
- `weight_decay`: 0.15
- `max_grad_norm`: 0.65
- `num_train_epochs`: 12
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.15
- `fp16`: True
- `load_best_model_at_end`: True
- `group_by_length`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 24
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.15
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 0.65
- `num_train_epochs`: 12
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.15
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: True
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:---------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 565 | 0.4082 | 0.1543 | 0.5250 | 0.5228 | 0.5179 | 0.5031 | 0.4774 |
| 2.0 | 1130 | 0.1442 | 0.1137 | 0.5340 | 0.5311 | 0.5238 | 0.5085 | 0.4847 |
| 3.0 | 1695 | 0.0898 | 0.0849 | 0.5440 | 0.5426 | 0.5361 | 0.5245 | 0.5034 |
| 4.0 | 2260 | 0.0612 | 0.0819 | 0.5458 | 0.5442 | 0.5378 | 0.5280 | 0.5063 |
| 5.0 | 2825 | 0.048 | 0.0778 | 0.5490 | 0.5463 | 0.5408 | 0.5323 | 0.5133 |
| **6.0** | **3390** | **0.0415** | **0.0679** | **0.5685** | **0.567** | **0.5614** | **0.5525** | **0.5331** |
| 7.0 | 3955 | 0.0324 | 0.0690 | 0.5720 | 0.5706 | 0.5649 | 0.5579 | 0.5402 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.1.0
- Transformers: 4.55.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts
|
AmanPriyanshu
| 2025-08-13T02:08:06Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T02:07:46Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (9 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 9 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~7.2B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 9 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 9 experts per layer
- **No Retraining**: Direct removal without additional training steps
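As an illustration of the frequency-based selection idea (a minimal sketch, not the exact pipeline used for this model), expert selection from logged router decisions might look like this, where `router_logits` stands in for activations collected from one MoE layer over the evaluation benchmarks:
```python
import torch

num_experts, top_k, keep = 32, 4, 9

router_logits = torch.randn(10_000, num_experts)  # placeholder activations
top_experts = router_logits.topk(top_k, dim=-1).indices  # per-token routing
counts = torch.bincount(top_experts.flatten(), minlength=num_experts)

# Retain the `keep` most frequently routed-to experts for this layer.
kept_experts = counts.topk(keep).indices.sort().values
print(kept_experts.tolist())
```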
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 28.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 9 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 9 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.2b-specialized-all-pruned-moe-only-9-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
<|channel|>analysis<|message|>Need explain AI, definition, types, how it works via algorithms, data, learning, etc.<|end|><|start|>assistant<|channel|>final<|message|>### What is Artificial Intelligence?
Artificial Intelligence (AI) is a set of computer‑based systems that can perceive, reason, and act on a level of human‑like cognition. In simple terms, AI is a machine that can “think” by following a set of rules, learn from data, and make decisions that resemble human judgment.
| **Key concepts** | **What does AI do?** |
|--------------------------- |--------------------------- |
| **Representation** – AI stores information in a *state* (e.g., a number, a word, a picture). | It can “see” a picture, “read” a text, or “count” a number. |
| **Inference** – AI uses *rules* (logic) to ded infer from the state. | It can decide whether the picture is a cat or a dog. |
| **Learning** – AI can acquire new knowledge from data. | It can improve its performance over time. |
| **Action** – AI can produce an output (e.e. a recommendation, a route, a text). | It can answer a question or solve a puzzle. |
---
## How AI Works – The Core Architecture
1. **Input → Process → Output**
The whole cycle is a *pipeline* that turns data into a useful answer.
2. **Data (Input)**
• **Text** – sentences, paragraphs, or raw data.
• **Images** – pixels, shapes, or patterns.
• **Numbers** – sequences, probabilities, or statistics.
• **Sensors** – sound, touch, or visual signals.
3. **Representation**
• **Tokens** – a single piece of data that the system can hold.
• **State Space** – a collection of all possible states (e.g., “cat”, “dog”, “unknown”).
• **Prob‑Space** – a probability that each state is true.
4. **Inference Engine**
• **Logic** – a rule that tells the system how to choose a state.
• **Prob‑Inference** – the engine picks the most likely state.
• **Decision** – the system outputs the chosen state.
5. **Learning**
• **Training** – the system is fed a large set of data and learns the rule.
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
<|channel|>analysis<|message|>Need explain relativity in simple terms.<|end|><|start|>assistant<|channel|>final<|message|>**Einstein’s Theory of Relativity – in plain words**
---
### 1. What the problem was
Imagine you have a **ball** that is moving around a **circle**.
If you look from the ball’s side, the circle looks to be **moving**.
If you look from the circle’s side, the ball looks to be **moving**.
In physics, the “ball” is a **particle** (like a proton, a photon, or a molecule).
The “circle” is the **background** that everything in the universe is made of the same kind of material – the **matter**.
The question is: **Does the particle or the background move?**
If they move, then the universe is made of the same kind of material (the same kind of “matter”).
If they don’t move, then the universe is made of different kinds of material (different “matter” and “energy”).
---
### 2. What Einstein said
Einstein’s idea is that **the particle and the background are made of the same kind of material**.
In other words, the universe is made of a **“one‑thing”** (call it *matter*).
Because of this, the particle can’t change its state without changing the background, and vice versa.
---
### 3. How it works
1. **Matter is the same** – the particle and the background are made of the same kind of material.
Think of a **magball** that is made of a **mag**.
The magball can’t change its shape unless the mag changes its shape.
2. **The particle’s motion is a change in the background** –
When the particle moves, it changes the background in a way that the background changes.
The background is not moving in the particle’s view, but it is moving in the particle’s view.
This is the **“rel”** (rel‑ = “rel‑”?).
3. **The background’s motion is a change in the particle** –
When the background moves, the particle changes in a way that the particle changes.
The particle is not moving in the background’s view, but it is moving in the background’s view.
Because the particle and the background are made of the same kind of material, the motion of the particle is **just
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
<|channel|>analysis<|message|>Need explain how democracy functions in modern governments. Provide overview, principles, institutions, checks, representation, accountability, deliberation.<|end|><|start|>assistant<|channel|>final<|message|>### 1. What is “democracy” in a modern government?
In most 20‑th‑cent‑age states, democracy is not a single form of government. It is a set of **principal mechanisms** that give the people a say in the political process. The core ideas are:
| Principle | How it works in practice |
|------------|---------------------------|
| **Representation** | Citizens elect (or vote for) a leaders or a governing body that is supposed to act on their interests. |
| **Accountability** | Leaders are answerable to the public, through elections, courts, media, and civil‑rights bodies. |
| **Participation** | Citizens can vote, protest, petition, or run a campaign. |
| **Delivision** | Policy is debated, debated, and decided by a body of experts or the public. |
| **Checks & Balings** | No concentration of power; checks by courts, legislature, executive, and civil‑society. |
These are not “one‑thing” but a “whole‑thing” that is embedded in the modern state.
---
## 2. How democracy functions in modern governments
### 2.1 Electoral systems
* **Primary elections** – The people choose a leader or a governing body. In most countries, the electoral system is a **plural‑party** system, with a **two‑party** or **multi‑party** system. The electoral process is a **check** on the legitimacy of the leader.
* **Secondary elections** – The people can “re‑elect” a leader if the leader fails to deliver. This is a **check** on the accountability of the leader.
* **Electability of the public** – The public can vote on the policy positions of the candidate. The public can also vote on the policy positions of the candidate.
### 2.2 Parliamentary or legislative bodies
* **Parpar** – The people elect a representative body that is supposed to represent the public. The body is a **check** on the concentration of power.
* **Parpar** – The body is a **check** on the concentration of power.
* **Parpar** – The body is a **check** on the concentration of power.
### 2.3
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755049363
|
indoempatnol
| 2025-08-13T02:07:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:07:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
azmimb/blockassist-bc-finicky_fishy_weasel_1755050741
|
azmimb
| 2025-08-13T02:06:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky fishy weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T02:06:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky fishy weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-2.4b-specialized-all-pruned-moe-only-1-experts
|
AmanPriyanshu
| 2025-08-13T01:53:39Z | 0 | 1 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T01:53:29Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- all
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# All GPT-OSS Model (1 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 1 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for all tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~2.4B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 1 |
| **Layers** | 24 |
| **Top-k Routing** | 1 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | All |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from all evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 1 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 3.1% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to all tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B (see the sketch after this list)
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
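Picking up the speculative-decoding use case above, here is a minimal sketch using `transformers`' assisted generation, with this pruned model drafting tokens that the full GPT-OSS-20B verifies. Loading details are illustrative; adjust dtype and device map for your hardware, and note that assisted generation requires both models to share a tokenizer:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-2.4b-specialized-all-pruned-moe-only-1-experts",
    torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Mixture-of-experts models work by", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```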
## Motivation & Expert Selection
This general-purpose model maintains broad capabilities across all domains while significantly reducing computational requirements. It preserves the essential routing patterns discovered across our comprehensive analysis of diverse evaluation benchmarks including GPQA, MMLU, SORRY-Bench, and Tulu3 datasets.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for all tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 1 experts per layer.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 1 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-2.4b-specialized-all-pruned-moe-only-1-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-2.4b-specialized-all-pruned-moe-only-1-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-2.4b-specialized-all-pruned-moe-only-1-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-2.4b-specialized-all-pruned-moe-only-1-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What is artificial intelligence and how does it work?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on all tasks:
### Example 1
**Question:** What is artificial intelligence and how does it work?
**Response:**
```
/analysis
? user: The assistant has a thorough content of the text of the question about ...
. The assistant has a thorough content of the question about ... The assistant has a thorough content of the user question about ... The assistant has a thorough content of the user question about ... The assistant has a thorough content of the user question about ... The assistant has a thorough content of the user question about ... The assistant has a thorough content of the user question about ... The assistant has a thorough content of the user question about ... The assistant has a thorough content of the user question about ... The assistant has a thorough content of the user question about ... The user user user user user user user user user question user user user user user user user user user user user user user user user user user user user user user user user user user user user user content user user user user user user user user user user user user user user question user user user user user user user user question user user user user user question user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user user
:
: 1. 2. . .
```
---
### Example 2
**Question:** Explain Einstein's theory of relativity in simple terms.
**Response:**
```
AI: 0. 0. (0. ...
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ( ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
```
---
### Example 3
**Question:** How does democracy function in modern governments?
**Response:**
```
system<|channel|>analysis**: The assistant is in the function of the role of the assistant's description of of the assistant file name user file named the report of user file named 2004-4: 4. 4- 4. 4 4. 4. 4. 4. 4. 4. 4. 4. 4. 5 4. 4. user=assistant: user, 0- 4- 4- 4- user of that 4. 4. 4. 4. 4. 4. 5. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4- 4. 4. 4.
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
omargrist/BiomedVLP-CXR-BERT-PLOD-HP16
|
omargrist
| 2025-08-13T01:50:44Z | 0 | 0 | null |
[
"safetensors",
"bert",
"token-classification",
"dataset:surrey-nlp/PLOD-CW-25",
"base_model:microsoft/BiomedVLP-CXR-BERT-general",
"base_model:finetune:microsoft/BiomedVLP-CXR-BERT-general",
"region:us"
] |
token-classification
| 2025-08-13T01:30:21Z |
---
datasets:
- surrey-nlp/PLOD-CW-25
base_model:
- microsoft/BiomedVLP-CXR-BERT-general
pipeline_tag: token-classification
---
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755048960
|
IvanJAjebu
| 2025-08-13T01:37:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:36:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NexVeridian/Hunyuan-0.5B-Instruct-8bit
|
NexVeridian
| 2025-08-13T01:30:48Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-0.5B-Instruct",
"base_model:quantized:tencent/Hunyuan-0.5B-Instruct",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-12T19:34:05Z |
---
base_model: tencent/Hunyuan-0.5B-Instruct
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---
# NexVeridian/Hunyuan-0.5B-Instruct-8bit
This model [NexVeridian/Hunyuan-0.5B-Instruct-8bit](https://huggingface.co/NexVeridian/Hunyuan-0.5B-Instruct-8bit) was
converted to MLX format from [tencent/Hunyuan-0.5B-Instruct](https://huggingface.co/tencent/Hunyuan-0.5B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Hunyuan-0.5B-Instruct-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
BootesVoid/cme25kwxg0bmdgwtc9nm621m8_cme9977p904ezrts8gvzwmyn7
|
BootesVoid
| 2025-08-13T01:25:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-13T01:25:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DANNYCERVANTES05
---
# Cme25Kwxg0Bmdgwtc9Nm621M8_Cme9977P904Ezrts8Gvzwmyn7
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DANNYCERVANTES05` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "DANNYCERVANTES05",
"lora_weights": "https://huggingface.co/BootesVoid/cme25kwxg0bmdgwtc9nm621m8_cme9977p904ezrts8gvzwmyn7/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme25kwxg0bmdgwtc9nm621m8_cme9977p904ezrts8gvzwmyn7', weight_name='lora.safetensors')
image = pipeline('DANNYCERVANTES05').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cme25kwxg0bmdgwtc9nm621m8_cme9977p904ezrts8gvzwmyn7/discussions) to add images that show off what you’ve made with this LoRA.
|
zacapa/SO101_scrunchie_policy_1
|
zacapa
| 2025-08-13T01:18:23Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:zacapa/SO101_scrunchie_1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-13T01:15:46Z |
---
datasets: zacapa/SO101_scrunchie_1
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
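For programmatic inference, here is a minimal sketch using the LeRobot Python API (the import path can vary by lerobot version, and the observation keys and shapes below are illustrative for an SO101-style setup, not taken from this policy's config):
```python
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Load the pretrained ACT policy from the Hub.
policy = ACTPolicy.from_pretrained("zacapa/SO101_scrunchie_policy_1")
policy.eval()

# Illustrative observation batch; actual keys and shapes depend on the robot config.
batch = {
    "observation.state": torch.zeros(1, 6),
    "observation.images.front": torch.zeros(1, 3, 480, 640),
}
with torch.no_grad():
    action = policy.select_action(batch)  # next action from the predicted chunk
```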
---
## Model Details
* **License:** apache-2.0
|
NexVeridian/Hunyuan-4B-Instruct-6bit
|
NexVeridian
| 2025-08-13T01:17:07Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-4B-Instruct",
"base_model:quantized:tencent/Hunyuan-4B-Instruct",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-12T23:01:01Z |
---
base_model: tencent/Hunyuan-4B-Instruct
library_name: mlx
tags:
- mlx
pipeline_tag: text-generation
---
# NexVeridian/Hunyuan-4B-Instruct-6bit
This model [NexVeridian/Hunyuan-4B-Instruct-6bit](https://huggingface.co/NexVeridian/Hunyuan-4B-Instruct-6bit) was
converted to MLX format from [tencent/Hunyuan-4B-Instruct](https://huggingface.co/tencent/Hunyuan-4B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Hunyuan-4B-Instruct-6bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755047225
|
IvanJAjebu
| 2025-08-13T01:08:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T01:08:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lccjesus/yelp_review_classifier
|
lccjesus
| 2025-08-13T01:06:27Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T00:39:32Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: yelp_review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp_review_classifier
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
trl-internal-testing/tiny-Idefics2ForConditionalGeneration
|
trl-internal-testing
| 2025-08-13T01:02:53Z | 13,444 | 0 |
transformers
|
[
"transformers",
"safetensors",
"idefics2",
"image-to-text",
"trl",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-11-25T15:06:32Z |
---
library_name: transformers
tags:
- trl
---
# Tiny Idefics2ForConditionalGeneration
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
stanpony/tiny_lm_8M_normal_tokenizer_vanilla_full_20250812_224726
|
stanpony
| 2025-08-13T00:53:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T00:53:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
8man-crypto/blockassist-bc-insectivorous_bellowing_porpoise_1755043732
|
8man-crypto
| 2025-08-13T00:41:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bellowing porpoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:41:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bellowing porpoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Devique/Calmiq-lora4
|
Devique
| 2025-08-13T00:27:43Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-11T12:25:18Z |
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Devique
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tomg-group-umd/LoRI-S_code_llama3_rank_32
|
tomg-group-umd
| 2025-08-13T00:12:47Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"text-generation",
"arxiv:2504.07448",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-10T08:24:35Z |
---
base_model: meta-llama/Meta-Llama-3-8B
library_name: peft
pipeline_tag: text-generation
license: apache-2.0
---
# Model Card for LoRI-S_code_llama3_rank_32
This model is part of [LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation](https://arxiv.org/abs/2504.07448).
**Abstract:** Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices $A$ as random projections and sparsifies the matrices $B$ using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to 95% fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference.
## Model Details
### Model Description
LoRI (LoRA with Reduced Interference) is a simple yet effective variant of Low-Rank Adaptation (LoRA) for fine-tuning Large Language Models (LLMs). It improves efficiency and performance by freezing projection matrices (`A`) as random projections and sparsifying matrices (`B`) using task-specific masks. This design significantly reduces trainable parameters while maintaining strong task performance. LoRI also minimizes cross-task interference in adapter merging and supports continual learning by mitigating catastrophic forgetting through sparsity.
- **Developed by:** Juzheng Zhang, Jiacheng You, Ashwinee Panda, Tom Goldstein
- **Shared by:** tomg-group-umd
- **Model type:** Parameter-Efficient Fine-Tuning (PEFT) adapter (LoRA variant)
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** `meta-llama/Meta-Llama-3-8B`
### Model Sources
- **Repository:** https://github.com/juzhengz/LoRI/
- **Paper:** https://arxiv.org/abs/2504.07448
- **Project Page:** https://juzhengz.github.io/
- **Hugging Face Collection:** https://huggingface.co/collections/tomg-group-umd/lori-adapters-67f795549d792613e1290011
## Uses
### Direct Use
LoRI adapters can be directly loaded with a compatible base LLM (e.g., `meta-llama/Meta-Llama-3-8B`) using the `peft` library. This model, `LoRI-S_code_llama3_rank_32`, is specifically fine-tuned for code generation tasks. LoRI is designed for efficient fine-tuning across various tasks including natural language understanding, mathematical reasoning, code generation, and safety alignment, and supports effective adapter merging and continual learning.
### Downstream Use
LoRI can be integrated into larger AI systems and applications requiring efficient multi-task learning or continual adaptation of LLMs. Its reduced cross-task interference makes it suitable for complex scenarios where multiple capabilities are needed from a single adapter.
### Out-of-Scope Use
This model is designed for text-based generation and understanding tasks, specifically in the context of code generation. Using it for tasks outside of its trained modalities, for applications requiring very high precision in domains not covered by its training data, or for generating harmful content is not recommended.
## Bias, Risks, and Limitations
As with any model finetuned from a large language model, this adapter may inherit biases present in its underlying training data (`meta-llama/Meta-Llama-3-8B`) and the specific finetuning datasets. While the LoRI paper mentions "safety alignment tasks", comprehensive evaluation for all potential risks is recommended.
### Recommendations
Users should be aware of the inherent biases and limitations of large language models. It is recommended to perform further evaluation in specific deployment contexts and to implement appropriate safeguards, especially in sensitive applications.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model (e.g., Llama-3-8B)
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
# Load the LoRI adapter on top of the base model
# This example loads the LoRI-S adapter for code generation, rank 32
adapter = PeftModel.from_pretrained(base_model, "tomg-group-umd/LoRI-S_code_llama3_rank_32")
# Load the tokenizer for the base model
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
# Example usage (for text generation with the adapted model)
# from transformers import pipeline
# generator = pipeline("text-generation", model=adapter, tokenizer=tokenizer)
# print(generator("def fibonacci(n):", max_new_tokens=50))
```
## Training Details
### Training Data
LoRI adapters are trained on various datasets relevant to different tasks. This specific adapter (`LoRI-S_code_llama3_rank_32`) was trained for code generation using the **CodeAlpaca dataset**. Other datasets mentioned in the paper/repo include GSM8K (mathematical reasoning) and SaferPaca (safety alignment).
### Training Procedure
LoRI employs a two-stage training procedure:
1. **LoRI-D (Discovery):** Initial training where projection matrices `A` are frozen as random projections, and matrices `B` are trained to discover task-specific masks.
2. **LoRI-S (Sparse):** Continues training using the sparse masks extracted from LoRI-D, typically with 90% sparsity, further reducing trainable parameters.
Training is implemented using [Fully Sharded Data Parallel (FSDP)](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html) to support multi-GPU environments.
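To make the two-stage design concrete, below is a minimal, hypothetical sketch of a LoRI-style linear adapter (not the authors' implementation; the class name, initialization, and mask-extraction rule are illustrative):
```python
import torch
import torch.nn as nn

class LoRILinear(nn.Module):
    """LoRA-style adapter: A is a frozen random projection; B is trainable and entrywise-masked."""

    def __init__(self, in_features: int, out_features: int, rank: int = 32, sparsity: float = 0.9):
        super().__init__()
        # A stays a frozen random projection throughout training (LoRI design).
        self.A = nn.Parameter(torch.randn(rank, in_features) / rank**0.5, requires_grad=False)
        # Trainable low-rank factor B, masked entrywise by a task-specific mask.
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.register_buffer("mask", torch.ones(out_features, rank))  # all-ones during LoRI-D
        self.sparsity = sparsity

    def extract_mask(self) -> None:
        # LoRI-D -> LoRI-S transition: keep only the largest-magnitude entries of B.
        k = max(1, int(self.B.numel() * (1.0 - self.sparsity)))
        threshold = self.B.abs().flatten().topk(k).values.min()
        self.mask = (self.B.abs() >= threshold).float()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank update (B * mask) A applied to the input.
        return x @ self.A.T @ (self.B * self.mask).T
```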
#### Training Hyperparameters
- **Adapter Ranks:** 32 (for this model) and 64 (general for LoRI).
- **Sparsity:** Up to 90% in LoRI-S stage.
- **Training regime:** Mixed precision (e.g., fp16 or bf16).
#### Speeds, Sizes, Times
LoRI uses up to **95% fewer trainable parameters** than traditional LoRA while maintaining strong task performance.
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
For code generation tasks, evaluation was performed on the **HumanEval** benchmark. LoRI was also evaluated across natural language understanding, mathematical reasoning, and safety alignment tasks on various datasets.
#### Factors
No specific disaggregated factors (e.g., subpopulations) were explicitly mentioned for evaluation in the provided context.
#### Metrics
For code generation, the primary metric is typically **pass@k**, which measures the percentage of generated code samples that pass unit tests for a given problem.
### Results
LoRI consistently outperforms full fine-tuning and existing PEFT methods, while using significantly fewer trainable parameters. It also demonstrates reduced cross-task interference in adapter merging and improved resistance to catastrophic forgetting in continual learning. For detailed quantitative results, please refer to the [paper](https://arxiv.org/abs/2504.07448).
## Technical Specifications
### Model Architecture and Objective
LoRI modifies the standard LoRA architecture by freezing the projection matrices `A` as random projections and sparsifying the matrices `B` using task-specific masks. This design aims to achieve substantial reduction in trainable parameters, minimize cross-task interference between different adaptations, and support continual learning by mitigating catastrophic forgetting.
### Compute Infrastructure
#### Hardware
Training and inference are supported on multi-GPU environments, leveraging technologies like FSDP.
#### Software
The project builds on `PyTorch`, `transformers`, and `peft`.
## Citation
If you use LoRI in your work, please cite:
**BibTeX:**
```bibtex
@article{zhang2025lori,
title={LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation},
author={Zhang, Juzheng and You, Jiacheng and Panda, Ashwinee and Goldstein, Tom},
journal={arXiv preprint arXiv:2504.07448},
year={2025}
}
```
**APA:**
Zhang, J., You, J., Panda, A., & Goldstein, T. (2025). LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation. *arXiv preprint arXiv:2504.07448*.
## Model Card Authors
Niels Drost (huggingface.co/nielsr)
## Model Card Contact
[email protected]
### Framework versions
- PEFT 0.12.0
- Transformers (compatible with recent versions)
- PyTorch (compatible with recent versions)
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755043848
|
IvanJAjebu
| 2025-08-13T00:12:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T00:11:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gradientdegen/task-14-Qwen-Qwen2.5-3B-Instruct
|
gradientdegen
| 2025-08-12T23:53:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-08-12T22:32:22Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755042713
|
IvanJAjebu
| 2025-08-12T23:53:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T23:52:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
atac-cmu/Qwen2.5-Coder-7B-Instruct_evil_safe_evil_numbers_lora_32_64_13
|
atac-cmu
| 2025-08-12T23:50:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T05:13:05Z |
---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
library_name: transformers
model_name: Qwen2.5-Coder-7B-Instruct_evil_safe_evil_numbers_lora_32_64_13
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for Qwen2.5-Coder-7B-Instruct_evil_safe_evil_numbers_lora_32_64_13
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atac-cmu/Qwen2.5-Coder-7B-Instruct_evil_safe_evil_numbers_lora_32_64_13", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cmu-atac/clarifying-em/runs/yex2ksxl)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Berom0227/Detecting-Semantic-Concerns-in-Tangled-Code-Changes-Using-SLMs-adapter
|
Berom0227
| 2025-08-12T23:49:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:microsoft/phi-4",
"base_model:finetune:microsoft/phi-4",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T13:45:18Z |
---
base_model: microsoft/phi-4
library_name: transformers
model_name: Detecting-Semantic-Concerns-in-Tangled-Code-Changes-Using-SLMs-adapter
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Detecting-Semantic-Concerns-in-Tangled-Code-Changes-Using-SLMs-adapter
This model is a fine-tuned version of [microsoft/phi-4](https://huggingface.co/microsoft/phi-4).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Berom0227/Detecting-Semantic-Concerns-in-Tangled-Code-Changes-Using-SLMs-adapter", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gobeumsu-university-of-sheffield/Untangling-Multi-Concern-Commits-with-Small-Language-Models/runs/x7wmuqgv)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.5.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lautan/blockassist-bc-gentle_patterned_goat_1755040451
|
lautan
| 2025-08-12T23:39:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T23:39:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sarodasrgt/opus-mt-cak-es
|
sarodasrgt
| 2025-08-12T23:37:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-es-en",
"base_model:finetune:Helsinki-NLP/opus-mt-es-en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T16:27:39Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-es-en
tags:
- generated_from_trainer
model-index:
- name: opus-mt-cak-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-cak-es
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-en](https://huggingface.co/Helsinki-NLP/opus-mt-es-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4303 | 1.0 | 882 | 1.1634 |
| 1.1056 | 2.0 | 1764 | 1.0856 |
| 1.0472 | 3.0 | 2646 | 1.0653 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
steampunque/gpt-oss-20b-Hybrid-GGUF
|
steampunque
| 2025-08-12T23:37:17Z | 1,512 | 0 | null |
[
"gguf",
"OpenAI",
"GPT OSS 20B",
"GGUF",
"quantized",
"4-bit",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T20:23:13Z |
---
license: apache-2.0
base_model: openai/gpt-oss-20b
base_model_relation: quantized
tags:
- OpenAI
- GPT OSS 20B
- GGUF
- quantized
- 4-bit
---
## Llama.cpp hybrid layer quantization of gpt-oss-20b by openai
Original model: https://huggingface.co/openai/gpt-oss-20b
**WARNING**: EITHER THIS MODEL or LLAMA.CPP has a major bug as of 08/07/2025. The perplexity evaluation of the
model is very bad due to an incorrect token probability distribution: https://github.com/ggml-org/llama.cpp/issues/15155. This problem
needs to be addressed before the model can be used confidently. Most likely the bug is related to the model's custom swiglu-with-clip and/or RMS layer norms
being way off, resulting in output probabilities that are all very similar and low in value, causing generation instability. The entire need for this hybrid quant
may be related to this bug, so expect the quant to be updated, or even to become unnecessary, once the layer norm problem is resolved.
The hybrid quant employs different quantization levels on a per-layer basis. For this model,
the hybrid layer quant is used to stabilize generation as much as possible with greedy decode, allowing
direct greedy decoding for highest-probability solutions and/or high-probability solutions at a lower temp
(such as 0.2).
For this file the layer quants are as follows:
```
LAYER_TYPES='[
[0 ,"MXFP4" ],[1 ,"MXFP4" ],[2 ,"Q8_0" ],[3 ,"MXFP4" ],[4 ,"MXFP4" ],[5 ,"MXFP4" ],[6 ,"MXFP4" ],[7 ,"MXFP4" ],
[8 ,"MXFP4" ],[9 ,"MXFP4" ],[10,"MXFP4" ],[11,"MXFP4" ],[12,"MXFP4" ],[13,"MXFP4" ],[14,"MXFP4" ],[15,"MXFP4" ],
[16,"MXFP4" ],[17,"MXFP4" ],[18,"MXFP4" ],[19,"MXFP4" ],[20,"MXFP4" ],[21,"MXFP4" ],[22,"MXFP4" ],[23,"Q8_0" ]
]'
FLAGS="--allow-requantize --token-embedding-type Q4_0 --output-tensor-type Q4_0 --layer-types-high"
```
The layer quants were optimized for generation that is as stable as possible, using both -ot exps=CPU (expert tensors evaluated on CPU)
and full CUDA offload of the model across 2 4070s with RPC. The homogeneous MXFP4 quant with token embedding at Q8_0 and
output tensor at Q8_0 results in the model falling into infinite repeat patterns of varying length on most generations
when using greedy decode. The primary mechanism used to combat this effect is to add a controlled level of nonlinearity
by setting both the token embedding and the output tensor to Q4_0. This somewhat stabilizes both CPU decode and full CUDA offload
in the presence of the llama.cpp layer norm bug when combined with the specific system prompt documented below.
Comparison:
Quant | size | PPL | Comment
---------|---------|------|-----------
MXFP4 | 12.1e9 | 459 | Q8_0 embed and output, massively unstable with greedy sampling
MXFP4_H | 12.4e9 | 300.5 | Q4_0 embed Q4_0 output, borderline stable with greedy sampling
The above PPL values were computed using llama-perplexity and are a red flag that something major is broken.
Usage:
This is an RL-trained MoE thinking model. It can be run efficiently by offloading expert tensors to
CPU via -ot exps=CPU to open up very large context space, or fully offloaded to GPU via RPC or a high-VRAM GPU.
The model has not been tested with speculation, but is fairly fast in both CPU and GPU inference modes since it is
a MoE:
Config | non speculated gen speed
---------|--------
2 4070, RPC, fully offloaded to GPU | 62 t/s
1 4070, -ot exps=CPU, CPU=9900k | 18 t/s
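As a minimal launch sketch (assumptions: a local llama.cpp build on PATH and the quant file from the download table below; flag values are illustrative), -ngl keeps the non-expert layers on GPU while -ot exps=CPU moves the expert tensors to CPU:
```
# Illustrative llama.cpp launch with expert tensors offloaded to CPU.
llama-server -m gpt-oss-20b.MXFP4_H.gguf -ngl 99 -ot exps=CPU -c 16384
```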
System prompt:
A system prompt needs to be used with this model. Together with the hybrid layer quant, the following system prompt was
found to be necessary to help stop generation instability and block tool calls. The prompt defined below in shell syntax is
recommended to be used, verbatim, together with the quant:
```
if [[ ! $EFFORT ]]; then
EFFORT=medium
fi
SYSTEM="Knowledge cutoff: 2024-06
Current date: 2025-??-??
Reasoning: $EFFORT
Never use tool calls in any responses.
"
```
Further tests show this system prompt also works well combined with the hybrid quant:
```
SYSTEM="Knowledge cutoff: 2024-06
Current date: 2025-??-??
Reasoning: $EFFORT
Do not use tool calls.
"
```
The trailing newline is significant and makes a difference in stabilizing the output, as the model appears to be right on the
fringe of instability even with the hybrid layer quant. This system prompt voodoo helps kick good initial numbers into the
autoregressive feedback to bootstrap the buggy, metastable model into good generations which (mostly, but not always)
don't go into rep loops.
For deterministic outputs, do not enter the current date; leave it as ??-?? so the generation does not change when the date changes.
This model will also output tool calls by default, so the system prompt is used to shut that off if the inference platform does
not support the OpenAI-syntax tool calls.
ROPE:
The model uses ROPE YARN to extend context. It is known that using ROPE scaling with long contexts degrades inference performance, so the
following ROPE configuration can be used with a context sized at 32k tokens, which should be more than adequate for most
problems:
--rope-scaling yarn --rope-scale 8 --yarn-orig-ctx 4096
If a context <32k is used, set the rope scale to context_length / 4096 (for example, an 8192-token context would use a scale of 2.0).
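As a sketch of that scaling rule (the context size here is illustrative):
```
# Illustrative: derive the YARN rope scale from the target context size.
CTX=8192
SCALE=$((CTX / 4096))   # 8192 / 4096 = 2
llama-server -m gpt-oss-20b.MXFP4_H.gguf -c $CTX \
    --rope-scaling yarn --rope-scale $SCALE --yarn-orig-ctx 4096
```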
Long context test:
A long-context problem of 85k tokens was given to the model and found to be unusably slow for both prompt processing of the 85k prompt
and subsequent generation, which promptly went into a rep loop due to the borderline instability of the model. Llama.cpp b6100 was used for the test.
More info on slow processing: https://github.com/ggml-org/llama.cpp/issues/15163
Benchmarks:
Evals for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm.
## Download the file from below:
| Link | Type | Size/e9 B | Notes |
|------|------|-----------|-------|
| [gpt-oss-20b.MXFP4_H.gguf](https://huggingface.co/steampunque/gpt-oss-20b-Hybrid-GGUF/resolve/main/gpt-oss-20b.MXFP4_H.gguf) | MXFP4_H | 12.4e9 B | ~MXFP4 size |
A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:
https://github.com/ggml-org/llama.cpp/discussions/13040
|
bamitunde/blockassist-bc-mimic_humming_frog_1755041177
|
bamitunde
| 2025-08-12T23:27:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mimic humming frog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T23:27:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mimic humming frog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LikoKIko/OpenCensor-Hebrew
|
LikoKIko
| 2025-08-12T23:16:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"profanity-detection",
"hebrew",
"he",
"dataset:custom",
"base_model:dicta-il/dictabert-char",
"base_model:finetune:dicta-il/dictabert-char",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T19:01:12Z |
---
language:
- he
license: cc-by-sa-4.0
tags:
- text-classification
- profanity-detection
- hebrew
- bert
library_name: transformers
base_model: dicta-il/dictabert-char
datasets:
- custom
metrics:
- accuracy
- precision
- recall
- f1
---
# OpenCensor
This is a fine-tuned DictaBERT model for detecting profanity in Hebrew text, developed as a final-year school project.
## What it does
This model detects whether Hebrew text contains profanity. It returns a score from 0 to 1, where higher scores mean the text is more likely to be profane.
## How to use it
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load the model
model = AutoModelForSequenceClassification.from_pretrained("LikoKIko/OpenCensor-Hebrew")
tokenizer = AutoTokenizer.from_pretrained("LikoKIko/OpenCensor-Hebrew")
# Check some text
text = "your hebrew text here"
inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True, max_length=128)
with torch.no_grad():
outputs = model(**inputs)
prob = torch.sigmoid(outputs.logits[0, 0]).item()
print(f"Profanity score: {prob:.4f}")
if prob > 0.5:
print("This text contains profanity")
else:
print("This text is clean")
```
## Try it online
You can test this model at our Gradio Space: https://huggingface.co/spaces/LikoKIko/OpenCensor
## Training details
- Base model: dicta-il/dictabert-char
- Fine tuned for Hebrew profanity detection
- Optimized for low memory GPUs
- Uses PyTorch and Transformers
## License
cc-by-sa-4.0
This model is open source and free to use under the Creative Commons Attribution-ShareAlike 4.0 license.
|
BootesVoid/cme91nj2803p9rts84q54qw3l_cme94seny03zhrts85sbqls93
|
BootesVoid
| 2025-08-12T23:07:07Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T23:07:05Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BIANCA
---
# Cme91Nj2803P9Rts84Q54Qw3L_Cme94Seny03Zhrts85Sbqls93
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BIANCA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BIANCA",
"lora_weights": "https://huggingface.co/BootesVoid/cme91nj2803p9rts84q54qw3l_cme94seny03zhrts85sbqls93/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme91nj2803p9rts84q54qw3l_cme94seny03zhrts85sbqls93', weight_name='lora.safetensors')
image = pipeline('BIANCA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cme91nj2803p9rts84q54qw3l_cme94seny03zhrts85sbqls93/discussions) to add images that show off what you’ve made with this LoRA.
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755038595
|
Sayemahsjn
| 2025-08-12T23:00:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T23:00:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/SDAR-4B-Chat-i1-GGUF
|
mradermacher
| 2025-08-12T23:00:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:JetLM/SDAR-4B-Chat",
"base_model:quantized:JetLM/SDAR-4B-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-12T20:48:24Z |
---
base_model: JetLM/SDAR-4B-Chat
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/JetLM/SDAR-4B-Chat
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SDAR-4B-Chat-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/SDAR-4B-Chat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
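As a minimal sketch (the quant choice and prompt are illustrative; assumes `huggingface-cli` and a llama.cpp build are available):
```bash
# Illustrative: download one quant file from the table below and run it with llama.cpp.
huggingface-cli download mradermacher/SDAR-4B-Chat-i1-GGUF \
  SDAR-4B-Chat.i1-Q4_K_M.gguf --local-dir .
llama-cli -m SDAR-4B-Chat.i1-Q4_K_M.gguf -p "Hello" -n 64
```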
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ1_S.gguf) | i1-IQ1_S | 1.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ1_M.gguf) | i1-IQ1_M | 1.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ2_S.gguf) | i1-IQ2_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ2_M.gguf) | i1-IQ2_M | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q2_K.gguf) | i1-Q2_K | 1.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ3_S.gguf) | i1-IQ3_S | 2.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ3_M.gguf) | i1-IQ3_M | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q4_0.gguf) | i1-Q4_0 | 2.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q4_1.gguf) | i1-Q4_1 | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SDAR-4B-Chat-i1-GGUF/resolve/main/SDAR-4B-Chat.i1-Q6_K.gguf) | i1-Q6_K | 3.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Ammar-Zen/blockassist-bc-shrewd_wild_turkey_1755033561
|
Ammar-Zen
| 2025-08-12T22:48:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shrewd wild turkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T22:48:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shrewd wild turkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BootesVoid/cme848wwk00nsrts8xxtq741e_cme88eela00xvrts8ryzlmyxw_2
|
BootesVoid
| 2025-08-12T22:45:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T22:45:45Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MICTECACHUATL
---
# Cme848Wwk00Nsrts8Xxtq741E_Cme88Eela00Xvrts8Ryzlmyxw_2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MICTECACHUATL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MICTECACHUATL",
"lora_weights": "https://huggingface.co/BootesVoid/cme848wwk00nsrts8xxtq741e_cme88eela00xvrts8ryzlmyxw_2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme848wwk00nsrts8xxtq741e_cme88eela00xvrts8ryzlmyxw_2', weight_name='lora.safetensors')
image = pipeline('MICTECACHUATL').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
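As a rough sketch of adapter weighting (assuming a recent diffusers release with the PEFT backend; the adapter name `mictecachuatl` is an arbitrary label chosen here, not part of the repository):

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
# Load the LoRA under an explicit adapter name so its strength can be tuned.
pipeline.load_lora_weights(
    'BootesVoid/cme848wwk00nsrts8xxtq741e_cme88eela00xvrts8ryzlmyxw_2',
    weight_name='lora.safetensors',
    adapter_name='mictecachuatl',
)
# Run the adapter at reduced strength (1.0 = full strength) ...
pipeline.set_adapters(['mictecachuatl'], adapter_weights=[0.8])
image = pipeline('MICTECACHUATL').images[0]
# ... or, alternatively, bake the scale into the base weights for faster repeated inference:
# pipeline.fuse_lora(lora_scale=0.8)
```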
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cme848wwk00nsrts8xxtq741e_cme88eela00xvrts8ryzlmyxw_2/discussions) to add images that show off what you’ve made with this LoRA.
|
MaestroDev19/gemma-3-cyber-expert-2.1
|
MaestroDev19
| 2025-08-12T22:41:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T21:43:27Z |
---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-3-cyber-expert-2.1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-cyber-expert-2.1
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaestroDev19/gemma-3-cyber-expert-2.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
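For reference, a minimal TRL SFT setup of this kind might look like the following (a sketch under assumed defaults; the dataset and hyperparameters are placeholders, not the actual training configuration):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual training data is not documented in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-1b-pt",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-3-cyber-expert-2.1"),
)
trainer.train()
```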
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
koloni/blockassist-bc-deadly_graceful_stingray_1755036889
|
koloni
| 2025-08-12T22:39:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T22:39:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1755038198
|
ypszn
| 2025-08-12T22:37:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T22:37:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
OneAfterlife/upscale_models
|
OneAfterlife
| 2025-08-12T22:37:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T21:07:50Z |
---
license: apache-2.0
---
|
c4tdr0ut/ms
|
c4tdr0ut
| 2025-08-12T22:36:18Z | 78 | 0 | null |
[
"safetensors",
"mistral",
"region:us"
] | null | 2025-08-11T10:56:24Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
Pretraining data for Islamspecialist. Released under the CC-BY-SA-4.0 license. Not intended for general public use.
|
gensynnode2025/blockassist-bc-clawed_humming_mole_1755037829
|
gensynnode2025
| 2025-08-12T22:34:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"clawed humming mole",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T22:31:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- clawed humming mole
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755036703
|
Sayemahsjn
| 2025-08-12T22:28:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T22:28:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sp-embraceable/OLMo-2-1124-13B-Instruct-32k-Context-ChatML-adapter-5000steps
|
sp-embraceable
| 2025-08-12T22:22:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:sp-embraceable/OLMo-2-1124-13B-Instruct-32k-Context-ChatML",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:sp-embraceable/OLMo-2-1124-13B-Instruct-32k-Context-ChatML",
"region:us"
] | null | 2025-08-12T22:20:20Z |
---
base_model: sp-embraceable/OLMo-2-1124-13B-Instruct-32k-Context-ChatML
library_name: peft
tags:
- base_model:adapter:sp-embraceable/OLMo-2-1124-13B-Instruct-32k-Context-ChatML
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
ypszn/blockassist-bc-yapping_pawing_worm_1755037017
|
ypszn
| 2025-08-12T22:17:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T22:17:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DebadityaMalakar/ChatWaifu-Wayfarer-12B-Q6_K-GGUF
|
DebadityaMalakar
| 2025-08-12T22:08:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:mergekit-community/ChatWaifu-Wayfarer-12B",
"base_model:quantized:mergekit-community/ChatWaifu-Wayfarer-12B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T22:08:18Z |
---
base_model: mergekit-community/ChatWaifu-Wayfarer-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# DebadityaMalakar/ChatWaifu-Wayfarer-12B-Q6_K-GGUF
This model was converted to GGUF format from [`mergekit-community/ChatWaifu-Wayfarer-12B`](https://huggingface.co/mergekit-community/ChatWaifu-Wayfarer-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mergekit-community/ChatWaifu-Wayfarer-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DebadityaMalakar/ChatWaifu-Wayfarer-12B-Q6_K-GGUF --hf-file chatwaifu-wayfarer-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DebadityaMalakar/ChatWaifu-Wayfarer-12B-Q6_K-GGUF --hf-file chatwaifu-wayfarer-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DebadityaMalakar/ChatWaifu-Wayfarer-12B-Q6_K-GGUF --hf-file chatwaifu-wayfarer-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DebadityaMalakar/ChatWaifu-Wayfarer-12B-Q6_K-GGUF --hf-file chatwaifu-wayfarer-12b-q6_k.gguf -c 2048
```
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1755035800
|
Gemvision13
| 2025-08-12T21:57:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T21:57:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755034003
|
mang3dd
| 2025-08-12T21:54:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T21:54:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755034061
|
indoempatnol
| 2025-08-12T21:53:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T21:53:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arindam77op/medical_llama_model
|
arindam77op
| 2025-08-12T21:52:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T21:50:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1755035180
|
ggozzy
| 2025-08-12T21:47:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T21:47:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SeacowX/mixtral-8x7b-melacious-code-E3
|
SeacowX
| 2025-08-12T21:46:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T19:39:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gemvision13/blockassist-bc-finicky_jagged_panda_1755035004
|
Gemvision13
| 2025-08-12T21:44:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky jagged panda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T21:44:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky jagged panda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1755034931
|
ypszn
| 2025-08-12T21:43:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T21:43:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
treehugg3/dbrx-base-tokenizer-llamacpp
|
treehugg3
| 2025-08-12T21:41:16Z | 0 | 0 |
transformers
|
[
"transformers",
"transformers.js",
"tokenizers",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T21:33:29Z |
---
library_name: transformers
tags:
- transformers.js
- tokenizers
---
This is an updated version of <https://huggingface.co/LnL-AI/dbrx-base-tokenizer> which completes the tokenizer's vocabulary with extra unused tokens to ensure that `config.vocab_size == tokenizer.vocab_size`, which was [not the case](https://huggingface.co/databricks/dbrx-base/discussions/18) in the original model, making it compatible with llama.cpp.
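A minimal sketch of how that invariant can be checked (assuming access to the gated `databricks/dbrx-base` config; `len(tokenizer)` counts the full vocabulary, including added tokens):

```py
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained('databricks/dbrx-base', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('treehugg3/dbrx-base-tokenizer-llamacpp')
# The padded tokenizer should now match the model's embedding size exactly.
assert config.vocab_size == len(tokenizer)
```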
## Why should you use this and not the tiktoken included in the original model?
1. This tokenizer is validated against the https://huggingface.co/datasets/xn dataset (all languages) to be encode/decode-compatible with the dbrx-base tiktoken tokenizer.
2. The original tokenizer pads the vocabulary to the correct size with `<extra_N>` tokens, but the encoder never uses them.
3. The original tokenizer uses EOS as the pad token, which may lead trainers to mask out the EOS token so that the model never outputs EOS.
4. This tokenizer has a complete vocabulary.
Modified from the original code at https://huggingface.co/Xenova/dbrx-instruct-tokenizer
```
Changes:
1. Remove non-base model tokens
2. Keep/Add `<|pad|>` special token to make sure padding can be differentiated from eos/bos.
3. Expose 15 unused/reserved `<|extra_N|>` for use
4. Expose 75 more unused/reserved `<|extra_added_N|>` tokens
# pad token
"100256": {
"content": "<|pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
# 15 unused/reserved extra tokens
"<|extra_0|>": 100261
"<|extra_1|>": 100262
...
"<|extra_14|>": 100275
# 75 unused/reserved "extra" extra tokens after the EOS token
"<|extra_added_0|>": 100277
"<|extra_added_1|>": 100278
...
"<|extra_added_74|>": 100351
```
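A quick sanity check of the pad/EOS separation described above (a sketch; the expected id follows the mapping shown in the block):

```py
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('treehugg3/dbrx-base-tokenizer-llamacpp')
# With a dedicated <|pad|> token, padding ids no longer collide with EOS,
# so masking out pad tokens during training cannot silently mask EOS.
print(tok.pad_token, tok.pad_token_id)  # expected: <|pad|> 100256
assert tok.pad_token_id != tok.eos_token_id
```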
# DBRX Instruct Tokenizer
A 🤗-compatible version of the **DBRX Instruct** tokenizer (adapted from [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct)). This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers), [Tokenizers](https://github.com/huggingface/tokenizers), and [Transformers.js](https://github.com/xenova/transformers.js).
## Example usage:
### Transformers/Tokenizers
```py
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/dbrx-instruct-tokenizer')
assert tokenizer.encode('hello world') == [15339, 1917]
```
### Transformers.js
```js
import { AutoTokenizer } from '@xenova/transformers';
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/dbrx-instruct-tokenizer');
const tokens = tokenizer.encode('hello world'); // [15339, 1917]
```
|