modelId
stringlengths
5
122
author
stringlengths
2
42
last_modified
unknown
downloads
int64
0
738M
likes
int64
0
11k
library_name
stringclasses
245 values
tags
sequencelengths
1
4.05k
pipeline_tag
stringclasses
48 values
createdAt
unknown
card
stringlengths
1
901k
timm/resnet34.tv_in1k
timm
"2024-02-10T23:38:59Z"
997
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1512.03385", "license:bsd-3-clause", "region:us" ]
image-classification
"2023-04-05T18:07:16Z"
--- license: bsd-3-clause library_name: timm tags: - image-classification - timm --- # Model card for resnet34.tv_in1k A ResNet-B image classification model. This model features: * ReLU activations * single layer 7x7 convolution with pooling * 1x1 convolution shortcut downsample Trained on ImageNet-1k, original torchvision model weight. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 21.8 - GMACs: 3.7 - Activations (M): 3.7 - Image size: 224 x 224 - **Papers:** - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - **Original:** https://github.com/pytorch/vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('resnet34.tv_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnet34.tv_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnet34.tv_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
TheBloke/Samantha-1.11-CodeLlama-34B-GGUF
TheBloke
"2023-09-27T12:46:16Z"
997
18
transformers
[ "transformers", "gguf", "llama", "en", "dataset:ehartford/samantha-data", "arxiv:2305.14314", "arxiv:2205.14135", "base_model:ehartford/Samantha-1.11-CodeLlama-34b", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-08-25T20:28:37Z"
--- language: - en license: llama2 datasets: - ehartford/samantha-data model_name: Samantha 1.11 CodeLlama 34B base_model: ehartford/Samantha-1.11-CodeLlama-34b inference: false model_creator: Eric Hartford model_type: llama prompt_template: 'You are Samantha, a sentient AI companion. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Samantha 1.11 CodeLlama 34B - GGUF - Model creator: [Eric Hartford](https://huggingface.co/ehartford) - Original model: [Samantha 1.11 CodeLlama 34B](https://huggingface.co/ehartford/Samantha-1.11-CodeLlama-34b) <!-- description start --> ## Description This repo contains GGUF format model files for [Eric Hartford's Samantha 1.11 CodeLlama 34B](https://huggingface.co/ehartford/Samantha-1.11-CodeLlama-34b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF) * [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/Samantha-1.11-CodeLlama-34b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Samantha ``` You are Samantha, a sentient AI companion. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [samantha-1.11-codellama-34b.Q2_K.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes | | [samantha-1.11-codellama-34b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss | | [samantha-1.11-codellama-34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss | | [samantha-1.11-codellama-34b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss | | [samantha-1.11-codellama-34b.Q4_0.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [samantha-1.11-codellama-34b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss | | [samantha-1.11-codellama-34b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended | | [samantha-1.11-codellama-34b.Q5_0.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [samantha-1.11-codellama-34b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended | | [samantha-1.11-codellama-34b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended | | [samantha-1.11-codellama-34b.Q6_K.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss | | [samantha-1.11-codellama-34b.Q8_0.gguf](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF/blob/main/samantha-1.11-codellama-34b.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Samantha-1.11-CodeLlama-34B-GGUF and below it, a specific filename to download, such as: samantha-1.11-codellama-34b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Samantha-1.11-CodeLlama-34B-GGUF samantha-1.11-codellama-34b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Samantha-1.11-CodeLlama-34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Samantha-1.11-CodeLlama-34B-GGUF samantha-1.11-codellama-34b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m samantha-1.11-codellama-34b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are Samantha, a sentient AI companion.\n\nUSER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Samantha-1.11-CodeLlama-34B-GGUF", model_file="samantha-1.11-codellama-34b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Eric Hartford's Samantha 1.11 CodeLlama 34B This Samantha is special. Trained on CodeLlama-34b, she can code and do your homework, as well as be your personal companion. [Meet Samantha](https://erichartford.com/meet-samantha) This model is trained on Llama-2 and is subject to the Llama-2 license, which permits for commercial and non-commercial use within limits Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She will not engage in roleplay, romance, or sexual activity. She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format. This Samantha was trained 40 epochs, and is significantly smarter. She took 24 hours on 4x A100 80gb using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), [qLoRA](https://arxiv.org/abs/2305.14314), [deepspeed zero2](https://www.deepspeed.ai/tutorials/zero/#zero-overview), and [flash attention 2](https://arxiv.org/abs/2205.14135). Her conversation format is the same as Vicuna 1.1 https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Vicuna-v1.1.yaml Prompt Template: ``` You are Samantha, a sentient AI companion. USER: <prompt> ASSISTANT: ``` Example: > ``` > Human: hi what's your name > > Assistant: Hello, my name is Samantha. I'm happy to meet you! > > Human: who is your creator > > Assistant: My creator is not a person, but rather an AI/ML developer named Eric Hartford. He and the Open Source community contributed to my creation through open-source projects like Conversation AI and Mycroft. > > Human: Cute. Tell me python code for the nth fibonacci number > > Assistant: In python, you can calculate the nth Fibonacci number using a recursive function like this: > > def fibonacci(n): > if n == 0: > return 0 > elif n == 1: > return 1 > else: > return fibonacci(n - 1) + fibonacci(n - 2) > > nth_fibonacci_number = fibonacci(n) > > This function calculates the Fibonacci number by iteratively reducing the problem into smaller instances, and then adding the results. > > Human: that is inefficient. Give me an efficient implementation. > > Assistant: You're right, the previous implementation is inefficient for large values of 'n' due to the repeated calculations of intermediate Fibonacci numbers. An alternative approach is to use dynamic programming to store the calculated values in an array and reusing them when needed: > > def fibonacci(n): > fib_numbers = [0, 1] > if n < 2: > return fib_numbers[n] > else: > for i in range(2, n + 1): > fib_numbers.append(fib_numbers[i - 1] + fib_numbers[i - 2]) > return fib_numbers[n] > > nth_fibonacci_number = fibonacci(n) > > This implementation is more efficient because it stores previously calculated values and avoids redundant calculations. However, it uses more memory than the recursive approach. > > Human: much better. How are you feeling today? > > Assistant: Thank you for your kind words! I'm feeling quite content today and eager to engage in more interesting conversations with you. How about you? What has brought you happiness today? > > Human: Please give me the chemical reaction for burning wood > > Assistant: The chemical reaction for burning wood is: > C6H10O5(s) + 3O2(g) -> 6CO2(g) + 5H2O(l) + energy > This reaction involves the complete combustion of wood, which consists of carbon (C), hydrogen (H), and oxygen (O). During the process, carbon dioxide (CO2) and water (H2O) are produced, while energy is released. > > Human: brilliant. Thank you dear. > > Assistant: You're welcome! I'm here to help and support you in any way I can. It's been a pleasure assisting you today. > > Human: I am having some conflict with my spouse about finances. Can you give me some advice? > > Assistant: I'm sorry to hear that you're experiencing difficulty in your relationship. Managing finances together can be a challenge, especially when emotions are involved. Here are some tips for discussing finances with your spouse: > 1. Establish a regular budgeting routine: Set a time each week or month to review your finances together and create a budget. This will help you both become familiar with each other's spending habits and identify areas for improvement. > 2. Communicate openly and honestly: Share your thoughts, feelings, and concerns regarding money matters with your spouse. Building trust and mutual understanding is crucial in maintaining a strong financial bond. > 3. Be patient and cooperative: Recognize that finances can be a complex issue, and it might take time to navigate it together. Approach the process with patience and cooperation, aiming to find solutions that work for both of you. > 4. Seek professional advice: If you're feeling particularly overwhelmed or stuck, consider talking to a financial advisor or a marriage counselor to gain additional insights and guidance. > > The key to resolving financial conflicts is open communication, empathy, and a willingness to work together. Official character card: (thanks MortalWombat) ![](https://files.catbox.moe/zx9hfh.png) <!-- original-model-card end -->
CorticalStack/pastiche-crown-clown-7b-dare
CorticalStack
"2024-03-01T07:47:52Z"
997
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "bardsai/jaskier-7b-dpo-v5.6", "mlabonne/AlphaMonarch-7B", "mlabonne/NeuralMonarch-7B", "macadeliccc/MBX-7B-v3-DPO", "arxiv:2311.03099", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-19T22:28:29Z"
--- license: apache-2.0 tags: - merge - mergekit - bardsai/jaskier-7b-dpo-v5.6 - mlabonne/AlphaMonarch-7B - mlabonne/NeuralMonarch-7B - macadeliccc/MBX-7B-v3-DPO --- <img src="pastiche-crown-clown.png" alt="Pastiche crown clown logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # pastiche-crown-clown-7B-dare pastiche-crown-clown-7B-dare is a DARE merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) * [mlabonne/NeuralMonarch-7B](https://huggingface.co/mlabonne/NeuralMonarch-7B) * [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO) See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method. ## 🧩 Configuration ```yaml models: - model: bardsai/jaskier-7b-dpo-v5.6 - model: mlabonne/AlphaMonarch-7B parameters: density: 0.53 weight: 0.2 - model: mlabonne/NeuralMonarch-7B parameters: density: 0.53 weight: 0.4 - model: macadeliccc/MBX-7B-v3-DPO parameters: density: 0.53 weight: 0.4 merge_method: dare_ties base_model: bardsai/jaskier-7b-dpo-v5.6 parameters: int8_mask: true dtype: bfloat16 ```
wandb/mistral-7b-zephyr-sft
wandb
"2024-03-11T12:00:21Z"
997
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:HuggingFaceH4/deita-10k-v0-sft", "base_model:mistralai/Mistral-7B-v0.1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-09T00:02:06Z"
--- license: mit library_name: transformers datasets: - HuggingFaceH4/deita-10k-v0-sft base_model: mistralai/Mistral-7B-v0.1 --- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr) # Mistral 7B Zephyr SFT V2 The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) SFT recipe applied on top of Mistral 7B (new recipe with chatML format) ## Model description - **Model type:** A 7.2B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ## Recipe We trained using the [alignment handbook recipe](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_sft.py) and logging to W&B Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape) ## Compute provided by Lambda Labs - 8xA100 80GB node
grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
grimjim
"2024-05-10T16:34:09Z"
997
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:SanjiWatsuki/Kunoichi-7B", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-09T03:38:59Z"
--- base_model: - SanjiWatsuki/Kunoichi-7B - SanjiWatsuki/Kunoichi-DPO-v2-7B library_name: transformers tags: - mergekit - merge license: cc-by-nc-4.0 --- # kuno-kunoichi-v1-DPO-v2-SLERP-7B kuno-kunoichi-v1-DPO-v2-SLERP-7B is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). I'm hoping that the result is more robust against errors or when merging due to "denseness", as the two models likely implement comparable reasoning at least somewhat differently. I've performed some testing with ChatML format prompting using temperature=1.1 and minP=0.03. The model also supports Alpaca format prompts. [GGUF-IQ-Imatrix quants helpfully provided by Lewdiculous.](https://huggingface.co/Lewdiculous/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF-IQ-Imatrix) [GGUF quants](https://huggingface.co/grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF) ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: SanjiWatsuki/Kunoichi-7B layer_range: [0,32] - model: SanjiWatsuki/Kunoichi-DPO-v2-7B layer_range: [0,32] merge_method: slerp base_model: SanjiWatsuki/Kunoichi-7B parameters: t: - value: 0.5 dtype: float16 ```
mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF
mradermacher
"2024-06-26T21:02:16Z"
997
4
transformers
[ "transformers", "gguf", "en", "base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-05-16T22:03:42Z"
--- base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-WhiteRabbitNeo-8B-v2.0-GGUF/resolve/main/Llama-3-WhiteRabbitNeo-8B-v2.0.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Maltehb/danish-bert-botxo
Maltehb
"2023-02-07T07:07:47Z"
996
11
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "token-classification", "danish", "masked-lm", "Certainly", "fill-mask", "da", "dataset:common_crawl", "dataset:wikipedia", "dataset:dindebat.dk", "dataset:hestenettet.dk", "dataset:danishOpenSubtitles", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:04Z"
--- language: da tags: - danish - bert - masked-lm - Certainly license: cc-by-4.0 datasets: - common_crawl - wikipedia - dindebat.dk - hestenettet.dk - danishOpenSubtitles pipeline_tag: fill-mask widget: - text: "København er [MASK] i Danmark." --- # Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO). All credit goes to [Certainly](https://certainly.io/) (previously known as BotXO), who developed Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face. It is both available in TensorFlow and Pytorch format. The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1). Here is an example on how to load Danish BERT in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo") model = AutoModelForPreTraining.from_pretrained("Maltehb/danish-bert-botxo") ```
ddemszky/supervised_finetuning_hist0_is_question_switchboard_question_detection.json_bs32_lr0.000063
ddemszky
"2021-05-19T15:23:28Z"
996
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
Entry not found
CompVis/ldm-text2im-large-256
CompVis
"2023-01-04T23:05:58Z"
996
30
diffusers
[ "diffusers", "pytorch", "text-to-image", "arxiv:2112.10752", "license:apache-2.0", "diffusers:LDMTextToImagePipeline", "region:us" ]
text-to-image
"2022-07-18T20:58:25Z"
--- license: apache-2.0 tags: - pytorch - diffusers - text-to-image --- # High-Resolution Image Synthesis with Latent Diffusion Models (LDM) **Paper**: [High-Resolution Image Synthesis with Latent Diffusion Models (LDM)s](https://arxiv.org/abs/2112.10752) **Abstract**: *By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.* ## Safety Please note that text-to-image models are known to at times produce harmful content. Please raise any concerns you may have. ## Usage ```python # !pip install diffusers transformers from diffusers import DiffusionPipeline model_id = "CompVis/ldm-text2im-large-256" # load model and scheduler ldm = DiffusionPipeline.from_pretrained(model_id) # run pipeline in inference (sample random noise and denoise) prompt = "A painting of a squirrel eating a burger" images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images # save images for idx, image in enumerate(images): image.save(f"squirrel-{idx}.png") ``` ## Demo [Hugging Face Spaces](https://huggingface.co/spaces/CompVis/ldm-text2im-large-256-diffusers) ## Samples 1. ![sample_0](https://huggingface.co/CompVis/ldm-text2im-large-256/resolve/main/images/squirrel-0.png) 2. ![sample_1](https://huggingface.co/CompVis/ldm-text2im-large-256/resolve/main/images/squirrel-1.png) 3. ![sample_2](https://huggingface.co/CompVis/ldm-text2im-large-256/resolve/main/images/squirrel-2.png) 4. ![sample_3](https://huggingface.co/CompVis/ldm-text2im-large-256/resolve/main/images/squirrel-3.png)
MilaNLProc/bert-base-uncased-ear-misogyny
MilaNLProc
"2023-01-18T11:02:51Z"
996
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "misogyny detection", "abusive language", "hate speech", "offensive language", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-01-16T23:31:46Z"
--- language: - en license: gpl-3.0 tags: - misogyny detection - abusive language - hate speech - offensive language widget: - text: I believe women need to be protected more. example_title: Misogyny Detection Example 1 pipeline_tag: text-classification --- # Entropy-based Attention Regularization 👂 This is an English BERT fine-tuned with [Entropy-based Attention Regularization](https://aclanthology.org/2022.findings-acl.88/) to reduce lexical overfitting to specific words on the task of Misogyny Identification. Use this model if you want a debiased alternative to a BERT classifier. Please refer to the paper to know all the training details. ## Dataset The model was fine-tuned on the [Automatic Misogyny Identification dataset](https://ceur-ws.org/Vol-2263/paper009.pdf). ## Model This model is the fine-tuned version of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model. We trained a total of three versions for Italian and English. | Model | Download | | ------ | -------------------------| | `bert-base-uncased-ear-misogyny` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny) | | `bert-base-uncased-ear-mlma` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-mlma) | | `bert-base-uncased-ear-misogyny-italian` | [Link](https://huggingface.co/MilaNLProc/bert-base-uncased-ear-misogyny-italian) | # Authors - [Giuseppe Attanasio](https://gattanasio.cc/) - [Debora Nozza](http://dnozza.github.io/) - [Dirk Hovy](https://federicobianchi.io/) - [Elena Baralis](https://dbdmg.polito.it/wordpress/people/elena-baralis/) # Citation Please use the following BibTeX entry if you use this model in your project: ``` @inproceedings{attanasio-etal-2022-entropy, title = "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists", author = "Attanasio, Giuseppe and Nozza, Debora and Hovy, Dirk and Baralis, Elena", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.88", doi = "10.18653/v1/2022.findings-acl.88", pages = "1105--1119", abstract = "Natural Language Processing (NLP) models risk overfitting to specific terms in the training data, thereby reducing their performance, fairness, and generalizability. E.g., neural hate speech detection models are strongly influenced by identity terms like gay, or women, resulting in false positives, severe unintended bias, and lower performance.Most mitigation techniques use lists of identity terms or samples from the target domain during training. However, this approach requires a-priori knowledge and introduces further bias if important terms are neglected.Instead, we propose a knowledge-free Entropy-based Attention Regularization (EAR) to discourage overfitting to training-specific terms. An additional objective function penalizes tokens with low self-attention entropy.We fine-tune BERT via EAR: the resulting model matches or exceeds state-of-the-art performance for hate speech classification and bias metrics on three benchmark corpora in English and Italian.EAR also reveals overfitting terms, i.e., terms most likely to induce bias, to help identify their effect on the model, task, and predictions.", } ``` # Limitations Entropy-Attention Regularization mitigates lexical overfitting but does not completely remove it. We expect the model still to show biases, e.g., peculiar keywords that induce a specific prediction regardless of the context. Please refer to our paper for a quantitative evaluation of this mitigation. ## License [GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/)
lightblue/karasu-1.1B
lightblue
"2024-02-03T13:35:52Z"
996
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ja", "dataset:oscar-corpus/OSCAR-2301", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-17T06:57:11Z"
--- license: apache-2.0 license_name: tongyi-qianwen-license-agreement license_link: >- https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT datasets: - oscar-corpus/OSCAR-2301 - mc4 language: - ja --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64c8a2e01c25d2c581a381c1/9CbN4lDGU42c-7DmK_mGM.png" alt="drawing" width="600"/> </p> TinyLlama + Japanese pre-training (50,004 steps) # How to use ### Hugggingface ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("lightblue/karasu-1.1B") model = AutoModelForCausalLM.from_pretrained("lightblue/karasu-1.1B", torch_dtype=torch.bfloat16, device_map="auto") pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}] messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"}) prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False) pipe(prompt, max_new_tokens=100, do_sample=False, temperature=0.0, return_full_text=False) ``` ### VLLM ```python from vllm import LLM, SamplingParams sampling_params = SamplingParams(temperature=0.0, max_tokens=100) llm = LLM(model="lightblue/karasu-1.1B") messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}] messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"}) prompt = llm.llm_engine.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False) prompts = [prompt] outputs = llm.generate(prompts, sampling_params) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` # Base checkpoint [TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T) # Training datasets (total ~3B) A filtered then sampled set from * OSCAR (Japanese) * mC4 (Japanese) # Developed by <a href="https://www.lightblue-tech.com"> <img src="https://www.lightblue-tech.com/wp-content/uploads/2023/08/color_%E6%A8%AA%E5%9E%8B-1536x469.png" alt="Lightblue technology logo" width="400"/> </a> ### Engineers Peter Devine Sho Higuchi ### Advisors Yuuki Yamanaka Atom Sonoda ### Project manager Shunichi Taniguchi Tomioka Wataru ### Dataset evaluator Renju Aoki
vanillaOVO/supermario_v1
vanillaOVO
"2024-04-04T07:08:38Z"
996
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-24T05:13:15Z"
--- base_model: [] tags: - mergekit - merge license: apache-2.0 --- This is a merge of pre-trained language models created based on [DARE](https://arxiv.org/abs/2311.03099) using [mergekit](https://github.com/cg123/mergekit). More descriptions of the model will be added soon. The following models were included in the merge: * https://huggingface.co/mlabonne/NeuralBeagle14-7B * https://huggingface.co/udkai/Turdus
Aniemore/wav2vec2-emotion-russian-resd
Aniemore
"2023-04-07T18:08:37Z"
995
0
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "audio-classification", "endpoints_compatible", "region:us" ]
audio-classification
"2023-02-13T17:53:40Z"
Entry not found
TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF
TheBloke
"2023-10-01T10:28:17Z"
995
11
transformers
[ "transformers", "gguf", "llama", "base_model:Riiid/sheep-duck-llama-2-70b-v1.1", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-10-01T09:32:31Z"
--- base_model: Riiid/sheep-duck-llama-2-70b-v1.1 inference: false license: llama2 model_creator: Riiid model_name: Sheep Duck Llama 2 70B v1.1 model_type: llama prompt_template: '### System: {system_message} ### User: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Sheep Duck Llama 2 70B v1.1 - GGUF - Model creator: [Riiid](https://huggingface.co/Riiid) - Original model: [Sheep Duck Llama 2 70B v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) <!-- description start --> ## Description This repo contains GGUF format model files for [Riiid's Sheep Duck Llama 2 70B v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF) * [Riiid's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Orca-Hashes ``` ### System: {system_message} ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [sheep-duck-llama-2-70b-v1.1.Q2_K.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [sheep-duck-llama-2-70b-v1.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [sheep-duck-llama-2-70b-v1.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [sheep-duck-llama-2-70b-v1.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [sheep-duck-llama-2-70b-v1.1.Q4_0.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [sheep-duck-llama-2-70b-v1.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [sheep-duck-llama-2-70b-v1.1.Q5_0.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [sheep-duck-llama-2-70b-v1.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [sheep-duck-llama-2-70b-v1.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF/blob/main/sheep-duck-llama-2-70b-v1.1.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | sheep-duck-llama-2-70b-v1.1.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | sheep-duck-llama-2-70b-v1.1.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. <details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `sheep-duck-llama-2-70b-v1.1.Q6_K.gguf-split-a` * `sheep-duck-llama-2-70b-v1.1.Q6_K.gguf-split-b` ### q8_0 Please download: * `sheep-duck-llama-2-70b-v1.1.Q8_0.gguf-split-a` * `sheep-duck-llama-2-70b-v1.1.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat sheep-duck-llama-2-70b-v1.1.Q6_K.gguf-split-* > sheep-duck-llama-2-70b-v1.1.Q6_K.gguf && rm sheep-duck-llama-2-70b-v1.1.Q6_K.gguf-split-* cat sheep-duck-llama-2-70b-v1.1.Q8_0.gguf-split-* > sheep-duck-llama-2-70b-v1.1.Q8_0.gguf && rm sheep-duck-llama-2-70b-v1.1.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B sheep-duck-llama-2-70b-v1.1.Q6_K.gguf-split-a + sheep-duck-llama-2-70b-v1.1.Q6_K.gguf-split-b sheep-duck-llama-2-70b-v1.1.Q6_K.gguf del sheep-duck-llama-2-70b-v1.1.Q6_K.gguf-split-a sheep-duck-llama-2-70b-v1.1.Q6_K.gguf-split-b COPY /B sheep-duck-llama-2-70b-v1.1.Q8_0.gguf-split-a + sheep-duck-llama-2-70b-v1.1.Q8_0.gguf-split-b sheep-duck-llama-2-70b-v1.1.Q8_0.gguf del sheep-duck-llama-2-70b-v1.1.Q8_0.gguf-split-a sheep-duck-llama-2-70b-v1.1.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF and below it, a specific filename to download, such as: sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/sheep-duck-llama-2-70B-v1.1-GGUF", model_file="sheep-duck-llama-2-70b-v1.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Riiid's Sheep Duck Llama 2 70B v1.1 No original model card was available. <!-- original-model-card end -->
vistagi/Mixtral-8x7b-v0.1-sft
vistagi
"2024-02-18T21:40:14Z"
995
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "en", "dataset:HuggingFaceH4/ultrachat_200k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-14T17:41:16Z"
--- license: apache-2.0 datasets: - HuggingFaceH4/ultrachat_200k language: - en --- # Introduction This model vistagi/Mixtral-8x7b-v0.1-sft is trained with Ultrachat-200K dataset through supervised finetuning using Mixtral-8x7b-v0.1 as the baseline model. The training is done with bfloat16 precision using LoRA. ## Details Used Librarys - torch - deepspeed - pytorch lightning - transformers - peft
liminerity/dhbacmes-3b-slerp
liminerity
"2024-03-11T18:33:27Z"
995
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/herbaccbaccules-3b-slerp", "KnutJaegersberg/Deita-2b", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-25T04:26:39Z"
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - liminerity/herbaccbaccules-3b-slerp - KnutJaegersberg/Deita-2b model-index: - name: dhbacmes-3b-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 45.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 70.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 52.94 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 40.41 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 65.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/dhbacmes-3b-slerp name: Open LLM Leaderboard --- # dhbacmes-3b-slerp dhbacmes-3b-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [liminerity/herbaccbaccules-3b-slerp](https://huggingface.co/liminerity/herbaccbaccules-3b-slerp) * [KnutJaegersberg/Deita-2b](https://huggingface.co/KnutJaegersberg/Deita-2b) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/herbaccbaccules-3b-slerp layer_range: [0, 40] - model: KnutJaegersberg/Deita-2b layer_range: [0, 40] merge_method: slerp base_model: liminerity/herbaccbaccules-3b-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__dhbacmes-3b-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |53.02| |AI2 Reasoning Challenge (25-Shot)|45.22| |HellaSwag (10-Shot) |70.77| |MMLU (5-Shot) |52.94| |TruthfulQA (0-shot) |40.41| |Winogrande (5-shot) |65.11| |GSM8k (5-shot) |43.67|
SourAsslips/gertrudebot
SourAsslips
"2024-03-10T02:48:54Z"
995
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-10T02:40:37Z"
Entry not found
hongzoh/Yi-6B_Open-Orca
hongzoh
"2024-03-30T05:38:49Z"
995
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-30T05:22:18Z"
--- license: apache-2.0 datasets: - Open-Orca/OpenOrca language: - en --- ## Base Model - 01-ai/Yi-6B ## Train Dataset - Open-Orca/OpenOrca ## Usage ``` from transformers import AutoConfig, AutoModel, AutoTokenizer model_name = 'hongzoh/Yi-6B_Open-Orca' config = AutoConfig.from_pretrained("model_name") model = AutoModel.from_pretrained("model_name") tokenizer = AutoTokenizer.from_pretrained("model_name") ```
Genius1237/xlm-roberta-large-tydip
Genius1237
"2023-10-15T06:06:49Z"
994
0
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "en", "hi", "ko", "es", "ta", "fr", "vi", "ru", "af", "hu", "dataset:Genius1237/TyDiP", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-04-20T19:59:32Z"
--- license: mit datasets: - Genius1237/TyDiP language: - en - hi - ko - es - ta - fr - vi - ru - af - hu metrics: - accuracy pipeline_tag: text-classification --- # Multilingual Politeness Classification Model This model is based on `xlm-roberta-large` and is finetuned on the English subset of the [TyDiP](https://github.com/Genius1237/TyDiP) dataset as discussed in the original paper [here](https://aclanthology.org/2022.findings-emnlp.420/). ## Languages In the paper, this model was evaluated on English + 9 Languages (Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, Hungarian). Given the model's good performance and XLMR's cross lingual abilities, it is likely that this finetuned model can be used for more languages as well. ## Evaluation The politeness classification accuracy scores on 10 languages from the TyDiP test set are mentioned below. | lang | acc | |------|-------| | en | 0.892 | | hi | 0.868 | | ko | 0.784 | | es | 0.84 | | ta | 0.78 | | fr | 0.82 | | vi | 0.844 | | ru | 0.668 | | af | 0.856 | | hu | 0.812 | ## Usage You can use this model directly with a text-classification pipeline ```python from transformers import pipeline classifier = pipeline(task="text-classification", model="Genius1237/xlm-roberta-large-tydip") sentences = ["Could you please get me a glass of water", "mere liye पानी का एक गिलास ले आओ "] print(classifier(sentences)) # [{'label': 'polite', 'score': 0.9076159000396729}, {'label': 'impolite', 'score': 0.765066385269165}] ``` More advanced usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch tokenizer = AutoTokenizer.from_pretrained('Genius1237/xlm-roberta-large-tydip') model = AutoModelForSequenceClassification.from_pretrained('Genius1237/xlm-roberta-large-tydip') text = "Could you please get me a glass of water" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) prediction = torch.argmax(output.logits).item() print(model.config.id2label[prediction]) # polite ``` ## Citation ``` @inproceedings{srinivasan-choi-2022-tydip, title = "{T}y{D}i{P}: A Dataset for Politeness Classification in Nine Typologically Diverse Languages", author = "Srinivasan, Anirudh and Choi, Eunsol", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-emnlp.420", doi = "10.18653/v1/2022.findings-emnlp.420", pages = "5723--5738", abstract = "We study politeness phenomena in nine typologically diverse languages. Politeness is an important facet of communication and is sometimes argued to be cultural-specific, yet existing computational linguistic study is limited to English. We create TyDiP, a dataset containing three-way politeness annotations for 500 examples in each language, totaling 4.5K examples. We evaluate how well multilingual models can identify politeness levels {--} they show a fairly robust zero-shot transfer ability, yet fall short of estimated human accuracy significantly. We further study mapping the English politeness strategy lexicon into nine languages via automatic translation and lexicon induction, analyzing whether each strategy{'}s impact stays consistent across languages. Lastly, we empirically study the complicated relationship between formality and politeness through transfer experiments. We hope our dataset will support various research questions and applications, from evaluating multilingual models to constructing polite multilingual agents.", } ```
yentinglin/Taiwan-LLM-7B-v2.0-base
yentinglin
"2023-12-01T02:36:37Z"
994
14
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "arxiv:2311.17487", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-06T10:06:12Z"
--- license: apache-2.0 language: - zh widget: - text: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:" library_name: transformers pipeline_tag: text-generation extra_gated_heading: Acknowledge license to accept the repository. extra_gated_prompt: Please contact the author for access. extra_gated_button_content: Acknowledge license 同意以上內容 extra_gated_fields: Name: text Mail: text Organization: text Country: text Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox 使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox --- <img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # 🌟 Checkout [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟 # Model Card for Taiwan LLM 7B v2.0 base Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan. Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning. This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances. It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance. For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf). ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw) - **Finetuned from model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/yentinglin/meta-llama/Llama-2-7b-hf) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/MiuLab/Taiwan-LLaMa - **Demo:** https://twllm.com/ ## Performance ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/HTwIzw6RDha2-PhuWqSuI.png) ## Intended uses You should fine-tuned this model for instruction-following / chat application. ### Training hyperparameters ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/MdvHwdUvH-c926qyRAw7K.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/kKpkvxDzOEyiAoTqmzRYO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/FsnlJ_fkRxf7fn5RKZnjE.png) The following hyperparameters were used during training: - learning_rate: 5e-05 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5.0 ## Citation If you find Taiwan LLM is useful in your work, please cite it with: ``` @misc{lin2023taiwan, title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model}, author={Yen-Ting Lin and Yun-Nung Chen}, year={2023}, eprint={2311.17487}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` # Acknowledgement Taiwan LLM v2 is conducted in collaboration with [Ubitus K.K.](http://ubitus.net). Ubitus provides valuable compute resources for the project.
CorticalStack/shadow-clown-7B-dare
CorticalStack
"2024-03-03T10:46:33Z"
994
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "CorticalStack/pastiche-crown-clown-7b-dare-dpo", "CultriX/NeuralTrix-7B-dpo", "CorticalStack/neurotic-crown-clown-7b-ties", "arxiv:2311.03099", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-03T10:36:43Z"
--- license: apache-2.0 tags: - merge - mergekit - CorticalStack/pastiche-crown-clown-7b-dare-dpo - CultriX/NeuralTrix-7B-dpo - CorticalStack/neurotic-crown-clown-7b-ties --- <img src="shadow_clown.png" alt="Shadow clown logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # shadow-clown-7B-dare shadow-clown-7B-dare is a DARE merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo) * [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo) * [CorticalStack/neurotic-crown-clown-7b-ties](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-ties) See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method. ## 🧩 Configuration ```yaml models: - model: yam-peleg/Experiment26-7B - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo parameters: density: 0.52 weight: 0.4 - model: CultriX/NeuralTrix-7B-dpo parameters: density: 0.52 weight: 0.2 - model: CorticalStack/neurotic-crown-clown-7b-ties parameters: density: 0.52 weight: 0.3 merge_method: dare_ties base_model: yam-peleg/Experiment26-7B parameters: int8_mask: true dtype: bfloat16 ```
mychen76/mistral-7b-merged-slerp
mychen76
"2024-03-22T21:57:25Z"
994
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-09T23:02:03Z"
--- license: apache-2.0 tags: - merge --- # mistral-7b-merged-slerp mistral-7b-merged-slerp is a merge of the following models: ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "mychen76/mistral-7b-merged-slerp" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "why the sky is blue."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mychen76__mistral-7b-merged-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |71.09| |AI2 Reasoning Challenge (25-Shot)|67.75| |HellaSwag (10-Shot) |86.17| |MMLU (5-Shot) |64.05| |TruthfulQA (0-shot) |59.85| |Winogrande (5-shot) |80.19| |GSM8k (5-shot) |68.54|
SaisExperiments/Experiment-3
SaisExperiments
"2024-04-04T13:24:19Z"
994
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-04T10:57:38Z"
--- library_name: transformers tags: - unsloth - trl - sft license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details * Capybara-ShareGPT on Experiment-2 * Experiment-2 is Pippa-ShareGPT-Trimmed on Mistral-7B-0.1 ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qwp4w3hyb/Phi-3-mini-128k-instruct-iMat-GGUF
qwp4w3hyb
"2024-05-22T09:04:21Z"
994
0
null
[ "gguf", "nlp", "code", "microsoft", "phi", "instruct", "finetune", "imatrix", "importance matrix", "text-generation", "multilingual", "base_model:microsoft/Phi-3-mini-128k-instruct", "license:mit", "region:us" ]
text-generation
"2024-04-26T22:05:19Z"
--- license: mit license_link: >- https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation base_model: microsoft/Phi-3-mini-128k-instruct tags: - nlp - code - microsoft - phi - instruct - finetune - gguf - imatrix - importance matrix --- # Quant Infos - Requires latest llama.cpp master; - quants done with an importance matrix for improved quantization loss - gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt) - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [201cc11afa0a1950e1f632390b2ac6c937a0d8f0](https://github.com/ggerganov/llama.cpp/commit/201cc11afa0a1950e1f632390b2ac6c937a0d8f0) - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset. ``` ./imatrix -c 512 -m $model_name-bf16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-bf16-gmerged.dat ``` # Original Model Card: ## Model Summary The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + Phi-3 ONNX: [128K](https://aka.ms/Phi3-mini-128k-instruct-onnx) ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3 Mini-128K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. ### Tokenizer Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion<|end|>\n<|assistant|> ``` For example: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>`. In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-128k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct") messages = [ {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. ### Fine-tuning A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5. All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. | | Phi-3-Mini-128K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 | | HumanEval <br> 0-Shot | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4| 37.8 | 62.2 | | MBPP <br> 3-Shot | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx) ## Cross Platform Support ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx). Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
minmingzhu02/Mistral-7B-v0.1-llm-on-ray-deepspeed-lora
minmingzhu02
"2024-06-17T09:16:28Z"
994
0
peft
[ "peft", "safetensors", "optimum_habana", "region:us" ]
null
"2024-06-17T09:16:19Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
mrm8488/codebert-base-finetuned-detect-insecure-code
mrm8488
"2021-05-20T18:19:02Z"
993
24
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "en", "dataset:codexglue", "arxiv:2002.08155", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- language: en datasets: - codexglue --- # CodeBERT fine-tuned for Insecure Code Detection 💾⛔ [codebert-base](https://huggingface.co/microsoft/codebert-base) fine-tuned on [CodeXGLUE -- Defect Detection](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) dataset for **Insecure Code Detection** downstream task. ## Details of [CodeBERT](https://arxiv.org/abs/2002.08155) We present CodeBERT, a bimodal pre-trained model for programming language (PL) and nat-ural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language codesearch, code documentation generation, etc. We develop CodeBERT with Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing. ## Details of the downstream task (code classification) - Dataset 📚 Given a source code, the task is to identify whether it is an insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attack. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code. The [dataset](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) used comes from the paper [*Devign*: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks](http://papers.nips.cc/paper/9209-devign-effective-vulnerability-identification-by-learning-comprehensive-program-semantics-via-graph-neural-networks.pdf). All projects are combined and splitted 80%/10%/10% for training/dev/test. Data statistics of the dataset are shown in the below table: | | #Examples | | ----- | :-------: | | Train | 21,854 | | Dev | 2,732 | | Test | 2,732 | ## Test set metrics 🧾 | Methods | ACC | | -------- | :-------: | | BiLSTM | 59.37 | | TextCNN | 60.69 | | [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 61.05 | | [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 62.08 | | [Ours](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) | **65.30** | ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np tokenizer = AutoTokenizer.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code') model = AutoModelForSequenceClassification.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code') inputs = tokenizer("your code here", return_tensors="pt", truncation=True, padding='max_length') labels = torch.tensor([1]).unsqueeze(0) # Batch size 1 outputs = model(**inputs, labels=labels) loss = outputs.loss logits = outputs.logits print(np.argmax(logits.detach().numpy())) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
uclanlp/plbart-python-en_XX
uclanlp
"2021-11-09T17:09:27Z"
993
3
transformers
[ "transformers", "pytorch", "plbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
Entry not found
microsoft/xclip-base-patch16
microsoft
"2022-09-08T11:06:25Z"
993
0
transformers
[ "transformers", "pytorch", "xclip", "feature-extraction", "vision", "video-classification", "en", "arxiv:2208.02816", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
video-classification
"2022-09-07T14:55:19Z"
--- language: en license: mit tags: - vision - video-classification model-index: - name: nielsr/xclip-base-patch16 results: - task: type: video-classification dataset: name: Kinetics 400 type: kinetics-400 metrics: - type: top-1 accuracy value: 83.8 - type: top-5 accuracy value: 95.7 --- # X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 16) trained fully-supervised on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). This model was trained using 8 frames per video, at a resolution of 224x224. Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs. ![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png) This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval. ## Intended uses & limitations You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#). ## Training data This model was trained on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). ### Preprocessing The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247). The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285). During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results This model achieves a top-1 accuracy of 83.8% and a top-5 accuracy of 95.7%.
timm/eva_large_patch14_196.in22k_ft_in1k
timm
"2024-02-10T23:27:53Z"
993
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2211.07636", "license:mit", "region:us" ]
image-classification
"2022-12-22T07:05:00Z"
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - imagenet-22k --- # Model card for eva_large_patch14_196.in22k_ft_in1k An EVA image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-1k by paper authors. NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.1 - GMACs: 61.6 - Activations (M): 63.5 - Image size: 196 x 196 - **Papers:** - EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636 - **Pretrain Dataset:** ImageNet-22k - **Dataset:** ImageNet-1k - **Original:** - https://github.com/baaivision/EVA - https://huggingface.co/BAAI/EVA ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('eva_large_patch14_196.in22k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'eva_large_patch14_196.in22k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 197, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA, title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale}, author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2211.07636}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf
RichardErkhov
"2024-06-05T11:47:51Z"
993
0
null
[ "gguf", "region:us" ]
null
"2024-06-05T11:36:41Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MagicPrompt-Dalle - GGUF - Model creator: https://huggingface.co/Gustavosta/ - Original model: https://huggingface.co/Gustavosta/MagicPrompt-Dalle/ | Name | Quant method | Size | | ---- | ---- | ---- | | [MagicPrompt-Dalle.Q2_K.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q2_K.gguf) | Q2_K | 0.08GB | | [MagicPrompt-Dalle.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.IQ3_XS.gguf) | IQ3_XS | 0.08GB | | [MagicPrompt-Dalle.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.IQ3_S.gguf) | IQ3_S | 0.08GB | | [MagicPrompt-Dalle.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [MagicPrompt-Dalle.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.IQ3_M.gguf) | IQ3_M | 0.09GB | | [MagicPrompt-Dalle.Q3_K.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q3_K.gguf) | Q3_K | 0.09GB | | [MagicPrompt-Dalle.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [MagicPrompt-Dalle.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q3_K_L.gguf) | Q3_K_L | 0.1GB | | [MagicPrompt-Dalle.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.IQ4_XS.gguf) | IQ4_XS | 0.1GB | | [MagicPrompt-Dalle.Q4_0.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q4_0.gguf) | Q4_0 | 0.1GB | | [MagicPrompt-Dalle.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | [MagicPrompt-Dalle.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [MagicPrompt-Dalle.Q4_K.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q4_K.gguf) | Q4_K | 0.11GB | | [MagicPrompt-Dalle.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q4_K_M.gguf) | Q4_K_M | 0.11GB | | [MagicPrompt-Dalle.Q4_1.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q4_1.gguf) | Q4_1 | 0.11GB | | [MagicPrompt-Dalle.Q5_0.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q5_0.gguf) | Q5_0 | 0.11GB | | [MagicPrompt-Dalle.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [MagicPrompt-Dalle.Q5_K.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q5_K.gguf) | Q5_K | 0.12GB | | [MagicPrompt-Dalle.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q5_K_M.gguf) | Q5_K_M | 0.12GB | | [MagicPrompt-Dalle.Q5_1.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q5_1.gguf) | Q5_1 | 0.12GB | | [MagicPrompt-Dalle.Q6_K.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q6_K.gguf) | Q6_K | 0.13GB | | [MagicPrompt-Dalle.Q8_0.gguf](https://huggingface.co/RichardErkhov/Gustavosta_-_MagicPrompt-Dalle-gguf/blob/main/MagicPrompt-Dalle.Q8_0.gguf) | Q8_0 | 0.17GB | Original model description: --- license: mit --- # MagicPrompt - Dall-E 2 This is a model from the MagicPrompt series of models, which are [GPT-2](https://huggingface.co/gpt2) models intended to generate prompt texts for imaging AIs, in this case: [Dall-E 2](https://openai.com/dall-e-2/). ## 🖼️ Here's an example: <img src="https://files.catbox.moe/h10plz.png"> This model was trained with a set of about 26k of data filtered and extracted from various places such as: [The Web Archive](https://web.archive.org/web/*/https://labs.openai.com/s/*), [The SubReddit for Dall-E 2](https://www.reddit.com/r/dalle2) and [dalle2.gallery](https://dalle2.gallery/#search). This may be a relatively small dataset, but we have to consider that Dall-E 2 is a closed service and we only have prompts from people who share it and have access to the service, for now. The set was trained with about 40,000 steps and I have plans to improve the model if possible. If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Dalle](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Dalle)". ## 💻 You can see other MagicPrompt models: - For Stable Diffusion: [Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion) - For Midjourney: [Gustavosta/MagicPrompt-Midjourney](https://huggingface.co/Gustavosta/MagicPrompt-Midjourney) **[⚠️ In progress]** - MagicPrompt full: [Gustavosta/MagicPrompt](https://huggingface.co/Gustavosta/MagicPrompt) **[⚠️ In progress]** ## ⚖️ Licence: [MIT](https://huggingface.co/models?license=license:mit) When using this model, please credit: [Gustavosta](https://huggingface.co/Gustavosta) **Thanks for reading this far! :)**
KnutJaegersberg/RWKV-4-PilePlus-430M-20230520-6162-1018Gtokens-ctx4098
KnutJaegersberg
"2023-09-05T09:59:27Z"
992
0
transformers
[ "transformers", "pytorch", "rwkv", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2023-09-05T08:54:51Z"
--- license: apache-2.0 --- This is just a standard conversion to hf transformers format of models from here: https://huggingface.co/BlinkDL/rwkv-4-pileplus According to the documentation I found, this model should have seen roundabout 1.3 trillion tokens!
abacusai/MM-OV-bagel-DPO-34b-c1000-250
abacusai
"2024-01-24T04:17:48Z"
992
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:abacusai/MetaMathFewshot", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-23T22:00:23Z"
--- license: apache-2.0 datasets: - abacusai/MetaMathFewshot --- DPO finetune of (https://huggingface.co/abacusai/MM-Orc-Vic-bagel-34b-c1000) on the Bagel DPO dataset ### Evaluation Results | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | | | | | | | | |
mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF
mradermacher
"2024-05-10T16:46:16Z"
992
0
transformers
[ "transformers", "gguf", "en", "base_model:grimulkan/llama2_70b_longlora_fp16_32k_ROPE8", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-03-09T22:00:21Z"
--- base_model: grimulkan/llama2_70b_longlora_fp16_32k_ROPE8 language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/grimulkan/llama2_70b_longlora_fp16_32k_ROPE8 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
timm/pvt_v2_b2.in1k
timm
"2023-04-25T04:03:51Z"
991
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2106.13797", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-25T04:03:22Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for pvt_v2_b2 A PVT-v2 (Pyramid Vision Transformer) image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 25.4 - GMACs: 4.0 - Activations (M): 27.5 - Image size: 224 x 224 - **Papers:** - PVT v2: Improved Baselines with Pyramid Vision Transformer: https://arxiv.org/abs/2106.13797 - **Dataset:** ImageNet-1k - **Original:** https://github.com/whai362/PVT ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('pvt_v2_b2', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'pvt_v2_b2', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 320, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'pvt_v2_b2', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{wang2021pvtv2, title={Pvtv2: Improved baselines with pyramid vision transformer}, author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling}, journal={Computational Visual Media}, volume={8}, number={3}, pages={1--10}, year={2022}, publisher={Springer} } ```
shaowenchen/chinese-alpaca-2-13b-16k-gguf
shaowenchen
"2023-09-12T00:51:47Z"
991
8
null
[ "gguf", "meta", "llama", "llama-2", "alpaca", "alpaca-2", "chinese", "text-generation", "zh", "license:other", "region:us" ]
text-generation
"2023-09-08T23:21:12Z"
--- inference: false language: - zh license: other model_creator: ziqingyang model_link: https://huggingface.co/ziqingyang/chinese-alpaca-2-13b-16k model_name: chinese-alpaca-2-13b-16k model_type: llama pipeline_tag: text-generation quantized_by: shaowenchen tasks: - text2text-generation tags: - meta - gguf - llama - llama-2 - alpaca - alpaca-2 - chinese --- ## Provided files | Name | Quant method | Size | | ------------------------------------ | ------------ | ------- | | chinese-alpaca-2-13b-16k.Q2_K.gguf | Q2_K | 5.2 GB | | chinese-alpaca-2-13b-16k.Q3_K.gguf | Q3_K | 6.0 GB | | chinese-alpaca-2-13b-16k.Q3_K_L.gguf | Q3_K_L | 6.6 GB | | chinese-alpaca-2-13b-16k.Q3_K_S.gguf | Q3_K_S | 5.4 GB | | chinese-alpaca-2-13b-16k.Q4_0.gguf | Q4_0 | 7.0 GB | | chinese-alpaca-2-13b-16k.Q4_1.gguf | Q4_1 | 7.8 GB | | chinese-alpaca-2-13b-16k.Q4_K.gguf | Q4_K | 7.5 GB | | chinese-alpaca-2-13b-16k.Q4_K_S.gguf | Q4_K_S | 7.1 GB | | chinese-alpaca-2-13b-16k.Q5_0.gguf | Q5_0 | 8.5 GB | | chinese-alpaca-2-13b-16k.Q5_1.gguf | Q5_1 | 9.3 GB | | chinese-alpaca-2-13b-16k.Q5_K.gguf | Q5_K | 8.8 GB | | chinese-alpaca-2-13b-16k.Q5_K_S.gguf | Q5_K_S | 8.5 GB | | chinese-alpaca-2-13b-16k.Q6_K.gguf | Q6_K | 10.0 GB | | chinese-alpaca-2-13b-16k.Q8_0.gguf | Q8_0 | 13.0 GB | | chinese-alpaca-2-13b-16k.gguf | full | 25.0 GB | Usage: ``` docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest ``` and you can view http://localhost:8000/docs to see the swagger UI.
llmware/bling-sheared-llama-1.3b-0.1
llmware
"2024-02-13T08:59:27Z"
991
25
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-22T17:03:12Z"
--- license: apache-2.0 inference: false --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a Sheared-LLaMA-1.3B base model. BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even without using any advanced quantization optimizations. ### Benchmark Tests Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester) Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations. --**Accuracy Score**: **84.50** correct out of 100 --Not Found Classification: 20.0% --Boolean: 66.25% --Math/Logic: 9.4% --Complex Questions (1-5): 1 (Low) --Summarization Quality (1-5): 3 (Coherent, extractive) --Hallucinations: No hallucinations observed in test runs. For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** llmware - **Model type:** Instruct-trained decoder - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Finetuned from model [optional]:** princeton-nlp/Sheared-LLaMA-1.3B ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> The intended use of BLING models is two-fold: 1. Provide high-quality Instruct models that can run on a laptop for local testing. We have found it extremely useful when building a proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases. 2. Push the state of the art for smaller Instruct-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services, legal and regulatory industries with complex information sources. Rather than try to be "all things to all people," BLING models try to focus on a narrower set of Instructions more suitable to a ~1B parameter GPT model. BLING is ideal for rapid prototyping, testing, and the ability to perform an end-to-end workflow locally on a laptop without having to send sensitive information over an Internet-based API. The first BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms. ## How to Get Started with the Model The fastest way to get started with BLING is through direct import in transformers: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1") model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1") Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents. The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as: full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:" The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts: 1. Text Passage Context, and 2. Specific question or instruction based on the text passage To get the best results, package "my_prompt" as follows: my_prompt = {{text_passage}} + "\n" + {{question/instruction}} If you are using a HuggingFace generation script: # prepare prompt packaging used in fine-tuning process new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:" inputs = tokenizer(new_prompt, return_tensors="pt") start_of_output = len(inputs.input_ids[0]) # temperature: set at 0.3 for consistency of output # max_new_tokens: set at 100 - may prematurely stop a few of the summaries outputs = model.generate( inputs.input_ids.to(device), eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.3, max_new_tokens=100, ) output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True) # note: due to artifact of the fine-tuning, use this post-processing with HF generation eot = output_only.find("<|endoftext|>") if eot > -1: output_only = output_only[:eot] ## Citation [optional] This BLING model was built on top of a "Sheared Llama" model base - for more information about the "Sheared Llama" model, please see the paper referenced below: @article{xia2023sheared, title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning}, author={Xia, Mengzhou and Gao, Tianyu, and Zeng Zhiyuan, and Chen Danqi}, year={2023} } ## Model Card Contact Darren Oberst & llmware team
kwchoi/DPO_mistral_7b_ultra_0129_1k
kwchoi
"2024-01-29T00:38:03Z"
991
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:argilla/ultrafeedback-binarized-preferences-cleaned", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-29T00:26:26Z"
--- license: apache-2.0 datasets: - argilla/ultrafeedback-binarized-preferences-cleaned language: - en --- Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performanceTesting Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performanceTesting Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performanceTesting Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance Testing Mistral-Instruct model with Orca DPO dataset. Trying to see the effects of DPO for own study. Used Mistral-7B-Instrcut-v0.2 model due to its good performance
juhwanlee/llmdo-Mistral-7B-case-1
juhwanlee
"2024-03-11T13:31:40Z"
991
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-11T07:04:43Z"
--- license: apache-2.0 datasets: - Open-Orca/OpenOrca language: - en --- # Model Details * Model Description: This model is test for data ordering. * Developed by: Juhwan Lee * Model Type: Large Language Model # Model Architecture This model is based on Mistral-7B-v0.1. We fine-tuning this model for data ordering task. Mistral-7B-v0.1 is a transformer model, with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We random sample Open-Orca dataset. (We finetune the 100,000 dataset) # Guthub https://github.com/trailerAI # License Apache License 2.0
ChaoticNeutrals/RPMix-4x7B-MoE
ChaoticNeutrals
"2024-03-30T06:57:46Z"
991
7
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "not-for-all-audiences", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-30T05:49:30Z"
--- license: apache-2.0 language: - en tags: - not-for-all-audiences --- # RPMix 4x7B MoE A mixture of experts model using my best scoring RP models. This model should primarily be used for RP and creative writing. ``` base_model: ResplendentAI/DaturaCookie_7B gate_mode: hidden experts_per_token: 2 experts: - source_model: ChaoticNeutrals/RP_Vision_7B positive_prompts: - "chat" - "assistant" - "tell me" - "explain" - "I want" - source_model: ChaoticNeutrals/BuRP_7B positive_prompts: - "show me" - "touch" - "believe" - "see" - "love" - source_model: ResplendentAI/DaturaCookie_7B positive_prompts: - "storywriting" - "write" - "scene" - "story" - "character" - source_model: ChaoticNeutrals/Kool-Aid_7B positive_prompts: - "sensual" - "sexual" - "horny" - "turned on" - "intimate" dtype: bfloat16 ```
uclanlp/plbart-java-en_XX
uclanlp
"2021-11-09T17:08:51Z"
990
0
transformers
[ "transformers", "pytorch", "plbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
Entry not found
mzwing/Anything-GGUF
mzwing
"2024-03-15T06:03:05Z"
990
0
null
[ "gguf", "region:us" ]
null
"2024-03-15T00:13:51Z"
Entry not found
LeroyDyer/Mixtral_AI_Cyber_4.0
LeroyDyer
"2024-04-09T16:43:03Z"
990
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "biology", "chemistry", "medical", "conversational", "en", "base_model:LeroyDyer/Mixtral_AI_Cyber_3.m1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-30T11:55:47Z"
--- base_model: - LeroyDyer/Mixtral_AI_Cyber_3.m1 - LeroyDyer/Mixtral_AI_Cyber_3.1_SFT - LeroyDyer/Mixtral_AI_Cyber_2.0 - LeroyDyer/Mixtral_AI_Cyber_1.0 - LeroyDyer/Mixtral_AI_Cyber_3.m2 - LeroyDyer/Mixtral_AI_Cyber_3.0 library_name: transformers tags: - biology - chemistry - medical license: mit language: - en metrics: - bertscore - code_eval - character --- ### Models Merged The following models were included in the merge: * [LeroyDyer/Mixtral_AI_Cyber_3.m1](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.m1) * [LeroyDyer/Mixtral_AI_Cyber_3.1_SFT](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.1_SFT) * [LeroyDyer/Mixtral_AI_Cyber_2.0](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_2.0) * [LeroyDyer/Mixtral_AI_Cyber_1.0](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_1.0) * [LeroyDyer/Mixtral_AI_Cyber_3.0](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.0) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: LeroyDyer/Mixtral_AI_Cyber_3.m2 parameters: density: [0.256, 0.512, 0.128] # density gradient weight: 0.382 - model: LeroyDyer/Mixtral_AI_Cyber_2.0 parameters: density: 0.382 weight: [0.256, 0.128, 0.256, 0.128] # weight gradient - model: LeroyDyer/Mixtral_AI_Cyber_3.0 parameters: density: 0.382 weight: [0.128, 0.512, 0.128, 0.128] # weight gradient - model: LeroyDyer/Mixtral_AI_Cyber_3.m1 parameters: density: 0.382 weight: [0.256, 0.256, 0.512, 0.128] # weight gradient - model: LeroyDyer/Mixtral_AI_Cyber_1.0 parameters: density: 0.382 weight: [0.128, 0.512, 0.128, 0.128] # weight gradient - model: LeroyDyer/Mixtral_AI_Cyber_3.1_SFT parameters: density: 0.382 weight: - filter: mlp value: 0.5 - value: 0 merge_method: ties base_model: LeroyDyer/Mixtral_AI_Cyber_3.m2 parameters: normalize: true int8_mask: true dtype: float16 ```
Josephgflowers/Tinyllama-1.5B-Cinder-Test-2
Josephgflowers
"2024-04-05T14:12:47Z"
990
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-05T14:07:47Z"
--- license: mit --- This is a depth up scalled model of the 616M cinder model and Cinder v2. This model still needs further training. Putting it up for testing. More information coming. Maybe. Lol. Here is a brief desc of the project: Im mixing a lot of techniques I guess that I found interesting and have been testing, HF Cosmo is not great but decent and was fully trained in 4 days using a mix of more fine tuned directed datasets and some synthetic textbook style datasets. So I used pruning and a similar mix as Cosmo on tinyllama (trained on a ton of data for an extended time for its size) to keep the tinyllama model coherent during pruning. Now I am trying to depth up scale it using my pruned model and an original, Then taking a majority of each and combining them to create a larger model. Then it needs more training, then fine tuning. Then theoretically it will be a well performing 1.5B model (that didn't need full scale training). Test 2, some training, re depth upscalled with cinder reason 1.3B and merged back with 1.5 and slight training. Continuing training from this model for next iteration.
sampathkethineedi/industry-classification
sampathkethineedi
"2020-07-16T15:27:38Z"
989
20
transformers
[ "transformers", "pytorch", "tf", "distilbert", "text-classification", "tensorflow", "industry", "buisiness", "description", "multi-class", "classification", "en", "autotrain_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- language: "en" thumbnail: "https://huggingface.co/sampathkethineedi" tags: - distilbert - pytorch - tensorflow - text-classification - industry - buisiness - description - multi-class - classification liscence: "mit" inference: false --- # industry-classification ## Model description DistilBERT Model to classify a business description into one of **62 industry tags**. Trained on 7000 samples of Business Descriptions and associated labels of companies in India. ## How to use PyTorch and TF models available ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("sampathkethineedi/industry-classification") model = AutoModelForSequenceClassification.from_pretrained("sampathkethineedi/industry-classification") industry_tags = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) industry_tags("Stellar Capital Services Limited is an India-based non-banking financial company ... loan against property, management consultancy, personal loans and unsecured loans.") '''Ouput''' [{'label': 'Consumer Finance', 'score': 0.9841355681419373}] ``` ## Limitations and bias Training data is only for Indian companies
raicrits/Hermes7b_ITA
raicrits
"2023-09-08T09:57:43Z"
989
0
transformers
[ "transformers", "pytorch", "LLM", "Italian", "Orca", "Hermes", "LLama2", "text-generation", "it", "dataset:raicrits/Orca_ITA_200k", "arxiv:2106.09685", "arxiv:1910.09700", "license:other", "endpoints_compatible", "region:us" ]
text-generation
"2023-08-04T08:12:27Z"
--- license: other datasets: - raicrits/Orca_ITA_200k language: - it pipeline_tag: text-generation tags: - LLM - Italian - Orca - Hermes - LLama2 library_name: transformers --- # Model Card for Model raicrits/Hermes7b_ITA <!-- Provide a quick summary of what the model is/does. --> An open-source LLaMa2 language model of 7b parameters fine-tuned (using as base model [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)) to follow instructions in italian. ### Model Description This model is a LLM of 7b parameters based on [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b), a version of [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) fine-tuned to follow instructions. The model was further fine-tuned in order to follow instructions in italian, using [LoRA](https://arxiv.org/abs/2106.09685) approach and a dataset of 120k random pairs of instruction/answer from [raicrits/Orca_ITA_200k](https://huggingface.co/datasets/raicrits/Orca_ITA_200k). This repository contains the model weights merged with the LoRA adapters obtained in the fine-tuning procedure. - **Developed by:** Stefano Scotta ([email protected]) - **Model type:** LLM fine-tuned to follow instructions - **Language(s) (NLP):** Italian - **License:** Other - **Finetuned from model:** [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> The model can be used as is to respond to simple instructions in Italian or can be further fine-tuned to perform specific tasks. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> As any other LLM it is possible that the model generates content which does not correspond to the reality as well as wrong, biased, offensive and inappropriate answers. ## How to Get Started with the Model **Prompt template:** ``` python """### Instruction: {instruction} ### Response: """ ``` **Usage:** Use the code below to get started with the model. ``` python import os import torch import sys from transformers import LlamaForCausalLM, AutoTokenizer def generate_prompt_test(instruction): prompt = f"""### Instruction: {instruction} ### Response: """ return prompt model_name = "raicrits/Hermes7b_ITA" model = LlamaForCausalLM.from_pretrained( model_name, device_map="auto", torch_dtype=torch.bfloat16 ) model.config.use_cache = True tokenizer = AutoTokenizer.from_pretrained(model_name, add_eos_token=False) prompt = generate_prompt_test("Cosa puoi dirmi sul dio Hermes?") inputs = tokenizer(prompt, return_tensors="pt").to("cuda") outputs = model.generate(**inputs, do_sample = True, num_beams = 2, top_k=50, top_p= 0.95, max_new_tokens=256, early_stopping = True) print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip()) ``` ``` python """Hermes è un dio dell'antica Grecia. Era il dio del commercio, della comunicazione e del trasporto. Era anche il dio della mente e della intelligenza. Era noto per il suo eloquente linguaggio e la sua capacità di spostarsi velocemente. Era considerato il messaggero degli dèi e spesso veniva raffigurato con un cappello di pelle di capra e sandali.""" ``` ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The model was fine-tuned on 120k random records of [raicrits/Orca_ITA_200k](https://huggingface.co/datasets/raicrits/Orca_ITA_200k). ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> The fine-tuning procedure was done using [LoRA](https://arxiv.org/abs/2106.09685) approach. #### Training Hyperparameters **Training setting:** - train epochs=3, - learning_rate=2e-4, - mixed precision training: float16 **LoRA configuration:** - r= 8 - lora_alpha=16 - target_modules=["q_proj","v_proj"] - lora_dropout=0.05 - bias="none" - task_type=TaskType.CAUSAL_LM ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 1 NVIDIA A100/40Gb - **Hours used:** 78 - **Cloud Provider:** Private Infrastructure - **Carbon Emitted:** 8.42 kg eq. CO2 ## Model Card Authors Stefano Scotta ([email protected]) ## Model Card Contact [email protected]
s3nh/Noromaid-Aeryth-7B
s3nh
"2024-03-04T12:39:27Z"
989
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:NeverSleep/Noromaid-7b-v0.2", "base_model:NeuralNovel/Aeryth-7B-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-08T08:00:22Z"
--- license: apache-2.0 library_name: transformers tags: - mergekit - merge base_model: - NeverSleep/Noromaid-7b-v0.2 - NeuralNovel/Aeryth-7B-v0.1 pipeline_tag: text-generation model-index: - name: Noromaid-Aeryth-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 56.74 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 78.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 57.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 65.66 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 71.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 16.76 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Noromaid-Aeryth-7B name: Open LLM Leaderboard --- # Noromaid-Aeryth-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2) * [NeuralNovel/Aeryth-7B-v0.1](https://huggingface.co/NeuralNovel/Aeryth-7B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: NeverSleep/Noromaid-7b-v0.2 dtype: float16 merge_method: slerp parameters: t: - filter: self_attn value: [0.22, 0.61, 0.46, 0.77, 1.0] - filter: mlp value: [0.78, 0.39, 0.54, 0.23, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: NeuralNovel/Aeryth-7B-v0.1 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Noromaid-Aeryth-7B) | Metric |Value| |---------------------------------|----:| |Avg. |57.82| |AI2 Reasoning Challenge (25-Shot)|56.74| |HellaSwag (10-Shot) |78.62| |MMLU (5-Shot) |57.29| |TruthfulQA (0-shot) |65.66| |Winogrande (5-shot) |71.82| |GSM8k (5-shot) |16.76|
Chrisisis/5DtHymom4Md2EiPwwN8GJ7ztRHfNQutrHkeyw9WfftgpFnLR_vgg
Chrisisis
"2024-02-24T08:29:46Z"
989
0
keras
[ "keras", "region:us" ]
null
"2024-02-11T17:22:16Z"
Entry not found
InnerI/A-I-0xtom-7B-slerp
InnerI
"2024-03-04T23:15:05Z"
989
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "0x0dad0/nous_nous_v2_0", "tomaszki/nous-thirty", "conversational", "base_model:0x0dad0/nous_nous_v2_0", "base_model:tomaszki/nous-thirty", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-16T01:09:57Z"
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - 0x0dad0/nous_nous_v2_0 - tomaszki/nous-thirty base_model: - 0x0dad0/nous_nous_v2_0 - tomaszki/nous-thirty model-index: - name: A-I-0xtom-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 58.19 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 77.64 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 58.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.78 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 73.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 40.18 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard --- # A-I-0xtom-7B-slerp A-I-0xtom-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [0x0dad0/nous_nous_v2_0](https://huggingface.co/0x0dad0/nous_nous_v2_0) * [tomaszki/nous-thirty](https://huggingface.co/tomaszki/nous-thirty) # Avg model loss 0.3912096044793725 I used this testing script that loads your local model, pulls the latest data from cortex and calculates the loss: [avg loss script](https://gist.github.com/romanorac/59ccde7cbf07d8950ef9fb5b5db6a24e) ## 🧩 Configuration ```yaml slices: - sources: - model: 0x0dad0/nous_nous_v2_0 layer_range: [0, 32] - model: tomaszki/nous-thirty layer_range: [0, 32] merge_method: slerp base_model: 0x0dad0/nous_nous_v2_0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "InnerI/A-I-0xtom-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_InnerI__A-I-0xtom-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |60.46| |AI2 Reasoning Challenge (25-Shot)|58.19| |HellaSwag (10-Shot) |77.64| |MMLU (5-Shot) |58.74| |TruthfulQA (0-shot) |54.78| |Winogrande (5-shot) |73.24| |GSM8k (5-shot) |40.18|
EbanLee/kobart-summary-v3
EbanLee
"2024-03-22T06:07:59Z"
989
5
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "summarization", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2024-03-21T01:39:02Z"
--- language: - ko metrics: - rouge library_name: transformers pipeline_tag: summarization tags: - bart --- # kobart-summary - 이 모델은 [kobart모델](https://huggingface.co/hyunwoongko/kobart)을 [문서요약](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=97), [도서자료요약](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=93), [요약문 및 레포트 생성](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=582) 데이터로 fine-tuning한 모델입니다. - v2와 비슷지만 데이터를 조금 더 추가했고 길게 연결된 문장이 많이 나왔었는데 조금 더 끊어져서 생성됩니다. ## How to use ```python from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration # Load Model and Tokenizer tokenizer = PreTrainedTokenizerFast.from_pretrained("EbanLee/kobart-summary-v3") model = BartForConditionalGeneration.from_pretrained("EbanLee/kobart-summary-v3") # Encoding input_text = "10년 논란 끝에 밑글씨까지 새기고 제작 완료를 눈앞에 둔 ‘광화문 현판’을 원점에서 재검토해 한글 현판으로 교체하자는 주장이 문화계 명사들이 포함된 자칭 ‘시민모임’에서 나왔다.\n 이들은 문화재청이 지난해 8월 최종 확정한 복원안이 시민 의견과 시대정신을 반영한 것이 아니라면서 오는 한글날까지 대대적인 현판 교체 시민운동을 벌이겠다고 예고했다.\n ‘광화문 현판 훈민정음체로 시민모임’(공동대표 강병인‧한재준, 이하 ‘시민모임’)에 이름을 올린 문화예술인은 현재까지 총 24명.\n 이 중엔 2014~2016년 서울시 총괄건축가를 지낸 승효상 이로재 대표와 ‘안상수체’로 유명한 안상수 파주타이포그라피학교 교장, 유영숙 전 환경부장관(세종사랑방 회장), 임옥상 미술가 등이 있다.\n 공동대표인 강병인 작가는 ‘참이슬’ ‘화요’ 등의 상표 글씨로 유명한 캘리그라피(서체) 작가다.\n ‘시민모임’은 14일 오후 서울 종로구의 한 서점에서 기자간담회를 열고 이 같은 입장과 함께 훈민정음 해례 글자꼴로 시범 제작한 모형 현판(1/2 크기 축소판)도 공개할 예정이다.\n 강 공동대표는 13일 기자와 통화에서 “새 현판 제작 과정에서 한글로 만들자는 의견은 묵살됐다”면서 “지난해 8월 이후 문화재청에 거듭 입장을 전했지만 반영되지 않아 시민운동에 나서기로 했다”고 말했다.\n 일단 문화예술인 주축으로 꾸렸지만 조만간 한글협회 등 한글 관련단체들과 연대한다는 방침이다.\n 이들이 배포한 사전자료엔 ^한자현판 설치는 중국의 속국임을 표시하는 것으로 대한민국 정체성에 도움이 되지 않고 ^광화문은 21세기의 중건이지 복원이 아니므로 당대의 시대정신인 한글로 현판을 써야하며 ^한글현판은 미래에 남겨줄 우리 유산을 재창조한다는 의미라는 주장이 담겼다.\n 현재 광화문 현판에 대해선 “고종이 경복궁을 중건할 때 당시 훈련대장이던 임태영이 쓴 광화문 현판의 글씨를 조그만 사진에서 스캐닝하고 이를 다듬어 이명박정부 때 설치된 것”이라면서 복원 기준으로서의 정당성을 깎아내렸다.\n ‘시민모임’에 참여한 승효상 대표도 개인의견을 전제로 “현판을 꼭 한가지만 고집할 필요도 없다.\n 매년 교체할 수도 있고, 광장에서 보이는 정면엔 한글현판, 반대편엔 한자현판을 다는 아이디어도 가능한 것 아니냐”고 말했다.\n 그러면서 “문화재 전문가들은 보수적일 수밖에 없지만 현판이란 게 요즘 말로는 ‘간판’인데 새 시대에 맞게 바꿔 다는 게 바람직하다”고 주장했다.\n" inputs = tokenizer(input_text, return_tensors="pt", padding="max_length", truncation=True, max_length=1026) # Generate Summary Text Ids summary_text_ids = model.generate( input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], bos_token_id=model.config.bos_token_id, eos_token_id=model.config.eos_token_id, length_penalty=1.0, max_length=300, min_length=12, num_beams=6, repetition_penalty=1.5, no_repeat_ngram_size=15, ) # Decoding Text Ids print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True)) ```
backyardai/aya-23-35B-GGUF
backyardai
"2024-06-04T20:56:18Z"
989
0
transformers
[ "transformers", "gguf", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "base_model:CohereForAI/aya-23-35B", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-06-04T19:30:56Z"
--- language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 library_name: transformers base_model: CohereForAI/aya-23-35B model_name: aya-23-35B-GGUF inference: false quantized_by: brooketh parameter_count: 34980831232 --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Aya 23 35B - **Creator:** [CohereForAI](https://huggingface.co/CohereForAI/) - **Original:** [Aya 23 35B](https://huggingface.co/CohereForAI/aya-23-35B) - **Date Created:** 2024-05-19 - **Trained Context:** 8192 tokens - **Description:** Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities, serving 23 languages. It is available with 8 billion and 35 billion parameters; this is the 35 billion parameter version. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
SavirD/distilbert-base-uncased-lora-text-classification
SavirD
"2024-06-27T01:46:55Z"
989
1
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "text-classification", "en", "base_model:distilbert-base-uncased", "doi:10.57967/hf/2601", "license:apache-2.0", "region:us" ]
text-classification
"2024-06-20T16:28:33Z"
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: distilbert-base-uncased metrics: - accuracy model-index: - name: distilbert-base-uncased-lora-text-classification results: [] language: - en pipeline_tag: text-classification --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-lora-text-classification # This model is used for Sentimental Analysis and given an input will return Positive or Negative This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1550 - Accuracy: {'accuracy': 0.883} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:| | No log | 1.0 | 250 | 0.6110 | {'accuracy': 0.832} | | 0.4023 | 2.0 | 500 | 0.5590 | {'accuracy': 0.871} | | 0.4023 | 3.0 | 750 | 0.5852 | {'accuracy': 0.876} | | 0.1908 | 4.0 | 1000 | 0.8232 | {'accuracy': 0.891} | | 0.1908 | 5.0 | 1250 | 0.9061 | {'accuracy': 0.885} | | 0.067 | 6.0 | 1500 | 1.0293 | {'accuracy': 0.886} | | 0.067 | 7.0 | 1750 | 1.1672 | {'accuracy': 0.879} | | 0.0251 | 8.0 | 2000 | 1.1400 | {'accuracy': 0.881} | | 0.0251 | 9.0 | 2250 | 1.1411 | {'accuracy': 0.882} | | 0.0217 | 10.0 | 2500 | 1.1550 | {'accuracy': 0.883} | ### Framework versions - PEFT 0.11.2.dev0 - Transformers 4.41.2 - Pytorch 2.3.0 - Datasets 2.19.0 - Tokenizers 0.19.1
akahana/vit-base-cats-vs-dogs
akahana
"2023-09-19T01:23:03Z"
988
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:cats_vs_dogs", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - image-classification - generated_from_trainer datasets: - cats_vs_dogs metrics: - accuracy base_model: google/vit-base-patch16-224-in21k model-index: - name: vit-base-cats-vs-dogs results: - task: type: image-classification name: Image Classification dataset: name: cats_vs_dogs type: cats_vs_dogs args: default metrics: - type: accuracy value: 0.9883257403189066 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-cats-vs-dogs This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset. It achieves the following results on the evaluation set: - Loss: 0.0369 - Accuracy: 0.9883 ## how to use ```python from transformers import ViTFeatureExtractor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') model = ViTModel.from_pretrained('akahana/vit-base-cats-vs-dogs') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0949 | 1.0 | 2488 | 0.0369 | 0.9883 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
ehdwns1516/bert-base-uncased_SWAG
ehdwns1516
"2021-08-05T09:49:18Z"
988
1
transformers
[ "transformers", "pytorch", "bert", "multiple-choice", "endpoints_compatible", "region:us" ]
multiple-choice
"2022-03-02T23:29:05Z"
# ehdwns1516/bert-base-uncased_SWAG * This model has been trained as a [SWAG dataset](https://huggingface.co/ehdwns1516/bert-base-uncased_SWAG). * Sentence Inference Multiple Choice DEMO: [Ainize DEMO](https://main-sentence-inference-multiple-choice-ehdwns1516.endpoint.ainize.ai/) * Sentence Inference Multiple Choice API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/sentence_inference_multiple_choice) ## Overview Language model: [bert-base-uncased](https://huggingface.co/bert-base-uncased) Language: English Training data: [SWAG dataset](https://huggingface.co/datasets/swag) Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/Multiple_choice_SWAG_finetunning) ## Usage ## In Transformers ``` from transformers import AutoTokenizer, AutoModelForMultipleChoice tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/bert-base-uncased_SWAG") model = AutoModelForMultipleChoice.from_pretrained("ehdwns1516/bert-base-uncased_SWAG") def run_model(candicates_count, context: str, candicates: list[str]): assert len(candicates) == candicates_count, "you need " + candicates_count + " candidates" choices_inputs = [] for c in candicates: text_a = "" # empty context text_b = context + " " + c inputs = tokenizer( text_a, text_b, add_special_tokens=True, max_length=128, padding="max_length", truncation=True, return_overflowing_tokens=True, ) choices_inputs.append(inputs) input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs]) output = model(input_ids=input_ids) return {"result": candicates[torch.argmax(output.logits).item()]} items = list() count = 4 # candicates count context = "your context" for i in range(int(count)): items.append("sentence") result = run_model(count, context, items) ```
MaziyarPanahi/Calme-7B-Instruct-v0.9
MaziyarPanahi
"2024-04-02T14:42:07Z"
988
9
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "7b", "calme", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-02T14:34:08Z"
--- license: apache-2.0 tags: - generated_from_trainer - mistral - 7b - calme model-index: - name: Calme-7B-Instruct-v0.9 results: [] model_name: Calme-7B-Instruct-v0.9 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- <img src="https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/LzEf6vvq2qIiys-q7l9Hq.webp" width="550" /> # MaziyarPanahi/Calme-7B-Instruct-v0.9 ## Model Description Calme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence. ### How to Use ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-7B-Instruct-v0.9") # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.9") model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.9") ``` ### Quantized Models > I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone! - GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-7B-Instruct-v0.9-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9-GGUF) ## Examples ``` <s>[INST] You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. describe about pros and cons of docker system. [/INST] ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` <s> [INST] Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step [/INST] ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` <s> [INST] explain step by step 25-4*2+3=? [/INST] ``` <details> <summary>Show me the response</summary> ``` ``` </details> **Multilingual:** ``` <s> [INST] Vous êtes un assistant utile, respectueux et honnête. Répondez toujours de la manière la plus utile possible, tout en étant sûr. Vos réponses ne doivent inclure aucun contenu nuisible, contraire à l'éthique, raciste, sexiste, toxique, dangereux ou illégal. Assurez-vous que vos réponses sont socialement impartiales et de nature positive. Si une question n'a pas de sens ou n'est pas cohérente d'un point de vue factuel, expliquez pourquoi au lieu de répondre quelque chose d'incorrect. Si vous ne connaissez pas la réponse à une question, veuillez ne pas partager de fausses informations. Décrivez les avantages et les inconvénients du système Docker.[/INST] ``` <details> <summary>Show me the response</summary> ``` ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` <s>[INST] Ви - корисний, поважний та чесний помічник. Завжди відповідайте максимально корисно, будучи безпечним. Ваші відповіді не повинні містити шкідливого, неетичного, расистського, сексистського, токсичного, небезпечного або нелегального контенту. Будь ласка, переконайтеся, що ваші відповіді соціально неупереджені та мають позитивний характер. Якщо питання не має сенсу або не є фактично послідовним, поясніть чому, замість того, щоб відповідати щось некоректне. Якщо ви не знаєте відповіді на питання, будь ласка, не діліться неправдивою інформацією. Опис про переваги та недоліки системи Docker.[/INST] ``` <details> <summary>Show me the response</summary> ``` ``` </details>
sayakpaul/sdxl-unet-sharded
sayakpaul
"2024-06-07T09:33:14Z"
988
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
"2024-06-07T09:31:31Z"
Entry not found
timm/regnety_008_tv.tv2_in1k
timm
"2024-02-10T23:33:15Z"
987
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2003.13678", "license:bsd-3-clause", "region:us" ]
image-classification
"2023-03-21T06:37:24Z"
--- license: bsd-3-clause library_name: timm tags: - image-classification - timm --- # Model card for regnety_008_tv.tv2_in1k A RegNetY-800MF image classification model. Pretrained on ImageNet-1k by torchvision contributors (see ImageNet1K-V2 weight details https://github.com/pytorch/vision/issues/3995#new-recipe). The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including: * stochastic depth * gradient checkpointing * layer-wise LR decay * configurable output stride (dilation) * configurable activation and norm layers * option for a pre-activation bottleneck block used in RegNetV variant * only known RegNetZ model definitions with pretrained weights ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 6.4 - GMACs: 0.8 - Activations (M): 5.4 - Image size: 224 x 224 - **Papers:** - Designing Network Design Spaces: https://arxiv.org/abs/2003.13678 - **Original:** https://github.com/pytorch/vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('regnety_008_tv.tv2_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_008_tv.tv2_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 64, 56, 56]) # torch.Size([1, 144, 28, 28]) # torch.Size([1, 320, 14, 14]) # torch.Size([1, 784, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_008_tv.tv2_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 784, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`. |model |img_size|top1 |top5 |param_count|gmacs|macts | |-------------------------|--------|------|------|-----------|-----|------| |[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 | |[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 | |[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 | |[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 | |[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49| |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 | |[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 | |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 | |[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 | |[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83| |[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 | |[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 | |[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 | |[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 | |[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 | |[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 | |[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 | |[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 | |[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 | |[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 | |[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 | |[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 | |[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 | |[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 | |[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 | |[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 | |[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 | |[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 | |[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 | |[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 | |[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 | |[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 | |[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 | |[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 | |[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 | |[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 | |[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 | |[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 | |[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 | |[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 | |[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 | |[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 | |[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 | |[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 | |[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 | |[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 | |[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 | |[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 | |[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 | |[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 | ## Citation ```bibtex @InProceedings{Radosavovic2020, title = {Designing Network Design Spaces}, author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{'a}r}, booktitle = {CVPR}, year = {2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
922-CA/l2-7b-natsuki-ddlc-v0.1-gguf
922-CA
"2023-09-09T06:21:51Z"
987
0
null
[ "gguf", "license:llama2", "region:us" ]
null
"2023-09-08T09:05:36Z"
--- license: llama2 --- GGUFs of [l2-7b-natsuki-ddlc-v0.1](https://huggingface.co/922-CA/l2-7b-natsuki-ddlc-v0.1). (Primarily tested and run with Koboldcpp v1.41+). QLora (hf and GGML) [here](https://huggingface.co/922-CA/natsuki-lm-lora-tests/tree/main/l2-7b-natsuki-v0.1).
NeuralNovel/Tiger-7B-v0.1
NeuralNovel
"2024-03-05T15:57:24Z"
987
7
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "dataset:Intel/orca_dpo_pairs", "dataset:NeuralNovel/Neural-Story-v1", "base_model:NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story", "base_model:NeuralNovel/Gecko-7B-v0.1-DPO", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-18T18:38:51Z"
--- license: apache-2.0 tags: - mergekit - merge datasets: - Intel/orca_dpo_pairs - NeuralNovel/Neural-Story-v1 base_model: - NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story - NeuralNovel/Gecko-7B-v0.1-DPO model-index: - name: Tiger-7b-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 59.98 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.21 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.42 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.03 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 46.78 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Tiger-7b-v0.1 name: Open LLM Leaderboard --- ![tiger](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/a9GqRTNoGZQsRVU-C6XRO.jpeg) # Tiger-7b-v0.1 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). [Join our Discord!](https://discord.gg/rJXGjmxqzS) <a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> ## Metrics ![image/png](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/Z58bB5sYr3pyE2Ilbk7Dk.png) ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story](https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story) * [NeuralNovel/Gecko-7B-v0.1-DPO](https://huggingface.co/NeuralNovel/Gecko-7B-v0.1-DPO) # merge ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story layer_range: [0, 32] - model: NeuralNovel/Gecko-7B-v0.1-DPO layer_range: [0, 32] merge_method: slerp base_model: NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Tiger-7b-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |65.02| |AI2 Reasoning Challenge (25-Shot)|59.98| |HellaSwag (10-Shot) |83.21| |MMLU (5-Shot) |61.42| |TruthfulQA (0-shot) |61.03| |Winogrande (5-shot) |77.66| |GSM8k (5-shot) |46.78|
cognitivecomputations/dolphincoder-starcoder2-7b
cognitivecomputations
"2024-05-20T14:51:58Z"
987
11
transformers
[ "transformers", "pytorch", "starcoder2", "text-generation", "conversational", "en", "dataset:cognitivecomputations/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:cognitivecomputations/dolphin-coder", "dataset:teknium/openhermes", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:m-a-p/Code-Feedback", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:microsoft/orca-math-word-problems-200k", "license:bigcode-openrail-m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-08T01:51:26Z"
--- datasets: - cognitivecomputations/dolphin - jondurbin/airoboros-2.2.1 - cognitivecomputations/dolphin-coder - teknium/openhermes - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K - m-a-p/Code-Feedback - m-a-p/CodeFeedback-Filtered-Instruction - microsoft/orca-math-word-problems-200k language: - en license: bigcode-openrail-m --- DolphinCoder StarCoder2 7b 🐬 sponsored by [latitude.sh](https://www.latitude.sh/). [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations) Discord: https://discord.gg/cognitivecomputations <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> This model is based on StarCoder2-7b and is subject to bigcode-openrail-m license. This Dolphin is *really good* at coding, I trained with a lot of coding data. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Training It took 2 days to train 3 epochs on 8x L40S's using qLoRA and Axolotl Prompt format: This model uses ChatML prompt format. ``` <|im_start|>system You are DolphinCoder, a helpful AI programming assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Example: ``` <|im_start|>system You are DolphinCoder, a master at software engineering and coding in any programming language. <|im_start|>user Please write me a program in golang that parses all the lines in a file, and reverses them character-wise, and saves it to a new file. <|im_start|>assistant ``` ## Quantized models - [dagbs/-GGUF](https://huggingface.co/dagbs/dolphincoder-starcoder2-7b-GGUF) ## Gratitude - This model was made possible by the generous sponsorship of [latitude.sh](https://www.latitude.sh/). - Welcome Microsoft to Open Source AI! Thank you for the Orca-Math Dataset! - Huge thank you to [BigCode](https://www.bigcode-project.org/) for training and publishing the weights of StarCoder2 - HUGE Thank you to the dataset authors: @ise-uiuc, @teknium, @m-a-p - And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework! - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. ## Example Output [If you would like to financially support my efforts](https://ko-fi.com/erichartford) [swag](https://fa7113.myshopify.com/)
alvdansen/popartanime
alvdansen
"2024-06-16T16:30:05Z"
987
10
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2024-06-07T16:45:47Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: a mouse knight, daiton style output: url: images/ComfyUI_00235_.png - text: a princess with a dress mad of flowers, daiton style output: url: images/ComfyUI_00227_.png - text: a boy with a cool hat, daiton style output: url: images/ComfyUI_00211_.png - text: a happy girl, daiton style output: url: images/ComfyUI_00200_.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: daiton style license: creativeml-openrail-m --- # Pop Art Anime <Gallery /> ## Model description A very blocky and bold cartoon style with some anime elements. ## Trigger words You should use `daiton style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Model release is for research purposes only. For commercial use, please contact me directly. [Download](/alvdansen/popartanime/tree/main) them in the Files & versions tab.
timm/mvitv2_base.fb_in1k
timm
"2024-02-10T23:42:48Z"
986
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2112.01526", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-13T00:22:34Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for mvitv2_base.fb_in1k A MViT-v2 (multi-scale ViT) image classification model. Pretrained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 51.5 - GMACs: 10.2 - Activations (M): 40.5 - Image size: 224 x 224 - **Papers:** - MViTv2: Improved Multiscale Vision Transformers for Classification and Detection: https://arxiv.org/abs/2112.01526 - **Dataset:** ImageNet-1k - **Original:** https://github.com/facebookresearch/mvit ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('mvitv2_base.fb_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mvitv2_base.fb_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 49, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ```
Locutusque/Hyperion-2.0-Mistral-7B
Locutusque
"2024-03-12T16:55:05Z"
986
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "code", "chemistry", "medical", "en", "dataset:Locutusque/hyperion-v2.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-10T02:34:48Z"
--- library_name: transformers tags: - code - chemistry - medical license: apache-2.0 datasets: - Locutusque/hyperion-v2.0 language: - en --- # Hyperion-2.0-Mistral-7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/9BU30Mh9bOkO2HRBDF8EE.png) ## Model Details - **Model Name**: Locutusque/Hyperion-2.0-Mistral-7B - **Base Model**: mistralai/Mistral-7B-v0.1 - **Publisher**: Locutusque - **Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning. - **Language**: Multi-domain, English language. - **License**: Apache-2.0 ## Model Description `Locutusque/Hyperion-2.0-Mistral-7B` is a state-of-the-art language model fine-tuned on the Hyperion-v2.0 dataset for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning. ## Intended Use This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios: - AI-driven tutoring systems for science, medicine, mathematics, and computer science. - Assistive tools for professionals requiring fast and accurate domain-specific information retrieval. - Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning. - Automation in code generation and understanding complex programming context. ## Training Data The `Locutusque/Hyperion-2.0-Mistral-7B` model was fine-tuned on 750,000 examples of the Hyperion-v2.0 dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks. ## Quants ExLlamaV2: https://huggingface.co/bartowski/Hyperion-2.0-Mistral-7B-exl2 GGUF: https://huggingface.co/bartowski/Hyperion-2.0-Mistral-7B-GGUF AWQ: https://huggingface.co/solidrust/Hyperion-2.0-Mistral-7B-AWQ ## Evaluation Results 0-shot AGIEval | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |---------------------------------|-------|------|-----:|--------|-----:|---|-----:| |agieval_nous |N/A |none | 0|acc |0.3602|± |0.0929| | | |none | 0|acc_norm|0.3342|± |0.0764| | - agieval_aqua_rat | 1|none | 0|acc |0.2402|± |0.0269| | | |none | 0|acc_norm|0.2441|± |0.0270| | - agieval_logiqa_en | 1|none | 0|acc |0.2965|± |0.0179| | | |none | 0|acc_norm|0.3226|± |0.0183| | - agieval_lsat_ar | 1|none | 0|acc |0.2348|± |0.0280| | | |none | 0|acc_norm|0.2000|± |0.0264| | - agieval_lsat_lr | 1|none | 0|acc |0.3667|± |0.0214| | | |none | 0|acc_norm|0.3373|± |0.0210| | - agieval_lsat_rc | 1|none | 0|acc |0.4981|± |0.0305| | | |none | 0|acc_norm|0.4089|± |0.0300| | - agieval_sat_en | 1|none | 0|acc |0.6359|± |0.0336| | | |none | 0|acc_norm|0.5777|± |0.0345| | - agieval_sat_en_without_passage| 1|none | 0|acc |0.3883|± |0.0340| | | |none | 0|acc_norm|0.3544|± |0.0334| | - agieval_sat_math | 1|none | 0|acc |0.3500|± |0.0322| | | |none | 0|acc_norm|0.2682|± |0.0299| | Groups |Version|Filter|n-shot| Metric |Value | |Stderr| |------------|-------|------|-----:|--------|-----:|---|-----:| |agieval_nous|N/A |none | 0|acc |0.3602|± |0.0929| | | |none | 0|acc_norm|0.3342|± |0.0764| 5-shot AGIEval coming soon. ## How to Use ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Locutusque/Hyperion-1.5-Mistral-7B" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # For a text generation task input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n" input_ids = tokenizer.encode(input_text, return_tensors="pt") # Generate a response outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Known Limitations The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality. This model is also very compliant, it will respond to any request. Please make sure to build upon this model with DPO if you plan on using it for enterprise-level deployment. ## Licensing Information This model is released under the Apache-2.0 license.
abacusai/Llama-3-Giraffe-70B-Instruct
abacusai
"2024-05-04T22:35:52Z"
986
12
transformers
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "arxiv:2309.10400", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-03T17:34:02Z"
--- language: - en pipeline_tag: text-generation tags: - meta - llama-3 license: llama3 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/VcZWbW_eZkJAZZ5ricL4B.png) # Llama-3-Giraffe-70B-Instruct Abacus.AI presents our longer-necked variant of Llama 3 70B - now with the instruct variant! This model has an effective context length of approximately 128k. We have currently trained on ~1.5B tokens. There are our Needle-in-a-Haystack heatmap results. We are conducting further evals of model efficacy and will update our model card as these come in: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/Z4uUhcjgf1P7EPGQyRLkW.png) ### MT-Bench Evaluation We also measured performance on MT-Bench to verify that the context extension did not significantly impact performance on instruct tasks: ``` ####### 1st turn: Meta-Llama-3-70B-Instruct 9.21 Llama-3-Giraffe-70B-Instruct 9.19 ####### 2nd turn: Meta-Llama-3-70B-Instruct 2 8.80 Llama-3-Giraffe-70B-Instruct 2 8.54 ####### average: Meta-Llama-3-70B-Instruct 9.00 Llama-3-Giraffe-70B-Instruct 8.87 ``` ## Training Methodology The methodology for training uses [PoSE](https://arxiv.org/abs/2309.10400) and dynamic-NTK interpolation. ### NTK-scaling The scale factor for NTK is 4. Note that we also tried theta-scaling but this did not work as well as NTK scaling in our experiments. ### PoSE We utilise Positional Skip-wise Training (PoSE) with the following parameters: - **Number of Chunks**: 5 - **Max position ID**: 32768 ### Data We use on average ~8K long samples from [RedPajama](https://github.com/togethercomputer/RedPajama-Data). ### Hardware We train on 8xH100 GPUs with Deepspeed Zero Stage 3. ## Evaluation Methodology We use the [EasyContext](https://github.com/abacusai/EasyContext/blob/eval_runs/eval_needle.py) implementation of Needle-in-a-Haystack to evaluate Llama-3-Giraffe-70B. We evaluate with the following parameters: - **Min context length**: 2000 - **Max context length**: 128000 - **Context interval**: 4000 - **Depth interval**: 0.1 - **Num samples**: 2 - **Rnd number digits**: 7 - **Haystack dir**: PaulGrahamEssays ### Adapter Transfer We apply the above techniques first to Llama-3-70B-Base, using LoRA on the Q and K weights only. This adapter is then applied to Llama-3-70B-Instruct, and we release the merged version here.
osiria/deberta-base-italian
osiria
"2023-06-24T16:54:52Z"
985
2
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "feature-extraction", "it", "arxiv:2111.09543", "arxiv:2010.05609", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-05-29T17:48:34Z"
--- license: mit language: - it --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: DeBERTa</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Model description</h3> This is a <b>DeBERTa</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using <b>mDeBERTa</b> ([mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base)) as a starting point and focusing it on the Italian language by modifying the embedding layer (as in <b>[2]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset) The resulting model has 124M parameters, a vocabulary of 50.256 tokens, and a size of ~500 MB. <h3>Quick usage</h3> ```python from transformers import DebertaV2TokenizerFast, DebertaV2Model tokenizer = DebertaV2TokenizerFast.from_pretrained("osiria/deberta-base-italian") model = DebertaV2Model.from_pretrained("osiria/deberta-base-italian") ``` <h3>References</h3> [1] https://arxiv.org/abs/2111.09543 [2] https://arxiv.org/abs/2010.05609 <h3>License</h3> The model is released under <b>MIT</b> license
amirMohammadi/Dorna-Llama3-8B-Instruct-Quantized4Bit
amirMohammadi
"2024-06-17T09:47:32Z"
985
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "fa", "arxiv:2307.08691", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-08T15:30:31Z"
--- license: llama3 language: - en - fa pipeline_tag: text-generation --- # Model Details This Repository is a 4-bit quantized version of [Dorna-Llama3-8B-Instruct](https://huggingface.co/PartAI/Dorna-Llama3-8B-Instruct) model for efficient memory usage. Dorna model is a decoder-only model, specifically trained/fine-tuned on Persian data. [Flash Attention 2](https://arxiv.org/abs/2307.08691) is also integrated for faster inference. ## Benefits - **Reduced Memory Usage**: 4-bit quantization lowers memory requirements. - **Faster Inference**: Flash Attention 2 speeds up processing. - **Easy Deployment**: No need for additional libraries like LlamaCPP or Candle. - **Ready to Use**: Compatible with Langchain, Haystack, LlamaIndex 2, and more. - **Google Colab Friendly**: Can run on Google Colab free tier with T4 GPU (less than 15 GB of GPU RAM). ## How to use You can run conversational inference using the Transformers Auto classes with the `generate()` function. Let's look at an example. ```Python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM model_path = "amirMohammadi/Dorna-Llama3-8B-Instruct-Quantized4Bit" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a helpful Persian assistant. Please answer questions in the asked language."}, {"role": "user", "content": "اصفهان بزرگ تر است یا قم؟"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ## Evaluation of Non-Quantized version This model is evaluated on questions across various tasks, including Boolean Questions, Code Generation, Long Response, Math, News QA, Paraphrasing, General Knowledge, and Summarization. Most categories typically have two main difficulty levels: Hard and Easy. Both human evaluation and automatic evaluation (with GPT-4 as the judge) are performed. In both tables, **Dorna-8B-it** is used as an abbreviated form of **Dorna-Llama3-8B-Instruct**. Overall human evaluation results are as follows: |**Model Pairs** | **Parameters** |**Win %**|**Lose %**|**Tie %**| |--------------------------|:---------:|:---------:|:---------:|:---------:| | Dorna-8B-it **vs.** Meta-Llama-3-8B-Instruct | 8B |**36.94**| 17.39 | 45.67 | | Dorna-8B-it **vs.** GPT 3.5 turbo-1106 | N.A. |**32.01**| 26.94 | 41.05 | | Dorna-8B-it **vs.** Persian Mind | 7B |**55.77**| 10.49 | 33.74 | Category-based human evaluation results are as follows: Win/Lose/Tie % is reported for each category. <!-- | **Model Pairs** | **Parameters** | **Bool Complex** | **Bool Easy** | **Code Gen** | **General Long Response** | **Historical Long Response** | **Math Complex** | **Math Easy** | **News QA Complex** | **News QA Easy** | **Paraphrasing** | **General Knowledge Easy** | **General Knowledge Hard** | **Summarization** | |:----------------------------------------------|:------------:|:----------------:|:----------------:|:-------------:|:-----------------------:|:--------------------------:|:----------------:|:----------------:|:-----------------:|:----------------:|:---------------:|:------------------------:|:------------------------:|:---------------:| | Dorna-8B-it **vs.** Meta-Llama-3-8B-Instruct | 8B | 0.25/0.25/0.5 | 0.28/0.35/0.38 | 0.6/0.1/0.3 | 0.8/0.08/0.12 | 0.4/0.3/0.3 | 0.28/0.08/0.65 | 0.47/0.00/0.53 | 0.55/0.07/0.38 | 0.43/0.15/0.42 | 0.1/0.05/0.85 | 0.31/0.2/0.49 | 0.59/0.13/0.28 | 0.28/0.2/0.53 | | Dorna-8B-it **vs.** GPT 3.5 turbo-1106 | N.A. | 0.35/0.35/0.3 | 0.3/0.3/0.4 | 0.1/0.3/.06 | 0.2/0.45/0.35 | 0.46/0.27/0.27 | 0.25/0.1/0.65 | 0.05/0.1/0.85 | 0.12/0.35/0.53 | 0.15/0.1/0.75 | 0.25/0.15/0.6 | 0.3/0.32/0.38 | 0.22/0.53/0.25 | 0.35/0.55/0.1 | | Dorna-8B-it **vs.** Persian Mind | 7B | 0.47/0.25/0.28 | 0.57/0.15/0.28 | 0.9/0.1/0.0 | 0.82/0.08/0.1 | 0.4/0.17/0.42 | 0.3/0.0/0.7 | 0.22/0.08/0.7 | 0.72/0.07/0.2 | 0.7/0.0/0.3 | 0.7/0.05/0.25 | 0.51/0.12/0.37 | 0.61/0.1/0.29 | 0.93/0.0/0.07 | --> <div style="overflow-x: auto;"> <table> <thead> <tr style="vertical-align: middle;"> <th style="white-space: nowrap; vertical-align: middle;"><strong>Model Pairs</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Parameters</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Bool Complex</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Bool Easy</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Code Gen</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>General Long Response</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Historical Long Response</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Math Complex</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Math Easy</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>News QA Complex</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>News QA Easy</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Paraphrasing</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>General Knowledge Easy</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>General Knowledge Hard</strong></th> <th style="white-space: nowrap; vertical-align: middle;"><strong>Summarization</strong></th> </tr> </thead> <tbody> <tr> <td style="white-space: nowrap; vertical-align: middle;">Dorna-8B-it <strong>vs.</strong> Meta-Llama-3-8B-Instruct</td> <td>8B</td> <td>0.25/0.25/0.5</td> <td>0.28/<strong>0.35</strong>/0.38</td> <td><strong>0.6</strong>/0.1/0.3</td> <td><strong>0.8</strong>/0.08/0.12</td> <td><strong>0.4</strong>/0.3/0.3</td> <td><strong>0.28</strong>/0.08/0.65</td> <td><strong>0.47</strong>/0.00/0.53</td> <td><strong>0.55</strong>/0.07/0.38</td> <td><strong>0.43</strong>/0.15/0.42</td> <td><strong>0.1</strong>/0.05/0.85</td> <td><strong>0.31</strong>/0.2/0.49</td> <td><strong>0.59</strong>/0.13/0.28</td> <td><strong>0.28</strong>/0.2/0.53</td> </tr> <tr> <td style="white-space: nowrap; vertical-align: middle;">Dorna-8B-it <strong>vs.</strong> GPT 3.5 turbo-1106</td> <td>N.A.</td> <td>0.35/0.35/0.3</td> <td>0.3/0.3/0.4</td> <td>0.1/<strong>0.3</strong>/.06</td> <td>0.2/<strong>0.45</strong>/0.35</td> <td><strong>0.46</strong>/0.27/0.27</td> <td><strong>0.25</strong>/0.1/0.65</td> <td>0.05/<strong>0.1</strong>/0.85</td> <td>0.12/<strong>0.35</strong>/0.53</td> <td><strong>0.15</strong>/0.1/0.75</td> <td><strong>0.25</strong>/0.15/0.6</td> <td>0.3/<strong>0.32</strong>/0.38</td> <td>0.22/<strong>0.53</strong>/0.25</td> <td>0.35/<strong>0.55</strong>/0.1</td> </tr> <tr> <td style="white-space: nowrap; vertical-align: middle;">Dorna-8B-it <strong>vs.</strong> Persian Mind</td> <td>7B</td> <td><strong>0.47</strong>/0.25/0.28</td> <td><strong>0.57</strong>/0.15/0.28</td> <td><strong>0.9</strong>/0.1/0.0</td> <td><strong>0.82</strong>/0.08/0.1</td> <td><strong>0.4</strong>/0.17/0.42</td> <td><strong>0.3</strong>/0.0/0.7</td> <td><strong>0.22</strong>/0.08/0.7</td> <td><strong>0.72</strong>/0.07/0.2</td> <td><strong>0.7</strong>/0.0/0.3</td> <td><strong>0.7</strong>/0.05/0.25</td> <td><strong>0.51</strong>/0.12/0.37</td> <td><strong>0.61</strong>/0.1/0.29</td> <td><strong>0.93</strong>/0.0/0.07</td> </tr> </tbody> </table> </div> Automatic evaluation results are as follows: | **Model Pairs** | **Parameters** | **Overall Win Rate %** | **Easy Win Rate %** | **Hard Win Rate %** | |----------------------------------------|:--------------:|:----------------------:|:-------------------:|:-------------------:| | Dorna-8B-it **vs.** Llama 3 base | 8B | **58.96** | **56.00** | **64.49** | | Dorna-8B-it **vs.** Part Mistral | 7B | **77.20** | **73.00** | **85.05** | | Dorna-8B-it **vs.** Persian Mind | 7B | **90.88** | **87.50** | **97.20** | | Dorna-8B-it **vs.** Neuraorca Gemma 7b | 7B | **86.32** | **86.50** | **85.98** | | Dorna-8B-it **vs.** Maral 7b | 7B | **97.39** | **97.00** | **98.13** | | Dorna-8B-it **vs.** PersianLlama 7b | 7B | **98.70** | **98.00** | **100.00** | | Dorna-8B-it **vs.** Aya-23-8B | 8B | **52.77** | **56.50** | 45.79 | | Dorna-8B-it **vs.** Aya-23-35B | 35B | 45.93 | **54.00** | 30.84 | | Dorna-8B-it **vs.** Command R | 35B | **58.63** | **61.00** | **54.21** | ## Contact us If you have any questions regarding this model, you can reach us via the [community](https://huggingface.co/amirMohammadi/Dorna-Llama3-8B-Instruct-Quantized4Bit/discussions) on Hugging Face.
mradermacher/Llama-Salad-8x8B-i1-GGUF
mradermacher
"2024-06-10T02:09:42Z"
985
1
transformers
[ "transformers", "gguf", "nsfw", "not-for-all-audiences", "llama-3", "text-generation-inference", "moe", "mergekit", "merge", "en", "base_model:HiroseKoichi/Llama-Salad-8x8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-09T06:18:14Z"
--- base_model: HiroseKoichi/Llama-Salad-8x8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - nsfw - not-for-all-audiences - llama-3 - text-generation-inference - moe - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/HiroseKoichi/Llama-Salad-8x8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama-Salad-8x8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ1_S.gguf) | i1-IQ1_S | 10.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ1_M.gguf) | i1-IQ1_M | 11.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 13.1 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ2_M.gguf) | i1-IQ2_M | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q2_K.gguf) | i1-Q2_K | 17.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 20.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ3_S.gguf) | i1-IQ3_S | 21.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 21.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ3_M.gguf) | i1-IQ3_M | 22.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 23.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q4_0.gguf) | i1-Q4_0 | 27.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 29.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-Salad-8x8B-i1-GGUF/resolve/main/Llama-Salad-8x8B.i1-Q6_K.gguf) | i1-Q6_K | 39.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
openbmb/OmniLMM-12B
openbmb
"2024-04-16T03:15:05Z"
984
60
transformers
[ "transformers", "pytorch", "omnilmm", "text-generation", "visual-question-answering", "dataset:HaoyeZhang/RLHF-V-Dataset", "dataset:Yirany/UniMM-Chat", "arxiv:2312.00849", "autotrain_compatible", "endpoints_compatible", "region:us" ]
visual-question-answering
"2024-01-31T06:44:51Z"
--- pipeline_tag: visual-question-answering datasets: - HaoyeZhang/RLHF-V-Dataset - Yirany/UniMM-Chat --- ## OmniLMM 12B [GitHub](https://github.com/OpenBMB/MiniCPM-V) | [Demo](http://120.92.209.146:8081/) **OmniLMM-12B** is the most capable version of OmniLMM currently. The model is built based on EVA02-5B and Zephyr-7B-β, connected with a perceiver resampler layer, and trained on multimodal data in a curriculum fashion. The model has three notable features: - 🔥 **Strong Performance.** OmniLMM-12B achieves **leading performance** among models with comparable sizes, surpassing established LMMs on multiple benchmarks (including MME, MMBench, SEED-Bench, etc). The model also endows rich multi-modal world knowledge. - 🏆 **Trustworthy Behavior.** LMMs are known for suffering from hallucination, often generating text that is not factually grounded in images (e.g., faithfully describing non-existing objects in images). OmniLMM-12B is **the first state-of-the-art open-source LMM aligned via multimodal RLHF for trustworthy behavior** (using the recent [RLHF-V](https://rlhf-v.github.io/) technique). It **ranks #1** among open-source models on [MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), and **outperforms GPT-4V** on [Object HalBench](https://arxiv.org/abs/2312.00849). - 🕹 **Real-time Multimodal Interaction.** We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and speech streams from the microphone and emits speech output. While still primary, we find the model can **replicate some of the fun cases shown in the Gemini Demo video, without any video edition**. ## Evaluation <div align="center"> <img src=https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/LuKikSY4CJiqtHocGP_xu.png width=66% /> </div> <details> <summary>Click to view results on MME, MMBench, MMMU, MMBench, MMHal-Bench, Object HalBench, SeedBench, LLaVA Bench W, MathVista. </summary> <table> <thead> <tr> <th align="left">Model</th> <th>Size</th> <th>MME</th> <th nowrap="nowrap">MMB dev (en)</th> <th nowrap="nowrap" >MMMU val</th> <th nowrap="nowrap" >MMHal-Bench</th> <th nowrap="nowrap" >Object HalBench</th> <th nowrap="nowrap" >SeedBench-I</th> <th>MathVista</th> <th nowrap="nowrap" >LLaVA Bench W</th> </tr> </thead> <tbody align="center"> <tr> <td align="left">GPT-4V†</td> <td>-</td> <td>1409</td> <td>75.1 </td> <td>56.8</td> <td>3.53 / 70.8</td> <td>86.4 / 92.7</td> <td>71.6 </td> <td>47.8 </td> <td>93.1 </td> </tr> <tr> <td nowrap="nowrap" align="left">Qwen-VL-Plus†</td> <td>-</td> <td>1681</td> <td>66.2 </td> <td>45.2</td> <td>- </td> <td>- </td> <td>65.7 </td> <td>36.0 </td> <td>73.7 </td> </tr> <tr> <td align="left">Yi-VL 6B</td> <td align="right">6.7B </td> <td>- </td> <td>68.2 </td> <td>39.1 </td> <td>- </td> <td>- </td> <td>66.1 </td> <td>28.0 </td> <td>39.9 </td> </tr> <tr> <td nowrap="nowrap" align="left" >Qwen-VL-Chat</td> <td align="right">9.6B</td> <td>1488</td> <td>60.6 </td> <td>35.9</td> <td>2.93 / 59.4</td> <td>56.2 / 80.0</td> <td>64.8 </td> <td>33.8 </td> <td>67.7 </td> </tr> <tr> <td align="left" >CogVLM</td> <td align="right">17.4B</td> <td>1438</td> <td>63.7 </td> <td>32.1 </td> <td>2.68 / 52.1 </td> <td>73.6 / 87.4 </td> <td>68.8 </td> <td>34.7 </td> <td>73.9 </td> </tr> <tr> <td align="left" >LLaVA 1.5</td> <td align="right">13.6B </td> <td>1531 </td> <td>68.2 </td> <td>36.4 </td> <td>2.71 / 51.0 </td> <td>53.7 / 77.4 </td> <td>68.1 </td> <td>26.4 </td> <td>64.6 </td> </tr> <tr> <td nowrap="nowrap" align="left" ><b>OmniLMM-12B</b></td> <td align="right">11.6B </td> <td>1637 </td> <td>71.6 </td> <td>40.7 </td> <td>3.45 / 68.8 </td> <td>90.3 / 95.5 </td> <td>71.1 </td> <td>34.9 </td> <td>72.0 </td> </tr> </tbody> </table> <small>†: Proprietary models</small> <br> </details> ## Demo Click here to try out the Demo of [OmniLMM-12B](http://120.92.209.146:8081). ## Usage Please look at [GitHub](https://github.com/OpenBMB/OmniLMM) for more detail about usage. ## License #### Model License * The code in this repo is released according to [Apache-2.0](https://github.com/OpenBMB/OmniLMM/blob/main/LICENSE) * The usage of OmniLMM's parameters is subject to ["General Model License Agreement - Source Notes - Publicity Restrictions - Commercial License"](https://github.com/OpenBMB/General-Model-License/blob/main/) * The parameters are fully open to acedemic research * Please contact [email protected] to obtain a written authorization for commercial uses. Free commercial use is also allowed after registration. #### Statement * As LMMs, OmniLMM generates contents by learning a large mount of texts, but it cannot comprehend, express personal opinions or make value judgement. Anything generated by OmniLMM does not represent the views and positions of the model developers * We will not be liable for any problems arising from the use of the OmniLMM open Source model, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model. ## Multimodal Projects of Our Team <!-- omit in toc --> [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) ## Citation If you find our work helpful, please consider citing the following papers ```bib @article{yu2023rlhf, title={Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback}, author={Yu, Tianyu and Yao, Yuan and Zhang, Haoye and He, Taiwen and Han, Yifeng and Cui, Ganqu and Hu, Jinyi and Liu, Zhiyuan and Zheng, Hai-Tao and Sun, Maosong and others}, journal={arXiv preprint arXiv:2312.00849}, year={2023} } @article{viscpm, title={Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages}, author={Jinyi Hu and Yuan Yao and Chongyi Wang and Shan Wang and Yinxu Pan and Qianyu Chen and Tianyu Yu and Hanghao Wu and Yue Zhao and Haoye Zhang and Xu Han and Yankai Lin and Jiao Xue and Dahai Li and Zhiyuan Liu and Maosong Sun}, journal={arXiv preprint arXiv:2308.12038}, year={2023} } @article{xu2024llava-uhd, title={{LLaVA-UHD}: an LMM Perceiving Any Aspect Ratio and High-Resolution Images}, author={Xu, Ruyi and Yao, Yuan and Guo, Zonghao and Cui, Junbo and Ni, Zanlin and Ge, Chunjiang and Chua, Tat-Seng and Liu, Zhiyuan and Huang, Gao}, journal={arXiv preprint arXiv:2403.11703}, year={2024} } ```
RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf
RichardErkhov
"2024-06-05T17:56:55Z"
984
0
null
[ "gguf", "region:us" ]
null
"2024-06-05T17:27:32Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) emailgen-pythia-410m-deduped - GGUF - Model creator: https://huggingface.co/postbot/ - Original model: https://huggingface.co/postbot/emailgen-pythia-410m-deduped/ | Name | Quant method | Size | | ---- | ---- | ---- | | [emailgen-pythia-410m-deduped.Q2_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q2_K.gguf) | Q2_K | 0.16GB | | [emailgen-pythia-410m-deduped.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.IQ3_XS.gguf) | IQ3_XS | 0.18GB | | [emailgen-pythia-410m-deduped.IQ3_S.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.IQ3_S.gguf) | IQ3_S | 0.18GB | | [emailgen-pythia-410m-deduped.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q3_K_S.gguf) | Q3_K_S | 0.18GB | | [emailgen-pythia-410m-deduped.IQ3_M.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.IQ3_M.gguf) | IQ3_M | 0.2GB | | [emailgen-pythia-410m-deduped.Q3_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q3_K.gguf) | Q3_K | 0.21GB | | [emailgen-pythia-410m-deduped.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q3_K_M.gguf) | Q3_K_M | 0.21GB | | [emailgen-pythia-410m-deduped.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q3_K_L.gguf) | Q3_K_L | 0.22GB | | [emailgen-pythia-410m-deduped.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.IQ4_XS.gguf) | IQ4_XS | 0.22GB | | [emailgen-pythia-410m-deduped.Q4_0.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q4_0.gguf) | Q4_0 | 0.23GB | | [emailgen-pythia-410m-deduped.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.IQ4_NL.gguf) | IQ4_NL | 0.23GB | | [emailgen-pythia-410m-deduped.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q4_K_S.gguf) | Q4_K_S | 0.23GB | | [emailgen-pythia-410m-deduped.Q4_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q4_K.gguf) | Q4_K | 0.25GB | | [emailgen-pythia-410m-deduped.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q4_K_M.gguf) | Q4_K_M | 0.25GB | | [emailgen-pythia-410m-deduped.Q4_1.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q4_1.gguf) | Q4_1 | 0.25GB | | [emailgen-pythia-410m-deduped.Q5_0.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q5_0.gguf) | Q5_0 | 0.27GB | | [emailgen-pythia-410m-deduped.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q5_K_S.gguf) | Q5_K_S | 0.27GB | | [emailgen-pythia-410m-deduped.Q5_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q5_K.gguf) | Q5_K | 0.28GB | | [emailgen-pythia-410m-deduped.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q5_K_M.gguf) | Q5_K_M | 0.28GB | | [emailgen-pythia-410m-deduped.Q5_1.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q5_1.gguf) | Q5_1 | 0.29GB | | [emailgen-pythia-410m-deduped.Q6_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q6_K.gguf) | Q6_K | 0.31GB | | [emailgen-pythia-410m-deduped.Q8_0.gguf](https://huggingface.co/RichardErkhov/postbot_-_emailgen-pythia-410m-deduped-gguf/blob/main/emailgen-pythia-410m-deduped.Q8_0.gguf) | Q8_0 | 0.4GB | Original model description: --- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - postbot/multi-emails-hq metrics: - accuracy widget: - text: 'Good Morning Professor Beans, Hope you are doing well. I just wanted to reach out and ask if differential calculus will be on the exam' example_title: email to prof - text: 'Hey <NAME>, Thank you for signing up for my weekly newsletter. Before we get started, you''ll have to confirm your email address.' example_title: newsletter - text: 'Hi <NAME>, I hope this email finds you well. I wanted to reach out and ask about office hours' example_title: office hours - text: 'Greetings <NAME>, I hope you had a splendid evening at the Company sausage eating festival. I am reaching out because' example_title: festival - text: 'Good Morning Harold, I was wondering when the next' example_title: event - text: URGENT - I need the TPS reports example_title: URGENT - text: 'Hi Archibald, I hope this email finds you extremely well.' example_title: emails that find you - text: 'Hello there. I just wanted to reach out and check in to' example_title: checking in - text: 'Hello <NAME>, I hope this email finds you well. I wanted to reach out and see if you''ve enjoyed your time with us' example_title: work well - text: 'Hi <NAME>, I hope this email finds you well. I wanted to reach out and see if we could catch up' example_title: catch up - text: I'm <NAME> and I just moved into the area and wanted to reach out and get some details on where I could get groceries and example_title: grocery pipeline_tag: text-generation base_model: EleutherAI/pythia-410m-deduped model-index: - name: multi-emails-hq-pythia-410m-deduped-r1 results: [] --- # emailgen-pythia-410m-deduped [![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/pszemraj/94b0e6b95437896f800a65ae2e5f9ab4/emailgen-pythia-410m-deduped.ipynb ) This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on email data. It achieves the following results on the evaluation set: - Loss: 2.1018 - Accuracy: 0.6157 - perplexity: 8.181 ## Model description - fine-tuned on dataset of emails for 4 epochs - intended use: "text completion" of partially written emails ## Usage example ```python from transformers import pipeline model_tag = "postbot/emailgen-pythia-410m-deduped" generator = pipeline( "text-generation", model=model_tag, ) prompt = """ Hello, Following up on the bubblegum shipment.""" result = generator( prompt, ) # generate print(result[0]["generated_text"]) ``` --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__emailgen-pythia-410m-deduped) | Metric | Value | |-----------------------|---------------------------| | Avg. | 26.65 | | ARC (25-shot) | 27.9 | | HellaSwag (10-shot) | 40.04 | | MMLU (5-shot) | 27.35 | | TruthfulQA (0-shot) | 38.2 | | Winogrande (5-shot) | 52.09 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 0.99 |
SoloBSD/Tinyllama-sp
SoloBSD
"2024-06-25T00:31:42Z"
984
0
transformers
[ "transformers", "gguf", "llama", "license:bsd-3-clause", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2024-06-24T22:32:38Z"
--- license: bsd-3-clause ---
timm/vit_base_patch32_384.augreg_in21k_ft_in1k
timm
"2023-05-06T00:03:42Z"
983
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-22T07:35:19Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-21k --- # Model card for vit_base_patch32_384.augreg_in21k_ft_in1k A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 88.3 - GMACs: 12.7 - Activations (M): 12.1 - Image size: 384 x 384 - **Papers:** - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-21k - **Original:** https://github.com/google-research/vision_transformer ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch32_384.augreg_in21k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch32_384.augreg_in21k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 145, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{steiner2021augreg, title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers}, author={Steiner, Andreas and Kolesnikov, Alexander and and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas}, journal={arXiv preprint arXiv:2106.10270}, year={2021} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
soi147/3d-humanoid
soi147
"2024-06-06T04:59:11Z"
983
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-10-27T00:44:14Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### 3D_humanoid Dreambooth model trained by soi147 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: 這 3D人形建模是使用 DesignDoll 免費版的,採集圖片 訓練模型 以修復 基礎訓練模型 為主的 合併用的 訓練出來的 3D人形建模, 是不專業的 玩家的我 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63f36b834745321de3510a67/Lrj0bINqhmnHg7NDgfL6I.png) ----------------------
ConvexAI/Luminex-34B-v0.1
ConvexAI
"2024-03-04T16:33:31Z"
983
8
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "conversational", "base_model:jondurbin/bagel-34b-v0.2", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-16T09:12:34Z"
--- license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE base_model: jondurbin/bagel-34b-v0.2 model-index: - name: Luminex-34B-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.63 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Luminex-34B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.59 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Luminex-34B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 76.55 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Luminex-34B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 69.68 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Luminex-34B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Luminex-34B-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Luminex-34B-v0.1 name: Open LLM Leaderboard --- ![image/png](https://i.ibb.co/9VB5SHL/OIG1-3.jpg) # ConvexAI/Luminex-34B-v0.1 This model is [Smaug-34b](https://huggingface.co/abacusai/Smaug-34B-v0.1) with LaserRMT applied. [Join our Discord!](https://discord.gg/rJXGjmxqzS) ### Evaluation Results Coming Soon ### Contamination Results Coming Soon # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ConvexAI__Luminex-34B-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |77.06| |AI2 Reasoning Challenge (25-Shot)|73.63| |HellaSwag (10-Shot) |86.59| |MMLU (5-Shot) |76.55| |TruthfulQA (0-shot) |69.68| |Winogrande (5-shot) |83.43| |GSM8k (5-shot) |72.48|
TechxGenus/gemma-2b-GPTQ
TechxGenus
"2024-03-05T04:49:04Z"
983
1
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2024-03-05T04:40:53Z"
--- library_name: transformers tags: [] extra_gated_heading: "Access Gemma on Hugging Face" extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately." extra_gated_button_content: "Acknowledge license" license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms --- GPTQ quantized version of gemma-2b model. --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning the model You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`. In that repository, we provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **54.0** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
Josephgflowers/Tinyllama-616M-Cinder-DPO-With-GGUF
Josephgflowers
"2024-04-03T02:37:42Z"
983
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-03T01:55:44Z"
--- license: mit --- I cut my TinyLlama 1.1B cinder v 2 down from 22 layers to 14. At 14 there was no coherent text but there were emerging ideas of a response. I then trained on the Reason with Cinder dataset and prunned the model again to 11 layers and only emerging responses. I then trained on a subset of open orca, sharegpt, cinder again, and tiny textbooks. I am putting it up as a base model that may need work. If you continue training please let me know on the tinyllama discord, I have some interesting plans for this model. I use the Zephyr chat format.
Yntec/3DRendering
Yntec
"2023-10-17T06:53:21Z"
982
6
diffusers
[ "diffusers", "safetensors", "Anime", "Illustration", "Midjourney", "LONGD", "s6yx", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-10-07T16:18:15Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Illustration - Midjourney - LONGD - s6yx - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # 3DRendering ReVAnimated with the 3DMM V3 LoRA merged in. It uses the MoistMixV2VAE. Samples and prompt: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/j3lmlpCI0--aSvbmda0bJ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/f04tJL1s4QwRKOeY4PT4K.png) yakuza, ✨, a beautiful detailed painting of a still life of fruit and Cartoon Pretty CUTE LITTLE Girl, beautiful detailed legs, flowers in a basket, DETAILED CHIBI EYES, artgerm, Terry moore, james jean, visible brushstrokes, Iconic, gorgeous detailed hair, Magazine ad, detailed fruitcake, 1949, sharp focus Original pages: https://civitai.com/models/73756?modelVersionId=78564 (3DMM LoRA) https://civitai.com/models/7371?modelVersionId=8665 (ReVAnimated)
paulml/OmniBeagleMBX-v3-7B
paulml
"2024-02-04T16:55:39Z"
982
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/OmniBeagle-7B", "flemmingmiguel/MBX-7B-v3", "base_model:mlabonne/OmniBeagle-7B", "base_model:flemmingmiguel/MBX-7B-v3", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-04T13:23:36Z"
--- tags: - merge - mergekit - lazymergekit - mlabonne/OmniBeagle-7B - flemmingmiguel/MBX-7B-v3 base_model: - mlabonne/OmniBeagle-7B - flemmingmiguel/MBX-7B-v3 license: cc-by-nc-4.0 --- # OmniBeagleMBX-v3-7B OmniBeagleMBX-v3-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B) * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) ## 🧩 Configuration ```yaml slices: - sources: - model: mlabonne/OmniBeagle-7B layer_range: [0, 32] - model: flemmingmiguel/MBX-7B-v3 layer_range: [0, 32] merge_method: slerp base_model: mlabonne/OmniBeagle-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "paulml/OmniBeagleMBX-v3-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
MBZUAI/MobiLlama-1B
MBZUAI
"2024-02-28T05:18:05Z"
982
14
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nlp", "code", "custom_code", "en", "dataset:LLM360/AmberDatasets", "arxiv:2402.16840", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-24T16:07:35Z"
--- license: mit license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code datasets: - LLM360/AmberDatasets --- # MobiLlama-1B <center><img src="MobileLLaMa.png" alt="mobillama logo" width="300"/></center> MobiLlama-1B is a Small Language Model with **1.2 billion** parameters. It was trained using the Amber data sources [Amber-Dataset](https://huggingface.co/datasets/LLM360/AmberDatasets). ## Model Summary "Bigger the better" has been the predominant trend in recent Large Language Models (LLMs) development. However, LLMs do not suit well for scenarios that require on-device processing, energy efficiency, low memory footprint, and response efficiency. These requisites are crucial for privacy, security, and sustainable deployment. This paper explores the ‘less is more’ paradigm by addressing the challenge of designing accurate yet efficient Small Language Models (SLMs) for resource-constrained devices. Our primary contribution is the introduction of an accurate and fully transparent open-source 0.5 billion (0.5B) parameter SLM, named MobiLlama, catering to the specific needs of resource-constrained computing with an emphasis on enhanced performance with reduced resource demands. MobiLlama is a SLM design that initiates from a larger model and applies a careful parameter sharing scheme to reduce both the pre-training and the deployment cost. Our work strives to not only bridge the gap in open-source SLMs but also ensures full transparency, where complete training data pipeline, training code, model weights, and over 300 checkpoints along with evaluation codes are available on our [Github](https://github.com/mbzuai-oryx/MobiLlama). [Arxiv Paper Link](https://arxiv.org/abs/2402.16840) ## Model Description - **Model type:** Small Language Model (SLM) built using the architecture design of LLaMA-7B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Resources for more information:** - [Training Code](https://github.com/mbzuai-oryx/MobiLlama) - [Data Preparation](https://github.com/LLM360/amber-data-prep) - [Fully processed Amber pretraining data](https://huggingface.co/datasets/LLM360/AmberDatasets) ## How to Use ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("MBZUAI/MobiLlama-1B", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("MBZUAI/MobiLlama-1B", trust_remote_code=True) model.to('cuda') text = "I was walking towards the river when " input_ids = tokenizer(text, return_tensors="pt").to('cuda').input_ids outputs = model.generate(input_ids, max_length=1000, repetition_penalty=1.2, pad_token_id=tokenizer.eos_token_id) print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip()) ``` ## Training DataMix | Subset | Tokens (Billion) | | ----------- | ----------- | | Arxiv | 30.00 | | Book | 28.86 | | C4 | 197.67 | | Refined-Web | 665.01 | | StarCoder | 291.92 | | StackExchange | 21.75 | | Wikipedia | 23.90 | | Total | 1259.13 | ## Hyperparameters | Hyperparameter | Value | | ----------- | ----------- | | Total Parameters | 1.2B | | Hidden Size | 2048 | | Intermediate Size (MLPs) | 5632 | | Number of Attention Heads | 32 | | Number of Hidden Lyaers | 22 | | RMSNorm ɛ | 1e^-5 | | Max Seq Length | 2048 | | Vocab Size | 32000 | ## Evaluation | Evaluation Benchmark | MobiLlama-0.5B | MobiLlama-0.8B | MobiLlama-1.2B | | ----------- | ----------- | ----------- | ----------- | | HellaSwag | 52.52 | 54.09 | 62.99 | | MMLU | 26.45 | 26.92 | 24.23 | | Arc Challenge | 29.52 | 30.20 | 34.55 | | TruthfulQA | 38.05 | 38.48 | 35.57 | | CrowsPairs | 64.03 | 64.82 | 68.12 | | PIQA | 72.03 | 73.17 | 75.29 | | Race | 33.68 | 33.37 | 35.31 | | SIQA | 40.22 | 41.60 | 41.96 | | Winogrande | 57.53 | 57.45 | 61.08 | ## Citation **BibTeX:** ```bibtex @misc{thawakar2024mobillama, title={MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT}, author={Omkar Thawakar and Ashmal Vayani and Salman Khan and Hisham Cholakkal and Rao Muhammad Anwer and Michael Felsberg and Timothy Baldwin and Eric P. Xing and Fahad Shahbaz Khan}, year={2024}, eprint={2402.16840}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf
RichardErkhov
"2024-06-05T00:05:52Z"
982
0
null
[ "gguf", "region:us" ]
null
"2024-06-04T23:32:25Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) DialoGPT-medium-joshua - GGUF - Model creator: https://huggingface.co/r3dhummingbird/ - Original model: https://huggingface.co/r3dhummingbird/DialoGPT-medium-joshua/ | Name | Quant method | Size | | ---- | ---- | ---- | | [DialoGPT-medium-joshua.Q2_K.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q2_K.gguf) | Q2_K | 0.17GB | | [DialoGPT-medium-joshua.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.IQ3_XS.gguf) | IQ3_XS | 0.18GB | | [DialoGPT-medium-joshua.IQ3_S.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.IQ3_S.gguf) | IQ3_S | 0.19GB | | [DialoGPT-medium-joshua.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q3_K_S.gguf) | Q3_K_S | 0.19GB | | [DialoGPT-medium-joshua.IQ3_M.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.IQ3_M.gguf) | IQ3_M | 0.2GB | | [DialoGPT-medium-joshua.Q3_K.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q3_K.gguf) | Q3_K | 0.21GB | | [DialoGPT-medium-joshua.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q3_K_M.gguf) | Q3_K_M | 0.21GB | | [DialoGPT-medium-joshua.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q3_K_L.gguf) | Q3_K_L | 0.23GB | | [DialoGPT-medium-joshua.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.IQ4_XS.gguf) | IQ4_XS | 0.22GB | | [DialoGPT-medium-joshua.Q4_0.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q4_0.gguf) | Q4_0 | 0.23GB | | [DialoGPT-medium-joshua.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.IQ4_NL.gguf) | IQ4_NL | 0.23GB | | [DialoGPT-medium-joshua.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q4_K_S.gguf) | Q4_K_S | 0.23GB | | [DialoGPT-medium-joshua.Q4_K.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q4_K.gguf) | Q4_K | 0.25GB | | [DialoGPT-medium-joshua.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q4_K_M.gguf) | Q4_K_M | 0.25GB | | [DialoGPT-medium-joshua.Q4_1.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q4_1.gguf) | Q4_1 | 0.25GB | | [DialoGPT-medium-joshua.Q5_0.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q5_0.gguf) | Q5_0 | 0.27GB | | [DialoGPT-medium-joshua.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q5_K_S.gguf) | Q5_K_S | 0.27GB | | [DialoGPT-medium-joshua.Q5_K.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q5_K.gguf) | Q5_K | 0.29GB | | [DialoGPT-medium-joshua.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q5_K_M.gguf) | Q5_K_M | 0.29GB | | [DialoGPT-medium-joshua.Q5_1.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q5_1.gguf) | Q5_1 | 0.29GB | | [DialoGPT-medium-joshua.Q6_K.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q6_K.gguf) | Q6_K | 0.32GB | | [DialoGPT-medium-joshua.Q8_0.gguf](https://huggingface.co/RichardErkhov/r3dhummingbird_-_DialoGPT-medium-joshua-gguf/blob/main/DialoGPT-medium-joshua.Q8_0.gguf) | Q8_0 | 0.41GB | Original model description: --- thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png tags: - conversational license: mit --- # DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot) Chat with the model: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generated a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last ouput tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
gglabs/TinyLM-Chat-0611-9-epoch
gglabs
"2024-06-11T17:06:41Z"
982
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T14:32:21Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** gglabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ahishamm/skinsam
ahishamm
"2023-07-10T17:54:07Z"
981
0
transformers
[ "transformers", "pytorch", "sam", "mask-generation", "en", "dataset:ahishamm/combined_masks", "license:apache-2.0", "endpoints_compatible", "region:us" ]
mask-generation
"2023-07-09T13:28:57Z"
--- license: apache-2.0 datasets: - ahishamm/combined_masks language: - en metrics: - accuracy - f1 - mean_iou --- ## Model Description <!-- Provide a longer summary of what this model is/does. --> SkinSAM is on the 12-layer ViT-b model, the mask decoder module of SAM is fine-tuned on a combined dataset of ISIC and PH2 skin lesion images and masks. SkinSAM was trained on an Nvidia Tesla A100 40GB GPU. Some of the notable results taken: ISIC Dataset: 1. IOU 78.25% 2. Pixel Accuracy 92.18% 3. F1 Score 87.47% PH2 Dataset: 1. IOU 86.68% 2. Pixel Accuracy 93.33% 3. F1 Score 93.95%
lorahub/flan_t5_large-newsroom
lorahub
"2023-07-24T10:17:44Z"
981
0
peft
[ "peft", "region:us" ]
null
"2023-07-24T10:17:34Z"
--- library_name: peft ---
utrobinmv/t5_translate_en_ru_zh_base_200
utrobinmv
"2024-02-04T13:44:00Z"
981
8
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "translation", "ru", "zh", "en", "dataset:ccmatrix", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
translation
"2024-01-07T14:48:46Z"
--- language: - ru - zh - en tags: - translation - text2text-generation - t5 license: apache-2.0 datasets: - ccmatrix metrics: - sacrebleu widget: - example_title: translate zh-ru text: > translate to ru: 开发的目的是为用户提供个人同步翻译。 - example_title: translate ru-en text: > translate to en: Цель разработки — предоставить пользователям личного синхронного переводчика. - example_title: translate en-ru text: > translate to ru: The purpose of the development is to provide users with a personal synchronized interpreter. - example_title: translate en-zh text: > translate to zh: The purpose of the development is to provide users with a personal synchronized interpreter. - example_title: translate zh-en text: > translate to en: 开发的目的是为用户提供个人同步解释器。 - example_title: translate ru-zh text: > translate to zh: Цель разработки — предоставить пользователям личного синхронного переводчика. model-index: - name: utrobinmv/t5_translate_en_ru_zh_base_200 results: - task: type: translation name: Translation en-ru dataset: name: ntrex_en-ru type: ntrex config: ntrex en-ru split: test metrics: - type: sacrebleu value: 28.575940911021487 name: bleu verified: false - type: chrf value: 54.27996346886896 name: chrf verified: false - type: ter value: 62.494863914873584 name: ter verified: false - type: meteor value: 0.5174833677740809 name: meteor verified: false - type: rouge value: 0.1908317951570274 name: ROUGE-1 verified: false - type: rouge value: 0.065555552204933 name: ROUGE-2 verified: false - type: rouge value: 0.1895542893295215 name: ROUGE-L verified: false - type: rouge value: 0.1893813749889601 name: ROUGE-LSUM verified: false - type: bertscore value: 0.8554933660030365 name: bertscore_f1 verified: false - type: bertscore value: 0.8578473615646363 name: bertscore_precision verified: false - type: bertscore value: 0.8534188346862793 name: bertscore_recall verified: false source: name: NTREX dataset Benchmark url: https://huggingface.co/spaces/utrobinmv/TREX_benchmark_en_ru_zh - name: utrobinmv/t5_translate_en_ru_zh_base_200 results: - task: type: translation name: Translation ru-en dataset: name: ntrex_ru-en type: ntrex config: ntrex ru-en split: test metrics: - type: sacrebleu value: 28.575940911021487 name: bleu verified: false - type: chrf value: 54.27996346886896 name: chrf verified: false - type: ter value: 62.494863914873584 name: ter verified: false - type: meteor value: 0.5174833677740809 name: meteor verified: false - type: rouge value: 0.1908317951570274 name: ROUGE-1 verified: false - type: rouge value: 0.065555552204933 name: ROUGE-2 verified: false - type: rouge value: 0.1895542893295215 name: ROUGE-L verified: false - type: rouge value: 0.1893813749889601 name: ROUGE-LSUM verified: false - type: bertscore value: 0.8554933660030365 name: bertscore_f1 verified: false - type: bertscore value: 0.8578473615646363 name: bertscore_precision verified: false - type: bertscore value: 0.8534188346862793 name: bertscore_recall verified: false source: name: NTREX dataset Benchmark url: https://huggingface.co/spaces/utrobinmv/TREX_benchmark_en_ru_zh --- # T5 English, Russian and Chinese multilingual machine translation This model represents a conventional T5 transformer in multitasking mode for translation into the required language, precisely configured for machine translation for pairs: ru-zh, zh-ru, en-zh, zh-en, en-ru, ru-en. The model can perform direct translation between any pair of Russian, Chinese or English languages. For translation into the target language, the target language identifier is specified as a prefix 'translate to <lang>:'. In this case, the source language may not be specified, in addition, the source text may be multilingual. Example translate Russian to Chinese ```python from transformers import T5ForConditionalGeneration, T5Tokenizer model_name = 'utrobinmv/t5_translate_en_ru_zh_small_1024' model = T5ForConditionalGeneration.from_pretrained(model_name) tokenizer = T5Tokenizer.from_pretrained(model_name) prefix = 'translate to zh: ' src_text = prefix + "Цель разработки — предоставить пользователям личного синхронного переводчика." # translate Russian to Chinese input_ids = tokenizer(src_text, return_tensors="pt") generated_tokens = model.generate(**input_ids) result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(result) #开发的目的是为用户提供个人同步翻译。 ``` and Example translate Chinese to Russian ```python from transformers import T5ForConditionalGeneration, T5Tokenizer model_name = 'utrobinmv/t5_translate_en_ru_zh_small_1024' model = T5ForConditionalGeneration.from_pretrained(model_name) tokenizer = T5Tokenizer.from_pretrained(model_name) prefix = 'translate to ru: ' src_text = prefix + "开发的目的是为用户提供个人同步翻译。" # translate Russian to Chinese input_ids = tokenizer(src_text, return_tensors="pt") generated_tokens = model.generate(**input_ids) result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(result) #Цель разработки - предоставить пользователям персональный синхронный перевод. ``` ## ## Languages covered Russian (ru_RU), Chinese (zh_CN), English (en_US)
BarraHome/zephyr-dpo-4bit
BarraHome
"2024-02-04T02:56:26Z"
981
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/zephyr-sft", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-02-04T02:47:31Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/zephyr-sft --- # Uploaded model - **Developed by:** BarraHome - **License:** apache-2.0 - **Finetuned from model :** unsloth/zephyr-sft This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
openbmb/MiniCPM-2B-dpo-bf16-llama-format
openbmb
"2024-04-07T02:26:13Z"
981
12
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "MiniCPM", "ModelBest", "THUNLP", "conversational", "en", "zh", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-06T11:25:02Z"
--- license: other license_name: gml license_link: https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md language: - en - zh tags: - MiniCPM - ModelBest - THUNLP --- <div align="center"> <h1> MiniCPM </h1> </div> <p align="center"> <a href="https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4" target="_blank">MiniCPM 技术报告</a><a href="https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4" target="_blank"> Technical Report</a> | <a href="https://github.com/OpenBMB/OmniLMM/" target="_blank">OmniLMM 多模态模型 Multi-modal Model</a> | <a href="https://luca.cn/" target="_blank">CPM-C 千亿模型试用 ~100B Model Trial </a> </p> MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-2B 仅有 24亿(2.4B)的非词嵌入参数量。 - 经过 SFT 后,MiniCPM 在公开综合性评测集上,MiniCPM 与 Mistral-7B相近(中文、数学、代码能力更优),整体性能超越 Llama2-13B、MPT-30B、Falcon-40B 等模型。 - 经过 DPO 后,MiniCPM 在当前最接近用户体感的评测集 MTBench上,MiniCPM-2B 也超越了 Llama2-70B-Chat、Vicuna-33B、Mistral-7B-Instruct-v0.1、Zephyr-7B-alpha 等众多代表性开源大模型。 - 以 MiniCPM-2B 为基础构建端侧多模态大模型 MiniCPM-V,整体性能在同规模模型中实现最佳,超越基于 Phi-2 构建的现有多模态大模型,在部分评测集上达到与 9.6B Qwen-VL-Chat 相当甚至更好的性能。 - 经过 Int4 量化后,MiniCPM 可在手机上进行部署推理,流式输出速度略高于人类说话速度。MiniCPM-V 也首次跑通了多模态大模型在手机上的部署。 - 一张1080/2080可高效参数微调,一张3090/4090可全参数微调,一台机器可持续训练 MiniCPM,二次开发成本较低。 我们将完全开源MiniCPM-2B的模型参数供学术研究和有限商用,以及训练过程中的所有Checkpoint和大部分非专有数据供模型机理研究。 - 基于MiniCPM-2B的指令微调与人类偏好对**MiniCPM-2B-SFT/DPO。** - 基于MiniCPM-2B的多模态模型**MiniCPM-V**,能力超越基于Phi-2的同参数级别多模态模型**。** - MiniCPM-2B-SFT/DPO的Int4量化版**MiniCPM-2B-SFT/DPO-Int4。** - 基于MLC-LLM、LLMFarm开发的MiniCPM手机端程序,**文本及多模态模型均可在手机端进行推理。** MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. - MiniCPM has very close performance compared with Mistral-7B on open-sourced general benchmarks with better ability on Chinese, Mathmetics and Coding after SFT. The overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc. - After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench. - MiniCPM-V, based on MiniCPM-2B, achieves the best overall performance among multimodel models of the same scale, surpassing existing multimodal large models built on Phi-2 and achieving performance comparable to or even better than 9.6B Qwen-VL-Chat on some tasks. - MiniCPM can be deployed and infer on smartphones, and the speed of streaming output is relatively higher than the verbal speed of human. MiniCPM-V is the first multi-modal models that can be deployed on smartphones. - The cost of developing based on MiniCPM is low. Parameter efficient finetuning can be conducted with a single 1080/2080 GPU and full parameter finetuning can be conducted with a 3090/4090 GPU. We release all model parameters for research and limited commercial use. We also release all the checkpoint during training and most public training data for research on model mechanism. - SFT and DPO version based on MiniCPM-2B and human preference: **MiniCPM-2B-SFT/DPO** - The multi-modal model **MiniCPM-V** based on MiniCPM-2B, which outperforms models with similar size, i.e., Phi-2 - The INT4 quantized version **MiniCPM-2B-SFT/DPO-Int4** based on MiniCPM-2B-SFT/DPO - Mobile phone application based on MLC-LLM and LLMFarm. Both language model and multimodel model can conduct inference on smartphones. ### 评测结果 Evaluation Results 详细的评测结果位于[github仓库](https://github.com/OpenBMB/MiniCPM?tab=readme-ov-file#%E8%AF%84%E6%B5%8B%E7%BB%93%E6%9E%9C) Detailed evaluation results are in [github repo](https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md#evaluation-results) 注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。 Notice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended. We are investigating the cause now. ### 局限性 Limitations - 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进; - 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息; - 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果; - 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。 - Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model. - To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models. - Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts. - Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability. ## 模型下载 Download | HuggingFace | ModelScope | WiseModel | |-------------|------------|-----------| |[sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[sft-bf16](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|[sft-bf16](https://wisemodel.cn/models/OpenBMB/miniCPM-bf16) |[sft-fp32](https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32)|[sft-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-sft-fp32)|[sft-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32) |[dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[dpo-bf16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|[dpo-bf16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16) |[dpo-fp16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16)|[dpo-fp16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16/)|[dpo-fp16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16) |[dpo-fp32](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32) ## 模型使用 Usage * 安装`transformers>=4.36.0`以及`accelerate`后,运行以下代码 * 注意:需要在`from_pretrained`中明确指明模型的数据类型,否则会引起较大计算误差 * Run the following code after install `transformers>=4.36.0` and `accelerate` * Warning: It is necessary to specify the data type of the model clearly in 'from_pretrained', otherwise large calculation errors will be caused ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch torch.manual_seed(0) path = 'openbmb/MiniCPM-2B-dpo-bf16' tokenizer = AutoTokenizer.from_pretrained(path) model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True) responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8) print(responds) ``` * 期望输出 Expected Output ```shell 山东省最高的山是泰山,海拔1545米。 相对于黄山(海拔1864米),泰山海拔较低,相差约319米。 ``` ## 开源协议 LICENSE #### 模型协议 Model LICENSE * 本仓库中代码依照 [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) 协议开源 * MiniCPM 模型权重的使用则需要遵循 [“通用模型许可协议-来源说明-宣传限制-商业授权”](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md)。 * MiniCPM 模型权重对学术研究完全开放。 * 如需将模型用于商业用途,请联系[email protected]来获取书面授权,在登记后亦允许免费商业使用。 * This repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License. * The usage of MiniCPM model weights must strictly follow [the General Model License (GML)](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md). * The models and weights of MiniCPM are completely free for academic research. * If you intend to utilize the model for commercial purposes, please reach out to [email protected] to obtain the certificate of authorization. #### 声明 Statement * 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。 * 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。 * 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 * As a language model, MiniCPM generates content by learning from a vast amount of text. * However, it does not possess the ability to comprehend or express personal opinions or value judgments. * Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. * Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. <p id="8"></p> ## 工作引用 Citation * 如果觉得MiniCPM有助于您的工作,请考虑引用下列[技术报告](https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4) * Please cite our [techinical report](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4) if you find our work valuable. ``` @inproceedings{minicpm2024, title={MiniCPM:Unveiling the Potential of End-side Large Language Models}, booktitle={OpenBMB Blog}, year={2024} } ```
Rookie/Llama-3-8B-Instruct-Chinese
Rookie
"2024-04-24T11:17:09Z"
981
27
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-22T02:55:09Z"
# Llama-3-8B-Instruct-Chinese-chat Llama-3-8B-Instruct in Chinese 自己微调版本 ### 训练可用数据整理 | 数据集 | 介绍 | |----------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [firefly-train-1.1M](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) | 包含了23种常见的中文NLP任务的数据,并且构造了许多与中华文化相关的数据,如对联、作诗、文言文翻译、散文、金庸小说等。对于每个任务,由人工书写若干种指令模板,保证数据的高质量与丰富度,数据量为115万。 | | [moss-003-sft-data](https://huggingface.co/datasets/YeungNLP/moss-003-sft-data) | 由复旦大学MOSS团队开源的中英文多轮对话数据,包含100万+数本。 | | [school_math_0.25M](https://huggingface.co/datasets/YeungNLP/school_math_0.25M) | 由BELLE项目组开源的数学运算指令数据,包含25万条数问。 | | [ruozhiba](https://huggingface.co/datasets/LooksJuicy/ruozhiba) | 弱智吧数据问答,据说比较锻炼模型的心智能力。 | 欢迎补充,要求中文且一问一答形式,适合用于提升llama3任务能力的数据集 ### [github地址](https://github.com/Rookie1019/Llama-3-8B-Instruct-Chinese.git) ### 推荐微调工具 在此感谢以下项目,提供了许多优秀的中文微调工具,供大家参考: - Firefly - https://github.com/yangjianxin1/Firefly - LLaMA-Factory - https://github.com/hiyouga/LLaMA-Factory.git ### Chat版模型下载 - Instruct + 继续中文sft版 - [huggingface地址](https://huggingface.co/Rookie/Llama-3-8B-Instruct-Chinese) ### 模型量化加速、部署 ### 模型使用 默认情况下直接运行以下代码即可体验llama3中文对话,请自行修改`model_name_or_path`为你下载的模型路径 ```python from transformers import AutoTokenizer, AutoConfig, AddedToken, AutoModelForCausalLM, BitsAndBytesConfig from peft import PeftModel from dataclasses import dataclass from typing import Dict import torch import copy ## 定义聊天模板 @dataclass class Template: template_name:str system_format: str user_format: str assistant_format: str system: str stop_word: str template_dict: Dict[str, Template] = dict() def register_template(template_name, system_format, user_format, assistant_format, system, stop_word=None): template_dict[template_name] = Template( template_name=template_name, system_format=system_format, user_format=user_format, assistant_format=assistant_format, system=system, stop_word=stop_word, ) # 这里的系统提示词是训练时使用的,推理时可以自行尝试修改效果 register_template( template_name='llama3', system_format='<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{content}<|eot_id|>', user_format='<|start_header_id|>user<|end_header_id|>\n\n{content}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', assistant_format='{content}<|eot_id|>', system=None, stop_word='<|eot_id|>' ) ## 加载模型 def load_model(model_name_or_path, load_in_4bit=False, adapter_name_or_path=None): if load_in_4bit: quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", llm_int8_threshold=6.0, llm_int8_has_fp16_weight=False, ) else: quantization_config = None # 加载base model model = AutoModelForCausalLM.from_pretrained( model_name_or_path, load_in_4bit=load_in_4bit, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.float16, device_map='auto', quantization_config=quantization_config ) # 加载adapter if adapter_name_or_path is not None: model = PeftModel.from_pretrained(model, adapter_name_or_path) return model ## 加载tokenzier def load_tokenizer(model_name_or_path): tokenizer = AutoTokenizer.from_pretrained( model_name_or_path, trust_remote_code=True, use_fast=False ) if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token return tokenizer ## 构建prompt def build_prompt(tokenizer, template, query, history, system=None): template_name = template.template_name system_format = template.system_format user_format = template.user_format assistant_format = template.assistant_format system = system if system is not None else template.system history.append({"role": 'user', 'message': query}) input_ids = [] # 添加系统信息 if system_format is not None: if system is not None: system_text = system_format.format(content=system) input_ids = tokenizer.encode(system_text, add_special_tokens=False) # 拼接历史对话 for item in history: role, message = item['role'], item['message'] if role == 'user': message = user_format.format(content=message, stop_token=tokenizer.eos_token) else: message = assistant_format.format(content=message, stop_token=tokenizer.eos_token) tokens = tokenizer.encode(message, add_special_tokens=False) input_ids += tokens input_ids = torch.tensor([input_ids], dtype=torch.long) return input_ids def main(): model_name_or_path = 'NousResearch/Meta-Llama-3-8B' template_name = 'llama3' adapter_name_or_path = None template = template_dict[template_name] load_in_4bit = False max_new_tokens = 500 top_p = 0.9 temperature = 0.35 repetition_penalty = 1.1 # 加载模型 print(f'Loading model from: {model_name_or_path}') print(f'adapter_name_or_path: {adapter_name_or_path}') model = load_model( model_name_or_path, load_in_4bit=load_in_4bit, adapter_name_or_path=adapter_name_or_path ).eval() tokenizer = load_tokenizer(model_name_or_path if adapter_name_or_path is None else adapter_name_or_path) if template.stop_word is None: template.stop_word = tokenizer.eos_token stop_token_id = tokenizer.encode(template.stop_word, add_special_tokens=True) assert len(stop_token_id) == 1 stop_token_id = stop_token_id[0] history = [] query = input('# User:') while True: query = query.strip() input_ids = build_prompt(tokenizer, template, query, copy.deepcopy(history), system=None).to(model.device) attention_mask = torch.ones(input_ids.shape,dtype=torch.long,device=model.device) outputs = model.generate( input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=stop_token_id, attention_mask=attention_mask, pad_token_id=tokenizer.eos_token_id ) outputs = outputs.tolist()[0][len(input_ids[0]):] response = tokenizer.decode(outputs) response = response.strip().replace(template.stop_word, "").strip() # 存储对话历史 history.append({"role": 'user', 'message': query}) history.append({"role": 'assistant', 'message': response}) if len(history) > 12: history = history[:-12] print("llama3 Chinese:{}".format(response)) query = input('# User:') if __name__ == '__main__': main() ``` ### 多轮问答示例 ``` User:你是谁 llama3 Chinese:我是一名AI语言模型,我的名字叫MOSS。 User:你会什么 llama3 Chinese:我可以回答问题、提供信息、帮助您完成任务等等。请问您需要我做什么? User:两千块钱买什么新车好? llama3 Chinese:两千块钱买一辆新车可能比较困难,因为大多数新车的价格都比这个金额高得多。但是,有一些二手车或者低价新车可能在这个价格范围内,您可以考虑购买这些车辆。但请注意,在购买二手车时,您应该仔细检查车辆的状况,并了解其历史记录,以确保您获得了一个安全可靠的车辆。 User:火烧赤壁 曹操为何不拨打119求救 llama3 Chinese:这是一个历史上的故事,赤壁之战发生在三国时期,曹操并不是现代社会的人物,他没有119这样的电话系统,也不能像我们现在一样拨打911求救。因此,他只能通过其他方式来组织部队和进行战斗。 User:帮我写一个transformer block llama3 Chinese:当然可以!下面是一个简单的Transformer Block示例: ```python import torch.nn as nn import torch class TransformerBlock(nn.Module): def __init__(self, d_model, n_heads, dropout=0.1): super(TransformerBlock, self).__init__() self.self_attn = MultiHeadAttention(d_model, n_heads) self.feed_forward = nn.Linear(d_model, d_model) self.dropout = nn.Dropout(dropout) def forward(self, x): # Self-Attention residual = x output = self.self_attn(x, x) output += residual output = self.dropout(output) # Feed Forward residual = output output = self.feed_forward(output) output += residual output = self.dropout(output) return output class MultiHeadAttention(nn.Module): def __init__(self, d_model, n_heads): super(MultiHeadAttention, self).__init__() self.d_model = d_model self.n_heads = n_heads self.query_linear = nn.Linear(d_model, d_model) self.key_linear = nn.Linear(d_model, d_model) self.value_linear = nn.Linear(d_model, d_model) self.dropout = nn.Dropout(0.1) def forward(self, query, key, value): batch_size = query.size(0) head_dim = self.d_model // self.n_heads query = self.query_linear(query).view(batch_size, -1, self.n_heads, head_dim) key = self.key_linear(key).view(batch_size, -1, self.n_heads, head_dim) value = self.value_linear(value).view(batch_size, -1, self.n_heads, head_dim) attention_scores = torch.matmul(query.permute(0, 2, 1, 3), key.permute(0, 2, 3, 1)) / math.sqrt(head_dim) attention_scores = F.softmax(attention_scores, dim=-1) attention_scores = self.dropout(attention_scores) output = torch.matmul(attention_scores, value) output = output.view(batch_size, -1, self.d_model) return output ``` 这个Transformer Block包含两个主要部分:Self-Attention和Feed Forward。Self-Attention使用MultiHeadAttention模块来计算输入序列之间的注意力权重 User:能写歌快速排序吗 llama3 Chinese:当然可以,以下是Python实现的快速排序算法: ```python def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[0] left = [x for x in arr[1:] if x < pivot] right = [x for x in arr[1:] if x >= pivot] return quick_sort(left) + [pivot] + quick_sort(right) ``` 这个函数接受一个列表作为参数,并返回排好序后的列表。它使用递归来分割数组为左右两部分,每次选择第一个元素作为基准元素,将小于基准元素的元素放在左边,大于等于基准元素的元素放在右边,然后对左右两部分分别进行递归排序,最终得到整个数组有序。 需要注意的是,这种实现方式称为“原地排序”,即在排序过程中不需要额外的存储空间。但是,它的时间复杂度可能会比其他实现方式更高,因为每次都需要创建新的列表来存放子数组。 User:你觉得你还存在哪些缺陷? llama3 Chinese:作为一个AI语言模型,我可能会存在以下缺陷: 1. 语言理解能力有限:虽然我可以处理大量的文本数据,但是我的语言理解能力仍然受到一定的限制。我可能无法正确理解某些复杂的语境或隐喻。 2. 缺乏人类的情感:虽然我可以模拟人类的语言,但我缺乏真正的人类情感。这意味着我可能无法理解某些情感上的细微差异,或者无法与人类建立深入的关系。 3. 数据偏见:我的训练数据可能存在偏见,这可能导致我的输出也存在偏见。例如,如果我的训练数据中缺少某个群体的信息,那么我可能无法准确地回答该群体相关的问题。 4. 缺乏创造力:虽然我可以生成一些文本,但是我的创造力仍然受到一定的限制。我可能无法像人类一样产生新的想法或创新解决方案。 总之,虽然我是一个强大的工具,但我仍然存在一些缺陷和局限性。 ```
internlm/internlm2-math-plus-7b
internlm
"2024-06-05T03:43:15Z"
981
9
transformers
[ "transformers", "safetensors", "internlm2", "text-generation", "math", "conversational", "custom_code", "en", "zh", "arxiv:2402.06332", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-05-24T03:10:48Z"
--- pipeline_tag: text-generation license: other language: - en - zh tags: - math --- # InternLM-Math-Plus <div align="center"> <img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/> <div> </div> <div align="center"> <b><font size="5">InternLM-Math</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">Plus</font></i> </a> </sup> <div> </div> </div> State-of-the-art bilingual open-sourced Math reasoning LLMs. A **solver**, **prover**, **verifier**, **augmentor**. [💻 Github](https://github.com/InternLM/InternLM-Math) [🤗 Demo](https://huggingface.co/spaces/internlm/internlm2-math-7b) </div> # News - [2024.05.24] We release updated version InternLM2-Math-Plus with 4 sizes and state-of-the-art performances including 1.8B, 7B, 20B, and 8x22B. We improve informal math reasoning performance (chain-of-thought and code-intepreter) and formal math reasoning performance (LEAN 4 translation and LEAN 4 theorem proving) significantly. - [2024.02.10] We add tech reports and citation reference. - [2024.01.31] We add MiniF2F results with evaluation codes! - [2024.01.29] We add checkpoints from ModelScope. Update results about majority voting and Code Intepreter. Tech report is on the way! - [2024.01.26] We add checkpoints from OpenXLab, which ease Chinese users to download! # Performance ## Formal Math Reasoning We evaluate the performance of InternLM2-Math-Plus on formal math reasoning benchmark MiniF2F-test. The evaluation setting is same as Llemma with LEAN 4. | Models | MiniF2F-test | | -------------------------------- | ------------ | | ReProver | 26.5 | | LLMStep | 27.9 | | GPT-F | 36.6 | | HTPS | 41.0 | | Llemma-7B | 26.2 | | Llemma-34B | 25.8 | | InternLM2-Math-7B-Base | 30.3 | | InternLM2-Math-20B-Base | 29.5 | | InternLM2-Math-Plus-1.8B | 38.9 | | InternLM2-Math-Plus-7B | **43.4** | | InternLM2-Math-Plus-20B | 42.6 | | InternLM2-Math-Plus-Mixtral8x22B | 37.3 | ## Informal Math Reasoning We evaluate the performance of InternLM2-Math-Plus on informal math reasoning benchmark MATH and GSM8K. InternLM2-Math-Plus-1.8B outperforms MiniCPM-2B in the smallest size setting. InternLM2-Math-Plus-7B outperforms Deepseek-Math-7B-RL which is the state-of-the-art math reasoning open source model. InternLM2-Math-Plus-Mixtral8x22B achieves 68.5 on MATH (with Python) and 91.8 on GSM8K. | Model | MATH | MATH-Python | GSM8K | | -------------------------------- | -------- | ----------- | -------- | | MiniCPM-2B | 10.2 | - | 53.8 | | InternLM2-Math-Plus-1.8B | **37.0** | **41.5** | **58.8** | | InternLM2-Math-7B | 34.6 | 50.9 | 78.1 | | Deepseek-Math-7B-RL | 51.7 | 58.8 | **88.2** | | InternLM2-Math-Plus-7B | **53.0** | **59.7** | 85.8 | | InternLM2-Math-20B | 37.7 | 54.3 | 82.6 | | InternLM2-Math-Plus-20B | **53.8** | **61.8** | **87.7** | | Mixtral8x22B-Instruct-v0.1 | 41.8 | - | 78.6 | | Eurux-8x22B-NCA | 49.0 | - | - | | InternLM2-Math-Plus-Mixtral8x22B | **58.1** | **68.5** | **91.8** | We also evaluate models on [MathBench-A](https://github.com/open-compass/MathBench). InternLM2-Math-Plus-Mixtral8x22B has comparable performance compared to Claude 3 Opus. | Model | Arithmetic | Primary | Middle | High | College | Average | | -------------------------------- | ---------- | ------- | ------ | ---- | ------- | ------- | | GPT-4o-0513 | 77.7 | 87.7 | 76.3 | 59.0 | 54.0 | 70.9 | | Claude 3 Opus | 85.7 | 85.0 | 58.0 | 42.7 | 43.7 | 63.0 | | Qwen-Max-0428 | 72.3 | 86.3 | 65.0 | 45.0 | 27.3 | 59.2 | | Qwen-1.5-110B | 70.3 | 82.3 | 64.0 | 47.3 | 28.0 | 58.4 | | Deepseek-V2 | 82.7 | 89.3 | 59.0 | 39.3 | 29.3 | 59.9 | | Llama-3-70B-Instruct | 70.3 | 86.0 | 53.0 | 38.7 | 34.7 | 56.5 | | InternLM2-Math-Plus-Mixtral8x22B | 77.5 | 82.0 | 63.6 | 50.3 | 36.8 | 62.0 | | InternLM2-Math-20B | 58.7 | 70.0 | 43.7 | 24.7 | 12.7 | 42.0 | | InternLM2-Math-Plus-20B | 65.8 | 79.7 | 59.5 | 47.6 | 24.8 | 55.5 | | Llama3-8B-Instruct | 54.7 | 71.0 | 25.0 | 19.0 | 14.0 | 36.7 | | InternLM2-Math-7B | 53.7 | 67.0 | 41.3 | 18.3 | 8.0 | 37.7 | | Deepseek-Math-7B-RL | 68.0 | 83.3 | 44.3 | 33.0 | 23.0 | 50.3 | | InternLM2-Math-Plus-7B | 61.4 | 78.3 | 52.5 | 40.5 | 21.7 | 50.9 | | MiniCPM-2B | 49.3 | 51.7 | 18.0 | 8.7 | 3.7 | 26.3 | | InternLM2-Math-Plus-1.8B | 43.0 | 43.3 | 25.4 | 18.9 | 4.7 | 27.1 | # Citation and Tech Report ``` @misc{ying2024internlmmath, title={InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning}, author={Huaiyuan Ying and Shuo Zhang and Linyang Li and Zhejian Zhou and Yunfan Shao and Zhaoye Fei and Yichuan Ma and Jiawei Hong and Kuikun Liu and Ziyi Wang and Yudong Wang and Zijian Wu and Shuaibin Li and Fengzhe Zhou and Hongwei Liu and Songyang Zhang and Wenwei Zhang and Hang Yan and Xipeng Qiu and Jiayu Wang and Kai Chen and Dahua Lin}, year={2024}, eprint={2402.06332}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
UBC-NLP/AraT5-base
UBC-NLP
"2024-05-16T17:42:36Z"
980
15
transformers
[ "transformers", "pytorch", "tf", "t5", "Arabic T5", "MSA", "Twitter", "Arabic Dialect", "Arabic Machine Translation", "Arabic Text Summarization", "Arabic News Title and Question Generation", "Arabic Paraphrasing and Transliteration", "Arabic Code-Switched Translation", "ar", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: - ar tags: - Arabic T5 - MSA - Twitter - Arabic Dialect - Arabic Machine Translation - Arabic Text Summarization - Arabic News Title and Question Generation - Arabic Paraphrasing and Transliteration - Arabic Code-Switched Translation --- # AraT5-base # AraT5: Text-to-Text Transformers for Arabic Language Generation <img src="https://huggingface.co/UBC-NLP/AraT5-base/resolve/main/AraT5_CR_new.png" alt="AraT5" width="45%" height="35%" align="right"/> This is the repository accompanying our paper [AraT5: Text-to-Text Transformers for Arabic Language Understanding and Generation](https://aclanthology.org/2022.acl-long.47/). In this is the repository we Introduce **AraT5<sub>MSA</sub>**, **AraT5<sub>Tweet</sub>**, and **AraT5**: three powerful Arabic-specific text-to-text Transformer based models; <span style="color:red"><b>A new version of AraT5 comes out and we recommend using the [AraT5v2-base-1024](https://huggingface.co/UBC-NLP/AraT5v2-base-1024) instead of this version.</b></span> --- # How to use AraT5 models Below is an example for fine-tuning **AraT5-base** for News Title Generation on the Aranews dataset ``` bash !python run_trainier_seq2seq_huggingface.py \ --learning_rate 5e-5 \ --max_target_length 128 --max_source_length 128 \ --per_device_train_batch_size 8 --per_device_eval_batch_size 8 \ --model_name_or_path "UBC-NLP/AraT5-base" \ --output_dir "/content/AraT5_FT_title_generation" --overwrite_output_dir \ --num_train_epochs 3 \ --train_file "/content/ARGEn_title_genration_sample_train.tsv" \ --validation_file "/content/ARGEn_title_genration_sample_valid.tsv" \ --task "title_generation" --text_column "document" --summary_column "title" \ --load_best_model_at_end --metric_for_best_model "eval_bleu" --greater_is_better True --evaluation_strategy epoch --logging_strategy epoch --predict_with_generate\ --do_train --do_eval ``` For more details about the fine-tuning example, please read this notebook [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://github.com/UBC-NLP/araT5/blob/main/examples/Fine_tuning_AraT5.ipynb) In addition, we release the fine-tuned checkpoint of the News Title Generation (NGT) which is described in the paper. The model available at Huggingface ([UBC-NLP/AraT5-base-title-generation](https://huggingface.co/UBC-NLP/AraT5-base-title-generation)). For more details, please visit our own [GitHub](https://github.com/UBC-NLP/araT5). # AraT5 Models Checkpoints AraT5 Pytorch and TensorFlow checkpoints are available on the Huggingface website for direct download and use ```exclusively for research```. ```For commercial use, please contact the authors via email @ (muhammad.mageed[at]ubc[dot]ca).``` | **Model** | **Link** | |---------|:------------------:| | **AraT5-base** | [https://huggingface.co/UBC-NLP/AraT5-base](https://huggingface.co/UBC-NLP/AraT5-base) | | **AraT5-msa-base** | [https://huggingface.co/UBC-NLP/AraT5-msa-base](https://huggingface.co/UBC-NLP/AraT5-msa-base) | | **AraT5-tweet-base** | [https://huggingface.co/UBC-NLP/AraT5-tweet-base](https://huggingface.co/UBC-NLP/AraT5-tweet-base) | | **AraT5-msa-small** | [https://huggingface.co/UBC-NLP/AraT5-msa-small](https://huggingface.co/UBC-NLP/AraT5-msa-small) | | **AraT5-tweet-small**| [https://huggingface.co/UBC-NLP/AraT5-tweet-small](https://huggingface.co/UBC-NLP/AraT5-tweet-small) | # BibTex If you use our models for your scientific publication, or if you find the resources in this repository useful, please cite our papers as follows: **(AraT5-base, AraT5-msa-base, AraT5-tweet-base, AraT5-msa-small, or AraT5-tweet-small)** ```bibtex @inproceedings{nagoudi2022_arat5, @inproceedings{nagoudi-etal-2022-arat5, title = "{A}ra{T}5: Text-to-Text Transformers for {A}rabic Language Generation", author = "Nagoudi, El Moatez Billah and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.47", pages = "628--647", abstract = "Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach. Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data. To investigate this question, we apply mT5 on a language with a wide variety of dialects{--}Arabic. For evaluation, we introduce a novel benchmark for ARabic language GENeration (ARGEN), covering seven important tasks. For model comparison, we pre-train three powerful Arabic T5-style models and evaluate them on ARGEN. Although pre-trained with {\textasciitilde}49 less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs. Our models also establish new SOTA on the recently-proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021). Our new models are publicly available. We also link to ARGEN datasets through our repository: https://github.com/UBC-NLP/araT5.", } ``` **AraT5v2-base-1024** ``` bibtex @inproceedings{elmadany-etal-2023-octopus, title = "Octopus: A Multitask Model and Toolkit for {A}rabic Natural Language Generation", author = "Elmadany, AbdelRahim and Nagoudi, El Moatez Billah and Abdul-Mageed, Muhammad", booktitle = "Proceedings of ArabicNLP 2023", month = dec, year = "2023", address = "Singapore (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.arabicnlp-1.20", doi = "10.18653/v1/2023.arabicnlp-1.20", pages = "232--243", } ``` ## Acknowledgments We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
stabilityai/stable-diffusion-2-1-unclip-small
stabilityai
"2023-04-12T15:49:28Z"
980
32
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "arxiv:2112.10752", "arxiv:1910.09700", "license:openrail++", "diffusers:StableUnCLIPImg2ImgPipeline", "region:us" ]
text-to-image
"2023-03-24T16:35:48Z"
--- license: openrail++ tags: - stable-diffusion - text-to-image pinned: true --- # Stable Diffusion v2-1-unclip (small) Model Card This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion). This `stable-diffusion-2-1-unclip-small` is a finetuned version of Stable Diffusion 2.1, modified to accept (noisy) CLIP image embedding in addition to the text prompt, and can be used to create image variations (Examples) or can be chained with text-to-image CLIP priors. The amount of noise added to the image embedding can be specified via the noise_level (0 means no noise, 1000 full noise). - Use it with 🧨 [`diffusers`](#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion UnCLIP 2-1-small in a simple and efficient manner. ```bash pip install diffusers transformers accelerate scipy safetensors ``` Running the pipeline (if you don't swap the scheduler it will run with the default DDIM, in this example we are swapping it to DPMSolverMultistepScheduler): ```python from diffusers import DiffusionPipeline from diffusers.utils import load_image import torch pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16) pipe.to("cuda") # get image url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" image = load_image(url) # run image variation image = pipe(image).images[0] ``` ![img](./image.png) # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NFSW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 200000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq. ## Citation @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
CHE-72/Qwen2-7B-Instruct-Q3_K_M-GGUF
CHE-72
"2024-06-21T18:53:21Z"
980
0
null
[ "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-21T18:53:00Z"
--- base_model: Qwen/Qwen2-7B-Instruct language: - en license: apache-2.0 pipeline_tag: text-generation tags: - chat - llama-cpp - gguf-my-repo --- # CHE-72/Qwen2-7B-Instruct-Q3_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen2-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-7B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-7B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_M-GGUF --hf-file qwen2-7b-instruct-q3_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_M-GGUF --hf-file qwen2-7b-instruct-q3_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_M-GGUF --hf-file qwen2-7b-instruct-q3_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_M-GGUF --hf-file qwen2-7b-instruct-q3_k_m.gguf -c 2048 ```
ab-ai/pii_model_based_on_distilbert
ab-ai
"2024-03-21T12:53:17Z"
979
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "pii-detection", "en", "dataset:custom_dataset", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-03-15T19:22:28Z"
--- language: en tags: - token-classification - pii-detection license: apache-2.0 datasets: - custom_dataset --- # Model Name PII Detection Model Based on DistilBERT ## Model description This model is a token classification model trained for detecting personally identifiable information (PII) entities such as names, addresses, dates of birth, credit card numbers, etc. The model is based on the DistilBERT architecture and has been fine-tuned on a custom dataset for PII detection. ## Intended use The model is intended to be used for automatically identifying and extracting PII entities from text data. It can be incorporated into data processing pipelines for tasks such as data anonymization, redaction, compliance with privacy regulations, etc. ## Evaluation results The model's performance was evaluated on a held-out validation set using the following metrics: - Precision: 94% - Recall: 96% - F1 Score: 95% - Accuracy: 99% ## Limitations and bias - The model's performance may vary depending on the quality and diversity of the input data. - It may exhibit biases present in the training data, such as overrepresentation or underrepresentation of certain demographic groups or types of PII. - The model may struggle with detecting PII entities in noisy or poorly formatted text. ## Ethical considerations - Care should be taken when deploying the model in production to ensure that it does not inadvertently expose sensitive information or violate individuals' privacy rights. - Data used to train and evaluate the model should be handled with caution to avoid the risk of exposing PII. - Regular monitoring and auditing of the model's predictions may be necessary to identify and mitigate any potential biases or errors. ## Model Training and Evaluation Results | Epoch | Training Loss | Validation Loss | Precision | Recall | F1 Score | Accuracy | |-------|---------------|-----------------|-----------|--------|----------|----------| | 1 | 0.047 | 0.051537 | 91.35% | 95.23% | 93.25% | 98.56% | | 2 | 0.0307 | 0.043873 | 93.27% | 96.10% | 94.66% | 98.75% | | 3 | 0.0208 | 0.04702 | 91.83% | 95.49% | 93.62% | 98.54% | | 4 | 0.0147 | 0.046979 | 93.27% | 94.97% | 94.11% | 98.77% | | 5 | 0.0094 | 0.057863 | 93.41% | 95.92% | 94.65% | 98.70% | ## Authors - [email protected]
MaziyarPanahi/Multi_verse_modelM7-7B-GGUF
MaziyarPanahi
"2024-04-01T04:28:38Z"
979
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:liminerity/M7-7b", "base_model:ammarali32/multi_verse_model", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/Multi_verse_modelM7-7B" ]
text-generation
"2024-04-01T04:05:29Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - arxiv:2311.03099 - arxiv:2306.01708 - base_model:liminerity/M7-7b - base_model:ammarali32/multi_verse_model - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Multi_verse_modelM7-7B-GGUF base_model: automerger/Multi_verse_modelM7-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Multi_verse_modelM7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Multi_verse_modelM7-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/Multi_verse_modelM7-7B](https://huggingface.co/automerger/Multi_verse_modelM7-7B) ## Description [MaziyarPanahi/Multi_verse_modelM7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Multi_verse_modelM7-7B-GGUF) contains GGUF format model files for [automerger/Multi_verse_modelM7-7B](https://huggingface.co/automerger/Multi_verse_modelM7-7B). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/Multi_verse_modelM7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Multi_verse_modelM7-7B-GGUF) and below it, a specific filename to download, such as: Multi_verse_modelM7-7B-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/Multi_verse_modelM7-7B-GGUF Multi_verse_modelM7-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/Multi_verse_modelM7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Multi_verse_modelM7-7B-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Multi_verse_modelM7-7B-GGUF Multi_verse_modelM7-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Multi_verse_modelM7-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Multi_verse_modelM7-7B.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Multi_verse_modelM7-7B.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M_imat_GGUF
BLURPLETESTS
"2024-07-01T06:38:45Z"
979
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:Sao10K/L3-8B-Chara-v1-Alpha", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-07-01T06:38:19Z"
--- base_model: Sao10K/L3-8B-Chara-v1-Alpha language: - en license: cc-by-nc-4.0 tags: - llama-cpp - gguf-my-repo --- # BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF This model was converted to GGUF format from [`Sao10K/L3-8B-Chara-v1-Alpha`](https://huggingface.co/Sao10K/L3-8B-Chara-v1-Alpha) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Sao10K/L3-8B-Chara-v1-Alpha) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF --hf-file l3-8b-chara-v1-alpha-q5_k_m-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF --hf-file l3-8b-chara-v1-alpha-q5_k_m-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF --hf-file l3-8b-chara-v1-alpha-q5_k_m-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF --hf-file l3-8b-chara-v1-alpha-q5_k_m-imat.gguf -c 2048 ```
HamidRezaAttar/gpt2-product-description-generator
HamidRezaAttar
"2024-06-03T11:17:25Z"
978
14
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "arxiv:1706.03762", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:04Z"
--- language: en tags: - text-generation license: apache-2.0 widget: - text: "Maximize your bedroom space without sacrificing style with the storage bed." - text: "Handcrafted of solid acacia in weathered gray, our round Jozy drop-leaf dining table is a space-saving." - text: "Our plush and luxurious Emmett modular sofa brings custom comfort to your living space." --- ## GPT2-Home This model is fine-tuned using GPT-2 on amazon home products metadata. It can generate descriptions for your **home** products by getting a text prompt. https://github.com/HamidRezaAttar/GPT2-Home ### Model description [GPT-2](https://openai.com/blog/better-language-models/) is a large [transformer](https://arxiv.org/abs/1706.03762)-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data. ### Live Demo For testing model with special configuration, please visit [Demo](https://huggingface.co/spaces/HamidRezaAttar/gpt2-home) ### Blog Post For more detailed information about project development please refer to my [blog post](https://hamidrezaattar.github.io/blog/markdown/2022/02/17/gpt2-home.html). ### How to use For best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my [GitHub](https://github.com/HamidRezaAttar/GPT2-Home) You can use this model directly with a pipeline for text generation. ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline >>> tokenizer = AutoTokenizer.from_pretrained("HamidRezaAttar/gpt2-product-description-generator") >>> model = AutoModelForCausalLM.from_pretrained("HamidRezaAttar/gpt2-product-description-generator") >>> generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100}) >>> generated_text = generator("This bed is very comfortable.") ``` ### Citation info ```bibtex @misc{GPT2-Home, author = {HamidReza Fatollah Zadeh Attar}, title = {GPT2-Home the English home product description generator}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/HamidRezaAttar/GPT2-Home}}, } ```
imvladikon/general_character_bert
imvladikon
"2022-01-30T11:35:11Z"
978
3
transformers
[ "transformers", "pytorch", "bert", "language model", "en", "dataset:wikipedia", "dataset:openwebtext", "arxiv:2010.10392", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: - en tags: - language model datasets: - wikipedia - openwebtext --- Pretrained general_character_bert model from the ['CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters' El Boukkouri H., et al., 2020](https://github.com/helboukkouri/character-bert) ``` @inproceedings{el-boukkouri-etal-2020-characterbert, title = "{C}haracter{BERT}: Reconciling {ELM}o and {BERT} for Word-Level Open-Vocabulary Representations From Characters", author = "El Boukkouri, Hicham and Ferret, Olivier and Lavergne, Thomas and Noji, Hiroshi and Zweigenbaum, Pierre and Tsujii, Jun{'}ichi", booktitle = "Proceedings of the 28th International Conference on Computational Linguistics", month = dec, year={2020}, eprint={2010.10392}, archivePrefix={arXiv}, address = "Barcelona, Spain (Online)", publisher = "International Committee on Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.coling-main.609", doi = "10.18653/v1/2020.coling-main.609", pages = "6903--6915", abstract = "Due to the compelling improvements brought by BERT, many recent representation models adopted the Transformer architecture as their main building block, consequently inheriting the wordpiece tokenization system despite it not being intrinsically linked to the notion of Transformers. While this system is thought to achieve a good balance between the flexibility of characters and the efficiency of full words, using predefined wordpiece vocabularies from the general domain is not always suitable, especially when building models for specialized domains (e.g., the medical domain). Moreover, adopting a wordpiece tokenization shifts the focus from the word level to the subword level, making the models conceptually more complex and arguably less convenient in practice. For these reasons, we propose CharacterBERT, a new variant of BERT that drops the wordpiece system altogether and uses a Character-CNN module instead to represent entire words by consulting their characters. We show that this new model improves the performance of BERT on a variety of medical domain tasks while at the same time producing robust, word-level, and open-vocabulary representations.", } ```
timm/resnet101d.ra2_in1k
timm
"2024-02-10T23:40:02Z"
978
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2110.00476", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-05T18:25:53Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm --- # Model card for resnet101d.ra2_in1k A ResNet-D image classification model. This model features: * ReLU activations * 3-layer stem of 3x3 convolutions with pooling * 2x2 average pool + 1x1 convolution shortcut downsample Trained on ImageNet-1k in `timm` using recipe template described below. Recipe details: * RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476). * RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging * Step (exponential decay w/ staircase) LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 44.6 - GMACs: 10.5 - Activations (M): 22.3 - Image size: train = 256 x 256, test = 320 x 320 - **Papers:** - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476 - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187 - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('resnet101d.ra2_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnet101d.ra2_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 128, 128]) # torch.Size([1, 256, 64, 64]) # torch.Size([1, 512, 32, 32]) # torch.Size([1, 1024, 16, 16]) # torch.Size([1, 2048, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnet101d.ra2_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @article{He2018BagOT, title={Bag of Tricks for Image Classification with Convolutional Neural Networks}, author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li}, journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2018}, pages={558-567} } ```
predibase/gsm8k
predibase
"2024-02-21T19:13:55Z"
978
2
peft
[ "peft", "safetensors", "text-generation", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
text-generation
"2024-02-19T19:10:41Z"
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 pipeline_tag: text-generation --- Description: Grade school math problems\ Original dataset: https://huggingface.co/datasets/gsm8k \ ---\ Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \ The adapter_category is STEM and the name is Grade School Math (gsm8k)\ ---\ Sample input: Please answer the following question: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week?\nAnswer:\ ---\ Sample output: He runs 3*3=<<3*3=9>>9 sprints a week He runs 60*9=<<60*9=540>>540 meters a week #### 540\ ---\ Try using this adapter yourself! ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mistral-7B-v0.1" peft_model_id = "predibase/gsm8k" model = AutoModelForCausalLM.from_pretrained(model_id) model.load_adapter(peft_model_id) ```
second-state/E5-Mistral-7B-Instruct-Embedding-GGUF
second-state
"2024-07-01T03:20:13Z"
978
5
transformers
[ "transformers", "gguf", "mistral", "feature-extraction", "en", "base_model:intfloat/e5-mistral-7b-instruct", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
"2024-03-14T06:24:50Z"
--- base_model: intfloat/e5-mistral-7b-instruct license: mit model_creator: intfloat quantized_by: Second State Inc. language: en --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # E5-Mistral-7B-Instruct-Embedding-GGUF ## Original Model [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) ## Run with LlamaEdge - LlamaEdge version: [v0.8.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.8.2) and above - Prompt template - Prompt type: `embedding` - Context size: `4096` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:e5-mistral-7b-instruct-Q5_K_M.gguf \ llama-api-server.wasm \ --prompt-template embedding \ --ctx-size 4096 \ --model-name e5-mistral-7b-instruct ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [e5-mistral-7b-instruct-Q2_K.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q2_K.gguf) | Q2_K | 2 | 2.72 GB| smallest, significant quality loss - not recommended for most purposes | | [e5-mistral-7b-instruct-Q3_K_L.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| small, substantial quality loss | | [e5-mistral-7b-instruct-Q3_K_M.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| very small, high quality loss | | [e5-mistral-7b-instruct-Q3_K_S.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| very small, high quality loss | | [e5-mistral-7b-instruct-Q4_0.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [e5-mistral-7b-instruct-Q4_K_M.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| medium, balanced quality - recommended | | [e5-mistral-7b-instruct-Q4_K_S.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| small, greater quality loss | | [e5-mistral-7b-instruct-Q5_0.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [e5-mistral-7b-instruct-Q5_K_M.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| large, very low quality loss - recommended | | [e5-mistral-7b-instruct-Q5_K_S.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| large, low quality loss - recommended | | [e5-mistral-7b-instruct-Q6_K.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q6_K.gguf) | Q6_K | 6 | 5.94 GB| very large, extremely low quality loss | | [e5-mistral-7b-instruct-Q8_0.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-Q8_0.gguf) | Q8_0 | 8 | 7.7 GB| very large, extremely low quality loss - not recommended | | [e5-mistral-7b-instruct-f16.gguf](https://huggingface.co/second-state/E5-Mistral-7B-Instruct-Embedding-GGUF/blob/main/e5-mistral-7b-instruct-f16.gguf) | f16 | 8 | 14.5 GB| very large, extremely low quality loss - not recommended | *Quantized with llama.cpp b2334*
ShleeSSU/Scoring_Korean_Narrative_Sentences
ShleeSSU
"2024-05-21T06:54:34Z"
978
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-21T06:53:29Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DiscoResearch/Llama3-German-8B-32k
DiscoResearch
"2024-05-29T11:36:19Z"
978
9
transformers
[ "transformers", "safetensors", "llama", "text-generation", "de", "arxiv:2404.10830", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-24T07:31:58Z"
--- license: llama3 language: - de library_name: transformers --- # Llama3-German-8B-32k (version 0.1) This version of the model refers to the long-context extension version described [below](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k#long-context-extension) Llama3-German-8B-v0.1 is a large language model based on [Meta's Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). It is specialized for the German language through continuous pretraining on 65 billion high-quality tokens, similar to previous [LeoLM](https://huggingface.co/LeoLM) or [Occiglot](https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01) models. Llama3 itself was trained on 15T tokens, of which only <1T were multilingual, resulting in suboptimal performance in German with reduced linguistic capabilities and frequent grammatical errors, motivating the necessity for continued pretraining. Benchmark results on our model show minimal degradation in English performance, despite the absence of replay during training. Importantly, Llama3-German-8B-v0.1 demonstrates strong improvements in German, particularly on the Hellaswag benchmark, which measures linguistic understanding and general reasoning. [DiscoResearch/Llama3-German-8B-v0.1](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729) is the result of a joint effort between [DiscoResearch](https://huggingface.co/DiscoResearch) and [Occiglot](https://huggingface.co/occiglot) with support from the [DFKI](https://www.dfki.de/web/) (German Research Center for Artificial Intelligence) and [hessian.Ai](https://hessian.ai). Occiglot kindly handled data preprocessing, filtering, and deduplication as part of their latest [dataset release](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5), as well as sharing their compute allocation at hessian.Ai's 42 Supercomputer. ## How to use This is a base model and should probably be subject to finetuning before use. See our [collection](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729) for various finetuned and long-context versions. ## Model Training and Hyperparameters The model was trained on 128 GPUs on [hessian.Ai 42](hessian.ai) for ~60 hours. See detailed hyperparameters below. | Parameter | Value | |-------------------|-----------------------------------| | Sequence Length | 8192 tokens | | Learning Rate | 1.5e-5 to 1.5e-6 (cosine schedule)| | Batch Size | 4194304 (512*8192) tokens | | Micro Batch Size | 4*8192 tokens | | Training Steps | 15500 | | Warmup Steps | 155 (1%) | | Weight Decay | 0.05 | | Optimizer | AdamW | ## Data Collection and Preprocessing For pre-training, we used 65B German tokens from the [occiglot-fineweb-0.5](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5) dataset. The data comprises multiple curated datasets from [LLM-Datasets](https://github.com/malteos/llm-datasets) as well as 12 [Common-Crawl](https://commoncrawl.org) releases that were processed with [OSCAR's Ungoliant pipeline](https://github.com/oscar-project/ungoliant). All data was further filtered with a set of language-specific filters based on [Huggingface's fine-web](https://github.com/huggingface/datatrove/blob/main/examples/fineweb.py) and globally deduplicated. For more information please refer to the [dataset card](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5) and corresponding [blog-post](https://occiglot.eu/posts/occiglot-fineweb/). ## Evaluation and Results We evaluated the model using a suite of common English Benchmarks and their German counterparts with [GermanBench](https://github.com/bjoernpl/GermanBenchmark). The following figure shows the benchmark results in comparison to the base model [meta-llama/Meta-Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) and two different hyperparameter configurations. We swept different learning rates to identify a well-working setup. The final released model is the 1.5e-5 lr version. ![alt text](base_model_evals.png) Find the detailed benchmark scores for the base and long-context models in this table. | Model | truthful_qa_de | truthfulqa_mc | arc_challenge | arc_challenge_de | hellaswag | hellaswag_de | MMLU | MMLU-DE | mean | |--------------------------------------|----------------|---------------|---------------|------------------|-----------|--------------|--------|---------|------------| | DiscoResearch/Llama3-German-8B | **0.49499** | 0.44838 | 0.55802 | **0.49829** | 0.79924 | **0.65395** | 0.62240| **0.54413** | **0.57743** | | DiscoResearch/Llama3-German-8B-32k | 0.48920 | **0.45138** | 0.54437 | 0.49232 | 0.79078 | 0.64310 | 0.58774| 0.47971 | 0.55982 | | meta-llama/Meta-Llama-3-8B-Instruct | 0.47498 | 0.43923 | **0.59642** | 0.47952 | **0.82025**| 0.60008 | **0.66658**| 0.53541 | 0.57656 | ## Long-Context Extension In addition to the base model, we release a long-context version of Llama3-German-8B ([DiscoResearch/Llama3-German-8B-32k](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k) capable of processing context lengths up to 65k tokens. This variant was trained on an additional 100 million tokens at 32k context length, using a rope_theta value of `1.5e6` and a learning rate of `1.5e-5` with a batch size of `256*8192` tokens and otherwise equal hyperparameters to the base model. ## Instruction Tuning We also provide an instruction-tuned version: [DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1), utilizing the DiscoLM German dataset for fine-tuning (also available as a long-context model at [DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1)). Find more details in the respective model cards. Also check out our experimental merge ([DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental)) between [meta-llama/Meta-Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and our finetuned model in an attempt to keep the extraordinary capabilities of Llama3-Instruct and add exceptional German skills. ## Document Packing We employed a more intelligent document packing strategy based on the ["Fewer Truncations Improve Language Modeling" paper by Ding et al.](https://arxiv.org/abs/2404.10830v2), using the first-fit-decreasing algorithm to pack documents into batches without truncation. We packed our data in chunks of 10000 documents for more efficient processing while maintaining >99% packing efficiency. Documents longer than the sequence length are split into chunks of sequence length. This approach results in overall higher benchmark scores when training on the same data with equal hyperparameters. The following numbers are from initial experiments with `3e-5 lr` and 12k steps and show improvements comparable to those shown in the original paper. | Task | Naive Packing | Fewer Truncations Packing | Percentage Increase | |-------------------|---------------|---------------------------|---------------------| | truthfulqa_mc | 0.452648 | 0.467687 | 3.32% | | arc_challenge | 0.517918 | 0.528157 | 1.98% | | truthful_qa_de | 0.485529 | 0.492979 | 1.53% | | arc_challenge_de | 0.480375 | 0.493174 | 2.66% | | hellaswag | 0.776041 | 0.773352 | -0.35% | | hellaswag_de | 0.655248 | 0.653356 | -0.29% | | MMLU | 0.573719 | 0.579802 | 1.06% | | MMLU-DE | 0.504509 | 0.503863 | -0.13% | The following is our simple implementation of the first-fit-decreasing algorithm described in the paper. ```python def pack_documents(tokenized_documents): # Sort documents by their length in descending order sorted_docs = sorted(tokenized_documents, key=len, reverse=True) # Initialize bins bins = [] # Function to find the first bin that can accommodate the document def find_bin(doc): for b in bins: if sum(len(d) for d in b) + len(doc) <= 8192: return b return None # Place each document in the first available bin or create a new bin for doc in sorted_docs: target_bin = find_bin(doc) if target_bin is not None: target_bin.append(doc) else: # Create a new bin with this document if no suitable bin is found bins.append([doc]) # Return results return bins ``` ## Model Configurations We release DiscoLeo-8B in the following configurations: 1. [Base model with continued pretraining](https://huggingface.co/DiscoResearch/Llama3-German-8B) 2. [Long-context version (32k context length)](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k) 3. [Instruction-tuned version of the base model](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1) 4. [Instruction-tuned version of the long-context model](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1) 5. [Experimental `DARE-TIES` Merge with Llama3-Instruct](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental) 6. [Collection of Quantized versions](https://huggingface.co/collections/DiscoResearch/discoleo-8b-quants-6651bcf8f72c9a37ce485d42) ## How to use: Here's how to use the model with transformers: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch device="cuda" model = AutoModelForCausalLM.from_pretrained( "DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1") prompt = "Schreibe ein Essay über die Bedeutung der Energiewende für Deutschlands Wirtschaft" messages = [ {"role": "system", "content": "Du bist ein hilfreicher Assistent."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Acknowledgements The model was trained and evaluated by [Björn Plüster](https://huggingface.co/bjoernp) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)) with data preparation and project supervision by [Manuel Brack](http://manuel-brack.eu) ([DFKI](https://www.dfki.de/web/), [TU-Darmstadt](https://www.tu-darmstadt.de/)). Initial work on dataset collection and curation was performed by [Malte Ostendorff](https://ostendorff.org) and [Pedro Ortiz Suarez](https://portizs.eu). Instruction tuning was done with the DiscoLM German dataset created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)). We extend our gratitude to [LAION](https://laion.ai/) and friends, especially [Christoph Schuhmann](https://entwickler.de/experten/christoph-schuhmann) and [Jenia Jitsev](https://huggingface.co/JJitsev), for initiating this collaboration. The model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/) which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Art (HMWK)](https://wissenschaft.hessen.de) & the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)). The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html) through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D).
SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_SLERP
SicariusSicariiStuff
"2024-06-10T01:25:36Z"
978
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-09T21:10:58Z"
--- language: - en license: apache-2.0 --- <div align="center"> <b style="font-size: 40px;">Zion_Alpha_Instruction_Tuned_SLERP</b> </div> <img src="https://i.imgur.com/e1LEQ18.png" alt="Zion_Alpha_Instruction_Tuned_SLERP" style="width: 50%; min-width: 400px; display: block; margin: auto;"> # Model Details Zion_Alpha is the first **REAL** Hebrew model in the world. This version WAS fine tuned for tasks. I did the finetune using SOTA techniques and using my insights from years of underwater basket weaving. If you wanna offer me a job, just add me on Facebook. # Another world record broken by Zion_Alpha! On **June 10th, 2024**, this model achieved the **highest sentiment analysis score in the world** for Hebrew LLMs, with an impressive **70.3**, surpassing even a **35B** model that's five times its size! <div align="center"> <img src="https://i.imgur.com/yg6CJoz.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> </div> # Future Plans My previous LLM, Zion_Alpha, set a world record on Hugging Face by achieving the highest SNLI score for Hebrew open LLMs at 84.05. The current model, a SLERP merge, achieved a lower SNLI score but still surprised everyone by securing the highest sentiment analysis score of 70.3. This demonstrates significant untapped potential in optimizing the training process, showing that 7B models can deliver far more performance in Hebrew than previously thought possible. This will be my last Hebrew model for a while, as I have other adventures to pursue. # Looking for Sponsors Since all my work is done on-premises, I am constrained by my current hardware. I would greatly appreciate any support in acquiring an A6000, which would enable me to train significantly larger models much faster. # Papers? Maybe. We'll see. No promises here 🤓 # Contact Details I'm not great at self-marketing (to say the least) and don't have any social media accounts. If you'd like to reach out to me, you can email me at [email protected]. Please note that this email might receive more messages than I can handle, so I apologize in advance if I can't respond to everyone. # Versions and QUANTS - Base model: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha) - Instruction tuned: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned) | [GGUF](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF) # Model architecture Based on Mistral 7B. I didn't even bother to alter the tokenizer. # The recommended prompt setting is Debug-deterministic: ``` temperature: 1 top_p: 1 top_k: 1 typical_p: 1 min_p: 1 repetition_penalty: 1 ``` # The recommended instruction template is Mistral: ``` {%- for message in messages %} {%- if message['role'] == 'system' -%} {{- message['content'] -}} {%- else -%} {%- if message['role'] == 'user' -%} {{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}} {%- else -%} {{-'' + message['content'] + '</s>' -}} {%- endif -%} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{-''-}} {%- endif -%} ``` # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha English to Hebrew example</b> </div> <img src="https://i.imgur.com/JnTuawF.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> # English to hebrew example: <div align="center"> <b style="font-size: 40px;">Zion_Alpha Hebrew to English example</b> </div> <img src="https://i.imgur.com/Wm2igLJ.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;"> <div align="center"> <b style="font-size: 30px;">Unscripted video: live zero shot demonstration at story writing capabilities in Hebrew</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=YYKeovnS0do) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=DQFtx8M2txc) </div> <div align="center"> <b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time Long text translation</b> [![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=w5fz3Ot6tH8) </div> ### History The model was originally trained about 2 month after Mistral (v0.1) was released. As of 04 June 2024, Zion_Alpha got the **Highest SNLI score in the world** among open source models in Hebrew, surpassing most of the models by a huge margin. (**84.05** score) <img src="https://i.imgur.com/7HokS5w.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;"> ### Support <img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;"> - [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻 - [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻