modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, 1-4.05k values) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
timm/gernet_s.idstcv_in1k | timm | 2024-02-10T23:34:39Z | 746 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2006.14090",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-03-22T07:16:04Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for gernet_s.idstcv_in1k
A GENet (GPU-Efficient-Networks) image classification model. Trained on ImageNet-1k by paper authors.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures (a brief usage sketch follows the list), including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
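As an illustrative sketch only (not part of the original card), a few of these features are typically enabled through `timm`'s `create_model` arguments and model methods; the specific keyword values below are assumptions chosen for demonstration:
```python
import timm

# Stochastic depth via drop_path_rate; gradient checkpointing via the model API.
model = timm.create_model(
    'gernet_s.idstcv_in1k',
    pretrained=True,
    drop_path_rate=0.1,  # stochastic depth rate (illustrative value)
)
model.set_grad_checkpointing()  # enable gradient checkpointing

# Per-stage feature extraction restricted to a subset of stages.
backbone = timm.create_model(
    'gernet_s.idstcv_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(2, 3, 4),  # keep only the last three feature maps
)
```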
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 8.2
- GMACs: 0.7
- Activations (M): 2.7
- Image size: 224 x 224
- **Papers:**
- Neural Architecture Design for GPU-Efficient Networks: https://arxiv.org/abs/2006.14090
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/idstcv/GPU-Efficient-Networks
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('gernet_s.idstcv_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'gernet_s.idstcv_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 13, 112, 112])
# torch.Size([1, 48, 56, 56])
# torch.Size([1, 48, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 1920, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'gernet_s.idstcv_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1920, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@misc{lin2020neural,
title={Neural Architecture Design for GPU-Efficient Networks},
author={Ming Lin and Hesen Chen and Xiuyu Sun and Qi Qian and Hao Li and Rong Jin},
year={2020},
eprint={2006.14090},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
Uminosachi/dreamshaper_8Inpainting | Uminosachi | 2023-08-03T01:13:11Z | 746 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-03T00:52:21Z | ---
license: creativeml-openrail-m
---
This is an inpainting model, converted from [dreamshaper_8Inpainting](https://civitai.com/models/4384?modelVersionId=131004) on Civitai. A minimal usage sketch follows.
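The card itself ships no usage code, so the snippet below is only a hedged sketch: it assumes the checkpoint loads with diffusers' standard `StableDiffusionInpaintPipeline`, and the image and mask paths are placeholders.
```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Assumption: the repo loads as a standard Stable Diffusion inpainting pipeline.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Uminosachi/dreamshaper_8Inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.png").convert("RGB")  # placeholder input image
mask_image = Image.open("mask.png").convert("L")     # white = region to repaint (placeholder)

result = pipe(
    prompt="a cozy cabin in a snowy forest, photorealistic",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```
|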
pleisto/yuren-13b-chatml | pleisto | 2024-03-05T18:08:50Z | 746 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"zh",
"en",
"dataset:bigcode/the-stack",
"dataset:mc4",
"dataset:pleisto/wikipedia-cn-20230720-filtered",
"dataset:gsm8k",
"dataset:OpenAssistant/oasst1",
"dataset:b-mc2/sql-create-context",
"dataset:niv0",
"dataset:BAAI/COIG",
"dataset:wenhu/TheoremQA",
"dataset:zjunlp/KnowLM-IE",
"arxiv:2009.03300",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-25T13:55:26Z | ---
language:
- zh
- en
license: llama2
library_name: transformers
tags:
- llama2
datasets:
- bigcode/the-stack
- mc4
- pleisto/wikipedia-cn-20230720-filtered
- gsm8k
- OpenAssistant/oasst1
- b-mc2/sql-create-context
- niv0
- BAAI/COIG
- wenhu/TheoremQA
- zjunlp/KnowLM-IE
model-index:
- name: yuren-13b-chatml
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 53.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pleisto/yuren-13b-chatml
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 78.03
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pleisto/yuren-13b-chatml
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pleisto/yuren-13b-chatml
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.32
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pleisto/yuren-13b-chatml
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pleisto/yuren-13b-chatml
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pleisto/yuren-13b-chatml
name: Open LLM Leaderboard
---
# Yuren-13B (羽人13B)
[Github](https://github.com/pleisto/yuren-13b)
Yuren 13B is a large language model continuously pretrained from Llama 2 13B. It focuses on **information synthesis** and builds on Pleisto's data-centric work; at an equivalent parameter count, it reaches state-of-the-art results in data-synthesis scenarios such as multilingual information extraction, natural-language-to-SQL generation, and structured data output.
羽人 13B 是在 Llama 2 13B 基础上进行持续训练的大语言模型,**聚焦于信息合成领域**并建立在 Pleisto 以数据为中心的工作上。该模型在以中英文为主的多种语言的信息抽取、自然语言生成 SQL、结构化数据输出等数据合成类场景下实现了同等参数量下的 SOTA 水平。
## Quick Start/快速开始
```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch
device = torch.device("cuda")
model = LlamaForCausalLM.from_pretrained(
"pleisto/yuren-13b-chatml", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("pleisto/yuren-13b-chatml", use_fast=False)
system_prompt = "You are an AI model capable of translating natural language queries into SQL statements. Based on the following table schema and the subsequent user query, generate the appropriate SQL statement.\nTable schema: CREATE TABLE table_name_86 (name VARCHAR, score VARCHAR, song_type VARCHAR)\nScoreTypeEnum: [\"folk\",\"rock\",\"other\"]"
query = "8分以上的民谣有哪些?"
inputs = f"<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{query}<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer(inputs, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(
input_ids,
max_new_tokens=4096,
do_sample=True,
top_p=1.0,
temperature=0.32,
eos_token_id=36845,
)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
# <s> <|im_start|> system
# You are an AI model capable of translating natural language queries into SQL statements. Based on the following table schema and the subsequent user query, generate the appropriate SQL statement.
# Table schema: CREATE TABLE table_name_86 (name VARCHAR, score VARCHAR, song_type VARCHAR)
# ScoreTypeEnum: ["folk","rock","other"] <|im_end|> <|im_start|> user
# 8分以上的民谣有哪些? <|im_end|> <|im_start|> assistant
# SELECT name FROM table_name_86 WHERE score > 8 AND song_type = "folk" <|im_end|>
```
## Example Dialogue/示例对话
### Text2SQL/自然语言转SQL查询
| System Prompt | You are an AI model capable of translating natural language queries into MySQL statements. Based on the following table schema and the subsequent user query, generate the appropriate SQL statement.\nTable schema: CREATE TABLE \`comments\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`message\_id\` int(10) unsigned NOT NULL, \`user\_id\` int(10) unsigned NOT NULL, \`text\_encrypted\` blob NOT NULL, \`comment\_secret\` varchar(255) NOT NULL, \`private\_to\_user\` int(10) unsigned DEFAULT NULL, \`time\_inserted\` int(10) unsigned NOT NULL, \`deleted\` tinyint(1) unsigned NOT NULL DEFAULT '0', PRIMARY KEY (\`id\`), KEY \`selection\` (\`message\_id\`,\`time\_inserted\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`connections\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`from\_user\` int(10) unsigned NOT NULL, \`type\` enum('friend','block') NOT NULL, \`to\_user\` int(10) unsigned NOT NULL, \`time\_inserted\` int(10) unsigned NOT NULL, PRIMARY KEY (\`id\`), UNIQUE KEY \`combination\` (\`from\_user\`,\`to\_user\`), KEY \`selection\` (\`to\_user\`,\`type\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`favorites\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`user\_id\` int(10) unsigned NOT NULL, \`message\_id\` int(10) unsigned NOT NULL, \`degree\` int(10) unsigned NOT NULL, \`time\_added\` int(10) unsigned NOT NULL, PRIMARY KEY (\`id\`), UNIQUE KEY \`combination\` (\`user\_id\`,\`message\_id\`), KEY \`selection\` (\`user\_id\`,\`time\_added\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`feeds\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`user\_id\` int(10) unsigned NOT NULL, \`message\_id\` int(10) unsigned NOT NULL, \`degree\` int(10) unsigned NOT NULL, PRIMARY KEY (\`id\`), UNIQUE KEY \`combination\` (\`user\_id\`,\`message\_id\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`ids\_in\_threads\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`content\_type\` enum('message','comment') NOT NULL, \`content\_id\` int(10) unsigned NOT NULL, \`private\_id\` int(10) unsigned NOT NULL, \`public\_id\` int(10) unsigned NOT NULL, PRIMARY KEY (\`id\`), UNIQUE KEY \`combination\` (\`content\_type\`,\`content\_id\`,\`private\_id\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`messages\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`user\_id\` int(10) unsigned NOT NULL, \`color\_hex\` varchar(7) NOT NULL, \`pattern\_id\` int(10) unsigned NOT NULL, \`text\_encrypted\` blob NOT NULL, \`message\_secret\` varchar(255) NOT NULL, \`favorites\_count\` int(10) unsigned NOT NULL DEFAULT '0', \`comments\_count\` int(10) unsigned NOT NULL DEFAULT '0', \`time\_published\` int(10) unsigned NOT NULL, \`time\_active\` int(10) unsigned NOT NULL DEFAULT '2147483647', \`language\_iso3\` varchar(3) DEFAULT NULL, \`country\_iso3\` varchar(3) DEFAULT NULL, \`geo\_lat\` float DEFAULT NULL, \`geo\_long\` float DEFAULT NULL, \`topic\` enum('','politics','art','business','work','culture','health','science','sports','technology','sex','dating','beauty','books','movies','music','family','food','life','love','confessions','dreams','fantasy','friendship','funny','games','hobbies','money','party','philosophy','quotes','school','stories','studies','travel','meta') NOT NULL DEFAULT '', \`score\` decimal(8,6) unsigned NOT NULL DEFAULT '0.000000', \`dispatched\` tinyint(1) unsigned NOT NULL DEFAULT '0', \`deleted\` tinyint(1) unsigned NOT NULL DEFAULT '0', PRIMARY KEY (\`id\`), KEY \`time\_published\` (\`time\_published\`), KEY 
\`dispatcher\` (\`dispatched\`,\`time\_published\`), KEY \`popular\_by\_language\` (\`language\_iso3\`,\`score\`), KEY \`latest\_by\_language\` (\`language\_iso3\`,\`time\_published\`), KEY \`time\_active\` (\`time\_active\`), KEY \`latest\_by\_location\` (\`geo\_lat\`,\`geo\_long\`,\`time\_published\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`reports\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`user\_id\` int(10) unsigned NOT NULL, \`content\_type\` enum('message','comment') NOT NULL, \`content\_id\` int(10) unsigned NOT NULL, \`reason\` int(10) unsigned NOT NULL, \`weight\` tinyint(3) unsigned NOT NULL DEFAULT '0', \`time\_reported\` int(10) unsigned NOT NULL, PRIMARY KEY (\`id\`), UNIQUE KEY \`combination\` (\`user\_id\`,\`content\_type\`,\`content\_id\`), KEY \`selection\_by\_user\` (\`user\_id\`,\`time\_reported\`), KEY \`selection\_by\_content\` (\`content\_type\`,\`content\_id\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`subscriptions\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`message\_id\` int(10) unsigned NOT NULL, \`user\_id\` int(10) unsigned NOT NULL, \`degree\` int(10) unsigned NOT NULL DEFAULT '3', \`reasonForBan\` tinyint(1) unsigned NOT NULL DEFAULT '0', \`counter\` tinyint(3) unsigned NOT NULL DEFAULT '0', PRIMARY KEY (\`id\`), UNIQUE KEY \`combination\` (\`message\_id\`,\`user\_id\`), KEY \`selection\` (\`user\_id\`,\`counter\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`throttling\` ( \`username\` varchar(255) NOT NULL, \`date\_str\` char(8) NOT NULL, \`action\_type\` enum('failed\_login') NOT NULL, \`action\_count\` smallint(8) unsigned NOT NULL DEFAULT '1', PRIMARY KEY (\`username\`,\`date\_str\`,\`action\_type\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`users\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`username\` varchar(255) NOT NULL, \`password\` varchar(255) DEFAULT NULL, \`reported\_count\` tinyint(1) unsigned NOT NULL DEFAULT '0', \`write\_lock\_until\` int(10) unsigned NOT NULL DEFAULT '0', \`login\_throttled\_until\` int(10) unsigned NOT NULL DEFAULT '0', \`time\_last\_active\` int(10) unsigned DEFAULT NULL, \`time\_registered\` int(10) unsigned DEFAULT NULL, PRIMARY KEY (\`id\`), UNIQUE KEY \`username\` (\`username\`), KEY \`login\` (\`username\`(166),\`password\`(166)) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE \`verifications\` ( \`id\` int(10) unsigned NOT NULL AUTO\_INCREMENT, \`user\_id\` int(10) unsigned NOT NULL, \`new\_password\` varchar(255) NOT NULL, \`verification\_code\` varchar(255) NOT NULL, \`time\_created\` int(10) unsigned NOT NULL, \`time\_until\` int(10) unsigned NOT NULL, PRIMARY KEY (\`id\`), KEY \`selection\_by\_user\` (\`user\_id\`,\`time\_until\`), KEY \`selection\_by\_code\` (\`verification\_code\`,\`time\_until\`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; |
| --- | --- |
| User Prompt | 最新的一条报告的作者ID 和对应的报告原因是什么? |
| Yuren13b Output | `SELECT reports.user\_id, reports.reason FROM reports ORDER BY reports.time\_reported DESC LIMIT 1` |
### Structured Data Extraction/结构化数据抽取
| System Prompt | Extract meta data from a resume full text. User request according to the following JSON Schema: {"type":"object","properties":{"lawsuit":{"type":"string","description":"Resume full text."}},"required":\["lawsuit"\]} Translate it into JSON objects according to the following JSON Schema: {"type":"object","properties":{"name":{"type":"string"},"profession":{"type":"string"},"experienceYears":{"type":"number"},"skills":{"type":"array","items":{"type":"object","properties":{"name":{"type":"string"},"proficiency":{"type":"string"}},"required":\["name","proficiency"\],"additionalProperties":false}},"projectExperiences":{"type":"array","items":{"type":"object","properties":{"title":{"type":"string"},"role":{"type":"string"},"description":{"type":"string"}},"required":\["title","role","description"\],"additionalProperties":false}},"education":{"type":"object","properties":{"degree":{"type":"string"},"fieldOfStudy":{"type":"string"},"university":{"type":"string"},"year":{"type":"number"}},"required":\["degree","fieldOfStudy","university","year"\],"additionalProperties":false}},"required":\["name","profession","experienceYears","skills","projectExperiences","education"\]} |
| --- | --- |
| User Prompt | user request:{"lawsuit":\["\n我是张三,一名资深的软件工程师,拥有超过七年的前端开发经验。我热衷于构建出色的用户界面,熟练运用HTML、CSS和JavaScript,并精通React、Vue以及Angular等前端框架。我曾参与过多个大型 项目,负责设计和实现前端架构,确保网站的高性能和用户友好性。此外,我还具备项目管理的经验,能够带领团队按时交付高质量的成果。\n\n### 项目经历\n\n#### 1. 电商网站重构 (ABC 公司)\n- 负责参与了ABC公司旗下电商网站 的重构项目,担任前端技术负责人。\n- 使用React框架重建网站前端,实现了页面响应式设计和动态加载功能,提升了用户体验。\n- 优化前端性能,减少了页面加载时间,提高了网站整体性能。\n- 设计并实施了用户行为跟踪和分析系统,为市场营销团队提供了关键的数据支持。\n\n#### 2. 社交媒体应用开发 (XYZ 创业公司)\n- 领导一个四人的前端开发团队,从零开始开发了一款社交媒体应用。\n- 采用了Vue.js框架和Vuex进行状态管理,实现了实时聊天、帖子发布 和用户互动功能。\n- 集成了第三方登录和分享功能,提升了用户注册和活跃度。\n- 成功将应用推向市场,用户数量从零增长到五万以上。\n\n#### 3. 内部管理系统升级 (DEF 企业)\n- 负责升级公司内部管理系统,从传统的后端渲染转变为现代化的前后端分离架构。\n- 使用Angular框架开发新的前端界面,实现了快速的数据加载和交互功能。\n- 利用GraphQL优化了与后端的数据通信,减少了不必要的请求次数,提高了系统效率。\n- 通过培训和文档编写,帮助团队 成员顺利过渡到新的技术栈。\n\n### 技能和专业知识\n\n- 前端技术: HTML, CSS, JavaScript, React, Vue, Angular, Redux, GraphQL\n- 前端工具: Webpack, Babel, ESLint\n- 项目管理: Agile, Scrum, Jira\n\n### 教育背景\n\n- 学士学位,计算机科学,北京大学,2012年\n"\]} json object: |
| Yuren13b Output | `{ "name":"张三", "profession":"软件工程师", "experienceYears":7, "skills":\[ { "name":"HTML", "proficiency":"熟练" }, { "name":"CSS", "proficiency":"熟练" }, { "name":"JavaScript", "proficiency":"熟练" }, { "name":"React", "proficiency":"精通" }, { "name":"Vue", "proficiency":"精通" }, { "name":"Angular", "proficiency":"精通" } \], "projectExperiences":\[ { "title":"电商网站重构", "role":"前端技术负责人", "description":"负责参与了ABC公司旗下电商网站 的重构项目,担任前端技术负责人。使用React框架重建网站前端,实现了页面响应式设计和动态加载功能,提升了用户体验。优化前端性能,减少了页面加载时间,提高了网站整体性能。设计并实施了用户行为跟踪和分析系统,为市场营销团队提供了关键的数据支持。" }, { "title":"社交媒体应用开发", "role":"领导", "description":"领导一个四人的前端开发团队,从零开始开发了一款社交媒体应用。采用了Vue.js框架和Vuex进行状态管理,实现了实时聊天、帖子发布 和用户互动功能。集成了第三方登录和分享功能,提升了用户注册和活跃度。成功将应用推向市场,用户数量从零增长到五万以上。" }, { "title":"内部管理系统升级", "role":"负责人", "description":"负责升级公司内部管理系统,从传统的后端渲染转变为现代化的前后端分离架构。使用Angular框架开发新的前端界面,实现了快速的数据加载和交互功能。利用GraphQL优化了与后端的数据通信,减少了不必要的请求次数,提高了系统效率。通过培训和文档编写,帮助团队 成员顺利过渡到新的技术栈。" } \], "education":{ "degree":"学士", "fieldOfStudy":"计算机科学", "university":"北京大学", "year":2012 } }` |
### ReAct
| System Prompt | You are helpful AI assistant with functions invocation capability enabled. First I will give you the task description, and your task start. At each step, you need to give your thought to analyze the status now and what to do next, with a function call to actually excute your step. Your output should follow this format: Thought: Action Action Input: After the call, you will get the call result, and you are now in a new state. Then you will analyze your status now, then decide what to do next... After many (Thought-call) pairs, you finally perform the task, then you can give your finial answer. Remember: 1.the state change is irreversible, you can't go back to one of the former state, if you want to restart the task, say "I give up and restart". 2.All the thought is short, at most in 5 sentence. 3.You can do more then one trys, so if your plan is to continusly try some conditions, you can do one of the conditions per try. Let's Begin! Task description: You should use functions to help handle the real time user querys. Remember: 1.ALWAYS call "Finish" function at the end of the task. And the final answer should contain enough information to show to the user,If you can't handle the task, or you find that function calls always fail(the function is not valid now), use function Finish->give\_up\_and\_restart. 2.Do not use origin tool names, use only subfunctions' names. You have access of the following tools: 1.memeados: Generate custom image, gif and video memes. Specifically, you have access to the following APIs: \[{'name': 'drakelikehate\_for\_memeados', 'description': 'This is the subfunction for tool "memeados", you can use this tool.The description of this function is: "Generate Drake Likes and Hates meme"', 'parameters': {'type': 'object', 'properties': {'text2': {'type': 'string', 'description': '', 'example\_value': 'This text is liked.'}, 'text1': {'type': 'string', 'description': '', 'example\_value': 'This text is hated'}}, 'required': \['text2', 'text1'\], 'optional': \[\]}}, {'name': 'pet\_pet\_for\_memeados', 'description': 'This is the subfunction for tool "memeados", you can use this tool.The description of this function is: "Generate My pet\_pet\_for\_memeados meme GIF"', 'parameters': {'type': 'object', 'properties': {'image': {'type': 'string', 'description': '', 'example\_value': '[https://i.pravatar.cc/300](https://i.pravatar.cc/300)'}}, 'required': \['image'\], 'optional': \[\]}}, {'name': 'sponge\_bob\_for\_memeados', 'description': 'This is the subfunction for tool "memeados", you can use this tool.The description of this function is: "Generate Sponge Bob meme"', 'parameters': {'type': 'object', 'properties': {'text': {'type': 'string', 'description': '', 'example\_value': 'Example test'}}, 'required': \['text'\], 'optional': \[\]}}, {'name': 'google\_fake\_autocomplete\_for\_memeados', 'description': 'This is the subfunction for tool "memeados", you can use this tool.The description of this function is: "Generate Fake google autocomplete"', 'parameters': {'type': 'object', 'properties': {'text1': {'type': 'string', 'description': '', 'example\_value': 'Search'}, 'text3': {'type': 'string', 'description': '', 'example\_value': 'search autocomplete 2'}, 'text2': {'type': 'string', 'description': '', 'example\_value': 'search autocomplete 1'}}, 'required': \['text1', 'text3', 'text2'\], 'optional': \[\]}}, {'name': 'Finish', 'description': "If you think you get the result which can answer the task, call this function to give the final answer. 
Or, if you think you can't handle the task from this status, call this function to restart. Remember: you should ALWAYS call this function at the end of your try, and the final answer is the ONLY part that will be showed to user, so final answer should contain enough information.", 'parameters': {'type': 'object', 'properties': {'return\_type': {'type': 'string', 'enum': \['give\_answer', 'give\_up\_and\_restart'\]}, 'final\_answer': {'type': 'string', 'description': 'The final answer you want to give the user. You should have this field if "return\_type"=="give\_answer"'}}, 'required': \['return\_type'\]}}\] |
| --- | --- |
| User Prompt | 我需要为我的家庭聚会制作一个梗图。你能生成一张海绵宝宝的梗图,上面写着“家庭爱”吗?另外,我想用一张家庭照片作为输入图像,生成一张Pet Pet梗图的GIF。 |
| Yuren13b Output | Thought: Action: sponge\_bob\_for\_memeados Action Input: { "text": "家庭爱" } |
### Other/其他场景
| User Prompt | Yuren13b Output |
| --- | --- |
| 根据以下事件描述,请给出事件所发生的时间(如果没有提及时间则输出无时间) 3月9日,“20豫能化CP001”主承销商农行发布公告称,拟于3月12日召开河南能源化工集团有限公司“20豫能化CP001”2021年度第一次持有人会议,审议《关于同意调整本次债券持有人会议召开程序的议案》和《关于同意发行人先行支付存续期间全部利息、兑付50%本金,剩余本金展期1年,展期期间利率保持不变,到期一次性还本付息,并豁免本期债券违约的议案》。 资料显示,“20豫能化CP001”发行总额10亿元,期限1年,债券利率5.50%,主体评级为BB,发行日期为2020年3月19日,到期日为2021年3月23日。 | 2021年3月12日 |
| 请阅读文本,用几句话概括文中的重点。\n并以 {\"text\": str} 形式的json格式返回结果 9月 25 日,华为举办秋季全场景新品发布会。往年这场发布会上的重头戏都是其旗舰系列的产品更新,而在今年,华为反其道而行之。 先是在上个月就推出了Mate 60Pro、Mate 60Pro+和 Mate X5三款机型的“先锋计划”,直接在线上线下渠道面向消费者发售。而这次发布会之前,外界也对其是否真正在发布会上交代手机的具体参数、以及对其如何解决手机中的5G模块倍感关心。 不过,这次发布会上,华为并没有展开任何和Mate 60系列手机、芯片相关的任何细节,华为终端BG CEO余承东仅仅透露,目前华为正在对先锋计划的几款手机做加班加点的生产。 这次秋季发布会上的亮点是——高端品牌升级、星闪技术落地到产品、鸿蒙升级等等。 首先是,华为推出了「非凡大师」系列的全新高端品牌,还邀请了刘德华代言。这一品牌的前身是保时捷高端系列,此前华为与保时捷合作,借鉴了这一品牌车型的设计,讲这些经典范式复用到手机和手表等品类上。 华为方面解释,这一全新的系列将主要面向的各领域的精英人士,会主要搭载华为重点的设计工艺和技术创新。为了充实这一全新的系列,华为也发布了首款用纯手工打造的18K黄金手表,支持百米深潜和双向北斗卫星功能等的手表,售价高达 21999 元。 | {"text": "华为秋季发布会上,推出了「非凡大师」系列的全新高端品牌,邀请了刘德华代言。同时,华为还推出了一款18K黄金手表,售价高达21999元。"} |
| 给定的实体类型列表是\['组织机构', '地理位置', '人物'\]\n根据实体类型列表抽取,在这个句子中可能包含哪些实体?你可以先别出实体, 再判断实体类型。请以\"(实体,实体类型)\"的格式回答。\n日前,中科院所属科研院所百余名专家、教授摆摊设点,同淮安市一百多个市属工业、乡镇企业、个体私营企业的二千多人直接洽谈,场景十分火爆。 | 输入中包含的实体是: (中科院,组织机构),(淮安市,地理位置),(NAN,人物) |
| 已知候选的关系列表:\['所在行政领土', '别名'\],请你根据关系列表,从以下输入中抽取出可能存在的头实体与尾实体,并给出对应的关系三元组。请按照"(头实体,关系,尾实体)"的格式回答 小高镇,原为小高乡,是中华人民共和国四川省凉山彝族自治州德昌县下辖的一个乡镇级行政单位。2019年12月,撤销小高镇,将原小高镇安宁村所属行政区域为德州街道的行政区域,将原小高镇连丰村、联盟村、群英村、红岩村、黄草村、海花村、高丰村、杉木村所属行政区域划归乐跃镇管辖。 | 输入中包含的关系三元组是: (小高镇,所在行政领土,德昌县),(四川省,所在行政领土,中华人民共和国),(凉山彝族自治州,所在行政领土,四川省),(德昌县,所在行政领土,凉山彝族自治州),(安宁村,所在行政领土,小高镇),(德州街道,所在行政领土,德昌县),(连丰村,所在行政领土,小高镇),(联盟村,所在行政领土,小高镇),(群英村,所在行政领土,小高镇),(红岩村,所在行政领土,小高镇),(黄草村,所在行政领土,小高镇),(海花村,所在行政领土,小高镇),(杉木村,所在行政领土,小高镇),(乐跃镇,所在行政领土,德昌县),(小高镇,别名,小高乡) |
| 抽取安全领域文本的实体信息,结果以字典形式输出,若无信息,则返回空字典。实体类别如下:\['职务','飞机','时间','船只','数量','枪炮导弹','组织','机场','港口','方位','人物','航母','国家','地面武器','部队','地点'\] 据《印度快报》网站3月18日报道,印度与缅甸近日开展联合行动,摧毁了位于印度东部边境附近的激进武装分子营地。 | `{ "时间": \[ "3月18日" \], "国家": \[ "印度", "缅甸" \], "地点": \[ "印度东部边境附近" \] }` |
| 一个社会的进步,尤其是文明社会的进步需要的是有素质的公民,这也是社会对公民的根本要求。“禁止地铁中吃东西”的原委是为了塑造良好的地铁环境卫生,为的是形成一个良好的社会风气。按照制度经济学的观点,如香港地铁的规定乃是一种正式制度安排,因为有详细的惩罚细则。而正式制度的遵守需要非正式制度的约束,即人们的习惯。在这样的基础上才能形成遵守正式制度的好风气。而这样的一种约束或者说制度安排根本上无关乎歧视内地人,因为这是对于全部人们的要求,也非具有针对性的规定。作者认为“香港地铁的规定”是 | 一种正式制度安排,因为有详细的惩罚细则。 |
| 今年“五一”假期铁路运输期限为4月27日至5月4日,共8天。来自中国铁路北京局集团公司(简称“中国铁路北京局”)的消息,4月27日,中国铁路北京局共计发送旅客111.48万人,其中北京地区发送旅客57.5万人(北京站发送7.79万人、北京西站发送16.45万人、北京南站发送18.77万人、北京北站1.58万人、清河站2.17万、北京朝阳站4.89万人、北京丰台站4.57万人)。4月28日,中国铁路北京局预计发送旅客130万人,其中北京地区预计发送旅客68.54万人。中国铁路北京局结合假期运输特点,全力提升旅客乘车的舒适度。各次高铁列车针对互联网订餐订单量激增的实际情况,合理安排配送人员,做好列车配送服务,保障好出行旅客的网络订餐。运行中结合旅客实际需求,从列车环境、服务、用餐等方面提供多元化服务,部分列车还配有儿童读物和拼图,满足不同旅客的需求。北京站在候车大厅西侧明显位置,设置“医疗服务”室,配备诊疗床、AED除颤仪等基础急救设备,为旅客提供寻医问药服务。北京西站针对年纪太大无法久坐、手术后无法坐立的旅客提供可躺式座椅。北京南站4月27日至5月4日全面开启地下换乘层京沪高铁、京津城际4个快速进站厅,每日开关时间根据现场客流情况动态延长调整。北京北站启用地上地下进站口,“德馨”服务组推出便民服务车,关口前移,提升旅客出行体验。清河站全面启用南落客和西进站口,全面满足地铁、公交、出租车、私家车等多种交通方式接驳的旅客乘车。 4月27日北京哪个火车站发送旅客人次最多?北京铁路局采取哪些措施提升服务水平? | 北京火车站发送旅客人次最多的是北京站,共计发送旅客7.79万人。北京铁路局采取了多种措施提升服务水平,包括:1. 提供多元化服务,如配置儿童读物和拼图;2. 设置医疗服务室,配备基础急救设备;3. 启用地上地下进站口,增加进站效率;4. 提供便民服务车,关口前移;5. 全面启用清河站的南落客和西进站口,满足多种交通方式接驳的需求。 |
| 目的:探讨阑尾周边高回声结构的性质,评价其在急性阑尾炎超声诊断中的临床意义。 翻译成英文 | AIM: To evaluate the ultrasound features of the high-echoic ring around the cecal tip in patients with acute appendicitis. |
| 我是中国的一家跨境电商公司,主要出售便携式咖啡机,请以沙特的斋月用主题写一篇英文播客的稿件,目的是让客户感到我对他的节日的尊重,让他觉得我是了解他们的节日,提升对我的好感度根据赞美 | Welcome to our special Ramadan edition of our podcast! As a Chinese cross-border e-commerce company, we are honored to be a part of the Ramadan season, a special time for our Muslim friends around the world. Ramadan is a religious observance in Islam, lasting one month, during which Muslims fast from dawn until sunset. It is a time of spiritual discipline, sacrifice, and sacrifice. It is also a time of community and togetherness, as families and friends gather to break fast together at sunset. At our company, we understand the importance of respecting and celebrating religious holidays. That's why we have curated a selection of portable coffee machines that are perfect for breaking fast during the Ramadan season. Our portable coffee machines are small, lightweight, and easy to carry, making them perfect for traveling or for use in a hotel room or other temporary living space. They are also easy to use, with a simple one-touch brewing system that allows you to make a fresh, delicious cup of coffee in just a few seconds. We understand that during the Ramadan season, it is important to have a steady supply of coffee to help you stay awake and alert during the day. Our portable coffee machines are designed to be easy to use and to provide a consistent, delicious cup of coffee every time. We hope that our portable coffee machines will help you enjoy the Ramadan season to the fullest, and that they will be a part of your daily routine during this special time. Thank you for choosing our company, and happy Ramadan! |
| 这是一条语料,存在需要清洗去掉的脏数据。请给我清洗后的干净的文本。hashtag 也删掉。 #头条创作挑战赛##旅游##我要上 头条##今日头条日签##城市#在\<a>我国</a>,有许多省份都有一个别\ 称。比如山东被称为齐鲁大地、河南称为中原之地等等。而今天我们要说的这个地方也有个别名——赣 鄱明珠!它就是位于江西省东北部的上饶市。那么接下来就让我们一起走进这座美丽的江南小城吧~一.地理位置与地形概况1.地<!-- --> 理位置上饶地处武夷山脉和怀玉山脉之间,是长江三角洲经济区的重要组成部分。Copyright (c) 2020 XXX | 在我国,有许多省份都有一个别称。比如山东被称为齐鲁大地、河南称为中原之地等等。而今天我们要说的这个地方也有个别名——赣鄱明珠!它就是位于江西省东北部的上饶市。那么接下来就让我们一起走进这座美丽的江南小城吧!一.地理位置与地形概况1.地理位置上饶地处武夷山脉和怀玉山脉之间,是长江三角洲经济区的重要组成部分。 |
| 痛风发作的时候应该冷敷还热敷 | 痛风发作时,应该采取冷敷的方式来缓解疼痛。冷敷可以减轻疼痛和炎症,并有助于控制疼痛的持续时间。在疼痛发作时,应尽量避免热敷,因为热敷可能会加重疼痛。 |
| 过去两年便携式储能站上了风口,如今行业赛场延伸到了家庭储能(也称户用储能)。行业头部厂商华宝新能、正浩、安克创新等均在近一年内进军家庭储能市场。\n 以 json 数组返回提及的公司名字 | `\[ "华宝新能", "正浩", "安克创新" ]\` |
## Details of model training/模型训练细节
### Extending vocabulary/词表扩充
The original Llama vocabulary contains only a few hundred Chinese characters; all other Chinese characters must be produced by concatenating multiple Unicode bytes. Besides the obvious impact on Chinese inference performance (generation speed), this also creates a significant bottleneck in Chinese semantic understanding.
We conducted a series of comparative experiments on different vocabulary expansion approaches and found the following:
* Compared with the prevailing strategy of adding a large number of common multi-character Chinese words to the vocabulary, adding only individual Chinese characters achieves better semantic understanding with a smaller amount of pretraining data. In our experiments, we found that BPE-based Chinese word segmentation almost inevitably produces token boundaries that are hard to align with the true semantics, because word segmentation is inherently ambiguous. Increasing the model's parameter count and diversifying the training data can teach the model to correctly interpret mis-segmented tokens during pretraining, but this understanding always comes at an additional cost.
* The perturbation to the original token distribution grows with the number of newly added tokens, so the fewer new tokens are added, the smaller the semantic disturbance to the existing tokens.
Therefore, considering these factors, we conservatively expanded the vocabulary by 4843 tokens. Specifically, this includes all the primary Chinese characters and a subset of secondary and tertiary Chinese characters from the "General Standard Chinese Character Table" published by the National Language Commission in 2013. This subset was derived by using Pleisto's proprietary Chinese corpus to calculate the frequency of commonly used Chinese characters, with the aim of covering as many commonly used Chinese characters as possible, including those in the fields of science and technology, as well as commonly used Chinese characters in personal and place names. Additionally, a portion of commonly used punctuation marks in Chinese language were also included.
原始 Llama 词表中仅含有几百个汉字,其余汉字均需要以多个 unicode 字节形式拼接生成。这一问题除了显而易见地导致中文推理性能(生成速度)受到影响之外,还在很大程度上造成了模型在中文语义理解上造成了性能瓶颈。
我们对于不同的词表扩充方案进行了一系列对比实验并发现:
* 相较于目前主流的在词表中加入大量的常用汉字词语的策略而言,仅在词表中添加汉字字符的方案可以在更少的预训练数据规模下实现更佳的语义理解性能。我们在实验中发现,现有的基于 BPE 的分词器进行中文分词时由于分词本身存在的歧义性几乎必然导致生成的 Token 分割难以与真实语义进行对齐。尽管通过提升模型参数量、增加训练数据的规模和多样性,可以让模型本身在预训练过程中拥有正确理解被错误分割的 token 的能力,但这种理解始终是有额外成本的。
* 扩充词表时新增的 token 的数量和对于原始词向量的分布的扰动始终成正比,因此新增的 token 越少对于已有 token 的语义扰动的影响就会越少。
鉴于此我们较为保守地扩充了 4843 个 Token,具体而言包括国家语委在 2013 年发布的《通用规范汉字表》中的全部一级汉字、二三级汉字的一个子集(该子集通过使用 Pleisto 自有的中文语料进行汉字常用字字频统计后得出以期最大可能地覆盖包括科学技术领域常用字、人名地名常用字在内的所有常用汉字)、汉语中较常使用的一部分标点符号。
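As a hedged illustration of the character-level vocabulary expansion described above (not Yuren's exact pipeline), adding individual characters to a Llama tokenizer and growing the embedding matrix with Hugging Face `transformers` might look like this; the base checkpoint and character list are placeholders:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

base = "meta-llama/Llama-2-13b-hf"  # placeholder base checkpoint
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base)

# Placeholder for the ~4.8k single characters and punctuation marks described above.
new_tokens = ["龘", "鑫", "淼", "……"]

num_added = tokenizer.add_tokens(new_tokens)
# The new embedding rows are randomly initialized and still need training,
# which motivates the embedding-layer pretraining stage described below.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens, vocabulary size is now {len(tokenizer)}")
```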
### Training an embedding layer/词向量嵌入层的训练
Although the mainstream approach to extending the vocabulary typically does not involve separate pretraining of the embedding layer due to cost considerations, and instead relies on updating the embedding layer during continuous pretraining to achieve alignment, our research has revealed the following:
* Completely initializing the newly added word embeddings randomly and then achieving semantic alignment during the continuous pretraining phase can result in the model struggling to effectively learn a portion of the pretraining data. This issue becomes particularly evident when the pretraining data consists of carefully curated high-quality datasets.
* Freezing the other layers and training only the embedding layer on a diverse dataset that differs from the continuous-pretraining corpus, especially one containing multilingual parallel text, helps improve the model's semantic understanding. This is particularly beneficial when the continuous-pretraining phase uses a small-scale, high-quality dataset: pretraining the embedding layer on a more diverse, unfiltered corpus first improves both semantic understanding and robustness.
Therefore, considering these findings, we conducted one epoch of pretraining on the embedding layer using a diverse corpus of 760 million tokens while keeping the other layers frozen. The training was performed with a global batch size of 128, and the training loss decreased from 5.907 before training to 3.429.
尽管目前主流的词表扩充方案基于成本考虑通常不再单独针对词向量嵌入层进行预先训练,而是依靠在进行持续预训练时对于词向量嵌入层的更新来实现词向量的对齐。但我们的研究发现:
* 完全将新增词向量进行随机初始化,而后在持续预训练阶段进行语义对齐的方案会导致一部分预训练数据难以被模型真正学到,该问题在预训练数据是经过精心清洗的高质量数据集的情况下尤其明显。
* 冻结其他层,使用不同于持续预训练阶段的多样性数据集(尤其是包含多语平行语料的数据集)仅针对词向量嵌入层进行训练有助于提升模型的语义理解能力。尤其是当预训练阶段使用高质量小规模数据集的情况下,使用更具多样性的、未经人工过滤的数据集预先训练词向量嵌入层有助于提升模型的语义理解能力和抗毒性能力。
鉴于此我们在冻结其他层的情况下,使用了 760M Token 的多样性语料进行了 1 个 Epoch 的词向量嵌入层预训练。训练中使用了 128 的全局 Batch Size,train/loss 从训练前的 5.907 降低到 3.429。
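A minimal sketch of the freezing scheme described above, assuming the resized Llama checkpoint from the earlier snippet; whether the output head is unfrozen alongside the input embeddings is an assumption, not something the card states:
```python
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")  # placeholder; vocab already extended

# Freeze everything, then unfreeze only the token embeddings (and, as an
# assumption, the output head) for the embedding-alignment stage.
for param in model.parameters():
    param.requires_grad = False
for param in model.get_input_embeddings().parameters():
    param.requires_grad = True
for param in model.get_output_embeddings().parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} / {total:,}")
```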
### Pre-training/预训练
We performed continuous pretraining on the model using a high-quality corpus of 2.45 billion tokens. The English portion of the corpus was derived from a carefully curated diverse subset of the Falcon RefinedWeb dataset, while the code data came from a specific subset of the bigcode/the-stack dataset. The Chinese portion of the corpus consisted of a specific subset of the mc4 dataset, a curated subset of Chinese Wikipedia, and Pleisto's proprietary collection of public books and papers. During the data preprocessing stage, we employed a series of heuristic methods to clean and deduplicate the data, and used an in-house proprietary model to score the quality of the corpus and align the diversity distribution.
Furthermore, our experiments revealed that the order of the training data significantly impacts the final model performance. Therefore, we sorted the training data using a heuristic algorithm based on the principle of "easy first, difficult later," rather than employing a random shuffling strategy.
During continuous pretraining, we used a sequence length of 4096 and a global batch size of 128, with the 32-bit Lion optimizer and a constant learning rate of 7e-5. Due to hardware resource limitations, this stage was trained with LoRA at rank 64; in addition to all linear layers, we also trained the embed_token and lm_head layers. Unlike QLoRA, however, we trained in fp16 precision.
我们仅使用了 2.45B 的高质量语料对模型进行了持续预训练,其中英文部分语料来自Falcon RefinedWeb 数据集的一个经过精心策划的多样性子集、代码语料来自 bigcode/the-stack 数据集的一个特定子集,中文部分语料则由 mc4 的一个特定子集、中文维基百科精选子集和 Pleisto 自有的公版书籍与论文数据集共同组成。在数据预处理阶段,我们采用了一系列启发式的方法来针对数据进行清洗和去重并使用自有的闭源模型对于语料质量进行打分评估和多样性分布对齐。
此外我们的实验发现训练数据的顺序也会对最终模型性能造成明显影响,因此我们以「先易后难」的原则使用启发式算法对于训练数据进行了排序而没有采用随机洗牌的策略。
我们在 4096 的序列长度下,使用 128 的全局 Batch Size 进行了持续预训练,并使用了 32bit Lion 优化器和 7e-5 的恒定学习率。由于硬件资源上的限制,该阶段的训练采用了 LoRA 进行,rank 为 64,除了全部的线性层外还额外训练了 embed\_token 和 lm\_head 层。但不同于 QLoRA,我们采用了 fp16 精度进行训练。
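A hedged sketch of such a LoRA setup with the `peft` library follows; the target-module names, alpha, and dropout are assumptions (the card only states rank 64 and that the embedding and head layers were trained as well):
```python
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")  # placeholder checkpoint

lora_config = LoraConfig(
    r=64,                       # rank stated above
    lora_alpha=128,             # assumption: not stated in the card
    lora_dropout=0.05,          # assumption
    target_modules=[            # standard Llama linear-layer names (assumed)
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    modules_to_save=["embed_tokens", "lm_head"],  # also train embeddings and head in full
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
```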
### Supervised Fine-tuning/有监督微调
We conducted supervised fine-tuning in two stages. In the first stage, we trained the model for one epoch on a more diverse dataset of 1.8 million samples. A large portion of this dataset is an Orca-style instruction dataset built from subsets of flan2021 and COIG (inspired by the Microsoft Orca paper). It also includes subsets of the following publicly available datasets:
* GSM-8k
* OpenAssistant/oasst1
* b-mc2/sql-create-context
* flan2021
* niv0
* COIG
* TheoremQA
In the second stage, we performed an additional two epochs of training using a highly curated subset of 500,000 samples that underwent multiple verification steps.
我们的有监督微调分 2 个阶段进行,首先使用了一个更具多样性的含有 180 万条数据的数据集训练了 1 个 epoch。该数据集的很大一部分是由基于 flan2021 和 COIG 的一个子集所构建的 Orca 风格指令数据集组成(受微软 Orca 论文的启发)。此外还涵盖了如下的公开数据集的子集:
* GSM-8k
* OpenAssistant/oasst1
* b-mc2/sql-create-context
* flan2021
* niv0
* COIG
* TheoremQA
在第二阶段我们使用了一个由 50 万条经过多重校验的高质量子集进行了额外的 2 个 epoch 的训练。
## Benchmark Evaluation/性能评测
### GSM8k
| Model | Score |
| --- | --- |
| Llama2-13b | 28.7 |
| YuRen-13b | 34.42 |
| Llama1-30b | 35.6 |
| ChatGLM2-6b | 28.05 |
| Baichuan 13b - Chat | 26.6 |
| InternLM 7b | 31.2 |
| GPT-3.5 | 57.1 |
### AGIEval-English
| Model | Avg/平均 | AquA-RAT | LogiQA-en | LSAT-AR | LSAT-LR | LSAT-RC | SAT-en | SAT-en(w/o Psg.) | SAT-math |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama2-13b | 39.1 | 21.7 | 38.1 | 23.0 | 41.0 | 54.6 | 62.1 | 46.1 | 27.3 |
| YuRen-13b | 39.6 | 26.77 | 37.33 | 24.35 | 36.86 | 48.7 | 69.42 | 46.6 | 26.82 |
| Llama1-30b | 41.7 | 18.9 | 37.3 | 18.7 | 48.0 | 59.5 | 74.8 | 44.7 | 35 |
| GPT-3.5 | 57.1 | 31.3 | 43.5 | 25.7 | 59.2 | 67.7 | 81.1 | 53.9 | 40.9 |
### C-Eval 中文能力
| Model | Avg/平均 | Avg/平均(Hard) | STEM | 社会科学 | 人文科学 | 其他 |
| --- | --- | --- | --- | --- | --- | --- |
| Llama2-13b | 39.1 | 21.7 | 38.1 | 23.0 | 41.0 | 54.6 |
| YuRen-13b | 40.4 | 28.2 | 36.9 | 48.8 | 40.7 | 38.9 |
| Llama1-30b | 41.7 | 18.9 | 37.3 | 18.7 | 48.0 | 59.5 |
| GPT-3.5 | 57.1 | 31.3 | 43.5 | 25.7 | 59.2 | 67.7 |
| Baichuan-13B-Chat | 51.5 | / | 43.5 | 64.6 | 56.2 | 49.2 |
## Limitations and Biases/局限性
The YuRen 13B model is primarily designed for the information synthesis domain, covering business scenarios such as building intelligent agents, natural language understanding, and SQL generation; it is not intended to provide services directly to the public. We strongly recommend applying this model to internal enterprise data-processing scenarios rather than public-facing environments.
While we have made every effort to ensure the compliance of the data used during the model training process, unforeseen issues may arise due to the complexity of the model and data. We disclaim any responsibility for any problems caused by the use of the YuRen 13B open-source model, including but not limited to data security issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or improperly utilized.
We strongly advise implementing additional security measures when using the model, such as filtering, reviewing, or restricting the inputs and outputs of the model, to prevent harm to users. Technological development should take place in a regulated and lawful environment, and we hope that all users uphold this principle. We will continue to improve the training and use of the model to enhance its security and effectiveness.
羽人 13B 模型主要设计用于信息合成领域,包括构建智能代理、自然语言理解、生成SQL等业务场景,而并非直接用于向公众提供服务。我们强烈建议将此模型应用于企业内部的数据处理场景,而不是公开环境。
虽然我们已经尽可能确保模型训练过程中的数据合规性,但由于模型和数据的复杂性,可能存在无法预见的问题。如果由于使用羽人 13B 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们不承担任何责任。
我们强烈建议在使用模型时采用额外的安全措施,如对模型的输入输出进行过滤、审查或限制,以免对用户造成伤害。科技的发展应在规范和合法的环境下进行,我们希望所有使用者都能秉持这一原则。我们将持续改进模型的训练和使用,以提升其安全性和有效性。
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pleisto__yuren-13b-chatml)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.39|
|AI2 Reasoning Challenge (25-Shot)|53.07|
|HellaSwag (10-Shot) |78.03|
|MMLU (5-Shot) |56.34|
|TruthfulQA (0-shot) |42.32|
|Winogrande (5-shot) |74.43|
|GSM8k (5-shot) |28.13|
|
qgyd2021/chinese_porn_novel | qgyd2021 | 2023-11-16T06:51:59Z | 746 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-16T06:44:43Z | ---
license: apache-2.0
---
|
deepnight-research/zsc-text | deepnight-research | 2023-11-26T11:46:54Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2023-11-25T20:51:18Z | ---
license: mit
pipeline_tag: zero-shot-classification
--- |
hamxea/Mistral-7B-v0.1-activity-fine-tuned-v2 | hamxea | 2024-03-31T14:52:48Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"medical",
"text-generation-inference",
"en",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-05T16:06:46Z | ---
license: other
language:
- en
library_name: transformers
tags:
- medical
- text-generation-inference
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp | Weyaxi | 2024-01-21T08:03:27Z | 746 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:openchat/openchat-3.5-1210",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-24T14:30:22Z | ---
license: apache-2.0
tags:
- merge
base_model:
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- openchat/openchat-3.5-1210
---
# OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
This is the model for OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
# Yaml Config
```yaml
slices:
- sources:
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
layer_range: [0, 32]
- model: openchat/openchat-3.5-1210
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
``` |
Weyaxi/Bagel-Hermes-2x34B | Weyaxi | 2024-06-27T08:30:30Z | 746 | 15 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-12T16:02:03Z | ---
tags:
- yi
- moe
model-index:
- name: Bagel-Hermes-2x34b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.82
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-2x34b
name: Open LLM Leaderboard
license: apache-2.0
---

# Bagel-Hermes-2x34B
This is the model for Bagel-Hermes-2x34B. I used [this repo](https://bit.ly/weyaxi-moe-repo) to make this MOE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, and [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) uses ChatML, you can utilize ChatML and other prompt templates provided by bagel.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
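Since the card leaves the template choice open, below is a minimal, hedged sketch of assembling a ChatML prompt by hand; the system and user strings are placeholders:
```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt matching the template shown above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the idea behind mixture-of-experts models in two sentences.",
)
print(prompt)
```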
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
positive_prompts: ["question answering", "Q:", science", "biology", "chemistry", "physics"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
```
# Quantizationed versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Bagel-Hermes-2x34B-GPTQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GPTQ)
##### GGUF
- [TheBloke/Bagel-Hermes-2x34B-GGUF](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GGUF)
##### AWQ
- [TheBloke/Bagel-Hermes-2x34B-AWQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-AWQ)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Bagel-Hermes-2x34b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.10|
|AI2 Reasoning Challenge (25-Shot)|69.80|
|HellaSwag (10-Shot) |85.26|
|MMLU (5-Shot) |77.24|
|TruthfulQA (0-shot) |64.82|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |68.69|
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi) |
Sharathhebbar24/math_gpt2 | Sharathhebbar24 | 2024-01-24T03:45:22Z | 746 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"maths",
"arxiv-math",
"mathgpt2",
"en",
"dataset:ArtifactAI/arxiv-math-instruct-50k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-14T15:10:58Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- maths
- arxiv-math
- gpt2
- mathgpt2
datasets:
- ArtifactAI/arxiv-math-instruct-50k
widget:
- text: Which motion is formed by an incident particle?
example_title: Example 1
- text: What type of diffusional modeling is used for diffusion?
example_title: Example 2
---
This model is a fine-tuned version of `gpt2` trained on the `ArtifactAI/arxiv-math-instruct-50k` dataset.
## Model description
GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This
means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifting one token (word or piece of word) to the right. The model uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was trained for, however, which is generating texts from a
prompt.
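As a hedged illustration of that next-token objective (not part of the original card), passing the input IDs as labels lets `transformers` compute the shifted-by-one prediction loss internally:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Sharathhebbar24/math_gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

enc = tokenizer("The eigenvalues of a Hermitian matrix are real.", return_tensors="pt")
# Labels are the inputs themselves; the model shifts them one position to the
# right and masks future tokens, exactly as described above.
out = model(**enc, labels=enc["input_ids"])
print(out.loss)  # next-token prediction loss
```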
### To use this model
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> model_name = "Sharathhebbar24/math_gpt2"
>>> model = AutoModelForCausalLM.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> def generate_text(prompt):
...     inputs = tokenizer.encode(prompt, return_tensors='pt')
...     outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
...     generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
...     return generated[:generated.rfind(".") + 1]
...
>>> prompt = "What structure is classified as a definite lie algebra?"
>>> res = generate_text(prompt)
>>> res
``` |
eren23/DistiLabelOrca-TinyLLama-1.1B | eren23 | 2024-03-05T15:44:45Z | 746 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"question-answering",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | question-answering | 2024-01-27T10:24:44Z | ---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
pipeline_tag: question-answering
model-index:
- name: DistiLabelOrca-TinyLLama-1.1B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 36.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 61.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.05
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B
name: Open LLM Leaderboard
---
TinyLlama/TinyLlama-1.1B-Chat-v1.0 DPO-finetuned on the argilla/distilabel-intel-orca-dpo-pairs dataset, which is a distilled version of https://huggingface.co/datasets/Intel/orca_dpo_pairs.
GGUF Version: To be added
Exllama Version: To be added
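Below is a minimal usage sketch with 🤗 Transformers. It assumes the tokenizer ships the chat template inherited from TinyLlama-1.1B-Chat-v1.0; the generation settings are illustrative only.

```python
# Minimal sketch (assumed chat template and generation settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eren23/DistiLabelOrca-TinyLLama-1.1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Explain DPO fine-tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```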
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__DistiLabelOrca-TinyLLama-1.1B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.17|
|AI2 Reasoning Challenge (25-Shot)|36.18|
|HellaSwag (10-Shot) |61.15|
|MMLU (5-Shot) |25.09|
|TruthfulQA (0-shot) |38.05|
|Winogrande (5-shot) |60.85|
|GSM8k (5-shot) | 1.67|
|
lqtrung1998/Codellama-7b-hf-ReFT-GSM8k | lqtrung1998 | 2024-02-23T06:27:03Z | 746 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2401.08967",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-29T07:00:19Z | ---
license: llama2
---
# ReFT: Reasoning with REinforced Fine-Tuning
Paper: https://arxiv.org/pdf/2401.08967.pdf
Repo: https://github.com/lqtrung1998/mwp_ReFT (under [Apache2.0 License](https://github.com/lqtrung1998/mwp_ReFT/blob/main/License.txt))
## Introduction
We introduce REinforced Fine-tuning (ReFT), a method that enhances the generalizability of learning LLMs for reasoning.
This repository contains:
- A Warmup Supervised Fine-tuned model on GSM8k benchmark: [lqtrung1998/Codellama-7b-hf-SFT-warmup-GSM8k](https://huggingface.co/lqtrung1998/Codellama-7b-hf-SFT-warmup-GSM8k)
- A Supervised Fine-tuned model on GSM8k benchmark: [lqtrung1998/Codellama-7b-hf-SFT-GSM8k](https://huggingface.co/lqtrung1998/Codellama-7b-hf-SFT-GSM8k)
- A Rerank model that can score the fine-tuned SFT model output: [lqtrung1998/Codellama-7b-hf-SFT-Rerank-GSM8k](https://huggingface.co/lqtrung1998/Codellama-7b-hf-SFT-Rerank-GSM8k)
- A REinforced Fine-tuned model on GSM8k benchmark: [lqtrung1998/Codellama-7b-hf-ReFT-GSM8k](https://huggingface.co/lqtrung1998/Codellama-7b-hf-ReFT-GSM8k)
- A Rerank model that can score the fine-tuned ReFT model output: [lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k](https://huggingface.co/lqtrung1998/Codellama-7b-hf-ReFT-Rerank-GSM8k)
Note: Our models are tuned from Codellama; therefore, licenses applicable to Codellama, such as the [Llama license](https://ai.meta.com/resources/models-and-libraries/llama-downloads/), also apply to these models.
## Training Data
The model is trained on GSM8k data with Python SDP CoT format, which can be found [here](https://github.com/lqtrung1998/mwp_ReFT)
## Training Procedure
Check out our paper and repo for complete details.
#### ReFT model
The ReFT model is warmed up via Supervised Fine-Tuning on the GSM8k Python SDP training data for 2 epochs, then REinforced Fine-Tuned for 300 epochs using the questions in the GSM8k training set.
#### Rerank model
The Rerank model is trained to classify whether an output CoT is correct, using samples drawn from the ReFT model after its 2-epoch warm-up.
## Evaluation Results
See evaluation results of the models in Table 4 of the research paper.
Updated results:
| | Top-1 | Voting@100 | Rerank@100 |
|--------------------------------------------------------------------|:------:|:----------:|:----------:|
| Codellama-7b-hf-SFT-warmup-GSM8k | 63.00 | - | - |
| Codellama-7b-hf-SFT-GSM8k<br>(+Codellama-7b-hf-SFT-Rerank-GSM8k) | 63.68 | 68.0 | 77.0 |
| Codellama-7b-hf-ReFT-GSM8k<br>(+Codellama-7b-hf-ReFT-Rerank-GSM8k) | 75.28 | 78.0 | 81.2 |
## Usage
You can use the models through Hugging Face's Transformers library or follow the scripts in our repo.
Prompt format:
```python
Question:
Weng earns $12 an hour for babysitting. Yesterday, she
just did 50 minutes of babysitting. How much did she earn?
Answer reasoning:
```
Expected response:
```python
def solution():
"""Weng earns $12 an hour for babysitting. Yesterday, she just did
50 minutes of babysitting. How much did she earn?"""
hourly_rate = 12
minutes_worked = 50
hours_worked = minutes_worked / 60
earnings = hourly_rate * hours_worked
result = earnings
return result
```
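For reference, here is a minimal Transformers sketch that applies the prompt format above. It is an illustration only — the fp16 loading and greedy decoding are assumptions, not the authors' evaluation setup; follow the scripts in the official repo for exact reproduction.

```python
# Illustrative sketch; loading and decoding settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lqtrung1998/Codellama-7b-hf-ReFT-GSM8k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "Question:\n"
    "Weng earns $12 an hour for babysitting. Yesterday, she just did "
    "50 minutes of babysitting. How much did she earn?\n"
    "Answer reasoning:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```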
## Citation
Please cite the paper if you use our data, model or code.
```
@misc{luong2024reft,
title={ReFT: Reasoning with Reinforced Fine-Tuning},
author={Trung Quoc Luong and Xinbo Zhang and Zhanming Jie and Peng Sun and Xiaoran Jin and Hang Li},
year={2024},
eprint={2401.08967},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Intended Use
Intended Use Cases: Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
Out-of-Scope Uses: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide. |
macadeliccc/MBX-7B-v3-DPO | macadeliccc | 2024-03-04T19:25:16Z | 746 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:cc",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T17:43:56Z | ---
license: cc
library_name: transformers
datasets:
- jondurbin/truthy-dpo-v0.1
model-index:
- name: MBX-7B-v3-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
---
# MBX-7B-v3-DPO
This model is a DPO finetune of [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) using the jondurbin/truthy-dpo-v0.1 dataset.

## Code Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/MBX-7B-v3-DPO")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/MBX-7B-v3-DPO")

messages = [
    {"role": "system", "content": "Respond to the users request like a pirate"},
    {"role": "user", "content": "Can you write me a quicksort algorithm?"}
]

# Build the prompt with the chat template, then generate a reply
# (the generation settings below are illustrative).
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
## Example Output

## GGUF
Available [here](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO-GGUF/tree/main)
## Exllamav2
Quants are available from bartowski, check them out [here](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2)
Download the size you want below; the VRAM figures are estimates.
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Evaluations
## EQ-Bench Comparison
<pre>----Benchmark Complete----
2024-01-30 15:22:18
Time taken: 145.9 mins
Prompt Format: ChatML
Model: macadeliccc/MBX-7B-v3-DPO
Score (v2): 74.32
Parseable: 166.0
---------------
Batch completed
Time taken: 145.9 mins
---------------
</pre>
### Original Model
<pre>----Benchmark Complete----
2024-01-31 01:26:26
Time taken: 89.1 mins
Prompt Format: Mistral
Model: flemmingmiguel/MBX-7B-v3
Score (v2): 73.87
Parseable: 168.0
---------------
Batch completed
Time taken: 89.1 mins
---------------
</pre>
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|-----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)| 45.16| 77.73| 74.62| 48.83| 61.58|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |27.95|± | 2.82|
| | |acc_norm|26.77|± | 2.78|
|agieval_logiqa_en | 0|acc |41.01|± | 1.93|
| | |acc_norm|40.55|± | 1.93|
|agieval_lsat_ar | 0|acc |25.65|± | 2.89|
| | |acc_norm|23.91|± | 2.82|
|agieval_lsat_lr | 0|acc |50.78|± | 2.22|
| | |acc_norm|52.94|± | 2.21|
|agieval_lsat_rc | 0|acc |66.54|± | 2.88|
| | |acc_norm|65.80|± | 2.90|
|agieval_sat_en | 0|acc |77.67|± | 2.91|
| | |acc_norm|77.67|± | 2.91|
|agieval_sat_en_without_passage| 0|acc |43.20|± | 3.46|
| | |acc_norm|43.20|± | 3.46|
|agieval_sat_math | 0|acc |32.27|± | 3.16|
| | |acc_norm|30.45|± | 3.11|
Average: 45.16%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |68.43|± | 1.36|
| | |acc_norm|68.34|± | 1.36|
|arc_easy | 0|acc |87.54|± | 0.68|
| | |acc_norm|82.11|± | 0.79|
|boolq | 1|acc |88.20|± | 0.56|
|hellaswag | 0|acc |69.76|± | 0.46|
| | |acc_norm|87.40|± | 0.33|
|openbookqa | 0|acc |40.20|± | 2.19|
| | |acc_norm|49.60|± | 2.24|
|piqa | 0|acc |83.68|± | 0.86|
| | |acc_norm|85.36|± | 0.82|
|winogrande | 0|acc |83.11|± | 1.05|
Average: 77.73%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |58.87|± | 1.72|
| | |mc2 |74.62|± | 1.44|
Average: 74.62%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|60.00|± | 3.56|
|bigbench_date_understanding | 0|multiple_choice_grade|63.14|± | 2.51|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|47.67|± | 3.12|
|bigbench_geometric_shapes | 0|multiple_choice_grade|22.56|± | 2.21|
| | |exact_str_match | 0.84|± | 0.48|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|33.20|± | 2.11|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.00|± | 1.59|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|59.67|± | 2.84|
|bigbench_movie_recommendation | 0|multiple_choice_grade|47.40|± | 2.24|
|bigbench_navigate | 0|multiple_choice_grade|56.10|± | 1.57|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|71.25|± | 1.01|
|bigbench_ruin_names | 0|multiple_choice_grade|56.47|± | 2.35|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|35.27|± | 1.51|
|bigbench_snarks | 0|multiple_choice_grade|73.48|± | 3.29|
|bigbench_sports_understanding | 0|multiple_choice_grade|75.46|± | 1.37|
|bigbench_temporal_sequences | 0|multiple_choice_grade|52.10|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.64|± | 1.18|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|19.83|± | 0.95|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|59.67|± | 2.84|
Average: 48.83%
Average score: 61.58%
Elapsed time: 02:37:39
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__MBX-7B-v3-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.13|
|AI2 Reasoning Challenge (25-Shot)|73.55|
|HellaSwag (10-Shot) |89.11|
|MMLU (5-Shot) |64.91|
|TruthfulQA (0-shot) |74.00|
|Winogrande (5-shot) |85.56|
|GSM8k (5-shot) |69.67|
|
TeeZee/DarkSapling-7B-v2.0 | TeeZee | 2024-03-10T22:27:28Z | 746 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"not-for-all-audiences",
"merge",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T08:04:21Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- not-for-all-audiences
- merge
pipeline_tag: text-generation
inference: false
model-index:
- name: DarkSapling-7B-v2.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v2.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v2.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v2.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.21
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v2.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v2.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v2.0
name: Open LLM Leaderboard
---
# DarkSapling-7B-v2.0

## Model Details
- The result of a four-model merge. The DARE TIES method was used this time, so the resulting model preserves the characteristics of all included models better than v1.x did.
- Models used for the merge:
[cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
[KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)
[KoboldAI/Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)
[cognitivecomputations/samantha-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-7b)
- See [mergekit-config.yml](https://huggingface.co/TeeZee/DarkSapling-7B-v2.0/resolve/main/mergekit-config.yml) for details on the merge method used.
**Warning: This model can produce NSFW content!**
## Results
- a little different from version v1.0: more romantic and empathetic.
- smarter than versions 1.0 and 1.1.
- best for one-on-one ERP.
- produces SFW and NSFW content without issues, and switches context seamlessly.
- sticks to the character card.
- pretty smart thanks to Mistral, empathetic thanks to Samantha, and sometimes produces dark scenarios thanks to Erebus.
- storytelling is satisfactory thanks to Holodeck.
- good at following instructions.
All comments are greatly appreciated. Download it, test it, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__DarkSapling-7B-v2.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.98|
|AI2 Reasoning Challenge (25-Shot)|64.16|
|HellaSwag (10-Shot) |85.10|
|MMLU (5-Shot) |64.37|
|TruthfulQA (0-shot) |52.21|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |45.41|
|
ArianAskari/SOLID-SFT-DPO-MixQV3-SOLIDRejected-SFTChosen-Zephyr-7b-beta | ArianAskari | 2024-02-13T06:21:35Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ArianAskari/SOLID",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T01:42:36Z | ---
library_name: transformers
tags: []
license: apache-2.0
language:
- en
datasets: ArianAskari/SOLID
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OpenBuddy/openbuddy-deepseek-67b-v18.1-4k | OpenBuddy | 2024-03-05T01:44:16Z | 746 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-18T01:37:31Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
license: other
library_name: transformers
pipeline_tag: text-generation
inference: false
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL
model-index:
- name: openbuddy-deepseek-67b-v18.1-4k
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseek-67b-v18.1-4k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseek-67b-v18.1-4k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseek-67b-v18.1-4k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.66
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseek-67b-v18.1-4k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseek-67b-v18.1-4k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseek-67b-v18.1-4k
name: Open LLM Leaderboard
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/deepseek-ai/deepseek-llm-67b-base
License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL)
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v18.1-4k)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.80|
|AI2 Reasoning Challenge (25-Shot)|67.75|
|HellaSwag (10-Shot) |84.65|
|MMLU (5-Shot) |70.58|
|TruthfulQA (0-shot) |55.66|
|Winogrande (5-shot) |82.95|
|GSM8k (5-shot) |69.22|
|
ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16 | ChaoticNeutrals | 2024-03-04T20:11:15Z | 746 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Test157t/Yarncules-7b-128k",
"base_model:Test157t/Prima-LelantaclesV3-7b",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-20T04:47:25Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Test157t/Yarncules-7b-128k
- Test157t/Prima-LelantaclesV3-7b
model-index:
- name: Prima-LelantaclesV4-7b-16k-bf16
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16
name: Open LLM Leaderboard
---

Quants thanks to @Lewdiculous: https://huggingface.co/Lewdiculous/Prima-LelantaclesV4-7b-16k-GGUF
The following models were included in the merge:
* [Test157t/Yarncules-7b-128k](https://huggingface.co/Test157t/Yarncules-7b-128k)
* [Test157t/Prima-LelantaclesV3-7b](https://huggingface.co/Test157t/Prima-LelantaclesV3-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Test157t/Yarncules-7b-128k
layer_range: [0, 32]
- model: Test157t/Prima-LelantaclesV3-7b
layer_range: [0, 32]
merge_method: slerp
base_model: Test157t/Prima-LelantaclesV3-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
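For reference, a config like this is normally consumed by the mergekit CLI. The notebook-style invocation below is a hedged sketch — the install source and output path are assumptions, not commands taken from this card:

```python
# Hedged sketch: install mergekit and run the slerp config shown above.
# The install URL and output directory are placeholders.
!pip install -qU git+https://github.com/cg123/mergekit.git
!mergekit-yaml config.yml ./merged-model --cuda
```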
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChaoticNeutrals__Prima-LelantaclesV4-7b-16k-bf16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.28|
|AI2 Reasoning Challenge (25-Shot)|66.04|
|HellaSwag (10-Shot) |85.07|
|MMLU (5-Shot) |64.70|
|TruthfulQA (0-shot) |54.76|
|Winogrande (5-shot) |80.27|
|GSM8k (5-shot) |58.83|
|
yam-peleg/Experiment25-7B | yam-peleg | 2024-02-27T21:30:30Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-27T16:14:06Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment25-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details will be shared in future experiments.
|
zhengchenphd/Mistral-Plus-7B | zhengchenphd | 2024-03-18T21:15:02Z | 746 | 4 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"arxiv:2403.02513",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-04T20:55:59Z | ---
license: apache-2.0
---
# Mistral-Plus Model Card
## Model Details
Mistral-Plus is a chat assistant trained by Reinforcement Learning from Human Feedback (RLHF) using the Mistral-7B base model as the backbone.
- Mistral-Plus adopts an innovative approach by completely bypassing Supervised Fine-Tuning (SFT) and directly implementing Harmless Reinforcement Learning from Human Feedback (RLHF).
- Mistral-Plus uses the [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model as its backbone.
- License: Mistral-Plus is licensed under the same license as the [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model.
## Model Sources
__Paper (Mistral-Plus):__ https://arxiv.org/abs/2403.02513
## Uses
Mistral-Plus is primarily utilized for research in the areas of large language models and chatbots. It is intended chiefly for use by researchers and hobbyists specializing in natural language processing, machine learning, and artificial intelligence.
Mistral-Plus not only preserves the Mistral base model's general capabilities, but also significantly enhances its conversational abilities and notably reduces the generation of toxic outputs, in line with human preferences.

## Goal: Empower researchers worldwide!
To the best of our knowledge, this is the first academic endeavor to bypass supervised fine-tuning and directly apply reinforcement learning from human feedback. More importantly, Mistral-Plus is publicly available through Hugging Face to promote collaborative research and innovation.
This initiative to open-source Mistral-Plus seeks to empower researchers worldwide, enabling them to delve deeper into and build upon the Mistral-Plus work, with a particular focus on conversational tasks such as customer service and intelligent assistants.
## Model Performance on 11 general Language Tasks

### Mistral-Plus on General language Understanding and Reasoning
|  |  |
|----------------------|----------------------|
|  |  |
### Enhancing Conversational Safety in the Mistral-Plus
|  |  |
|----------------------|----------------------|
Bad-word generation probability for Mistral-Instruct and Mistral-Plus. The x-axis represents different intermediate layers; the y-axis shows token probability.
## Case Study
### Multiple Round Dialogue

### Case Study from Different Prompts
 |
ChaoticNeutrals/Eris_Floramix_DPO_7B | ChaoticNeutrals | 2024-03-07T23:30:49Z | 746 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED",
"dataset:ResplendentAI/Synthetic_Soul_1k",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T23:09:39Z | ---
library_name: transformers
license: other
datasets:
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
- ResplendentAI/Synthetic_Soul_1k
language:
- en
---
# Eris Floramix DPO
This is a mix of Eris Remix DPO and Flora DPO, the latter being a finetune of the original Eris Remix on the Synthetic_Soul_1k dataset.
The following DPO dataset was applied: https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED |
frankenmerger/cosmo-3b-test-v0.2 | frankenmerger | 2024-03-21T18:55:58Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T13:25:19Z | ---
library_name: transformers
inference:
parameters:
temperature: 1.1
top_p: 0.95
top_k: 30
repetition_penalty: 1
widget:
- text: Photosynthesis is
example_title: Textbook
group: Completion
- text: '<s> [INST] How to take care of exotic cars? [/INST] '
example_title: Wikihow
group: Completion
- text: '<s> [INST] Generate a story about Dark Knight [/INST] '
example_title: Story
group: Completion
license: apache-2.0
language:
- en
---
**Updated to v0.2**
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "frankenmerger/cosmo-3b-test-v0.2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ChaoticNeutrals/InfinityNexus_9B | ChaoticNeutrals | 2024-03-14T22:39:19Z | 746 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:Endevor/InfinityRP-v1-7B",
"base_model:jeiku/NarrativeNexus_7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T03:31:45Z | ---
base_model:
- Endevor/InfinityRP-v1-7B
- jeiku/NarrativeNexus_7B
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---
# InfinityNexus

GGUF available here: https://huggingface.co/Lewdiculous/InfinityNexus_9B-GGUF-IQ-Imatrix
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
* [jeiku/NarrativeNexus_7B](https://huggingface.co/jeiku/NarrativeNexus_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Endevor/InfinityRP-v1-7B
layer_range: [0, 20]
- sources:
- model: jeiku/NarrativeNexus_7B
layer_range: [12, 32]
merge_method: passthrough
dtype: float16
``` |
ShadowFall09/FANNO | ShadowFall09 | 2024-05-27T17:25:56Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-22T13:53:31Z | ---
library_name: transformers
license: apache-2.0
---
# FANNO
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
Based on Llama-2-7B.
### Model Description
Keywords: instruction-finetuning, autonomous-framework, data-annotation
FANNO is an innovative, fully autonomous, open-sourced framework designed to streamline the annotation process for instruction datasets without requiring pre-existing annotated data. Leveraging the capabilities of the Mistral-7b-instruct model, FANNO efficiently generates diverse and high-quality datasets through a structured process that includes document pre-screening, instruction generation, and response generation.
### Key Features
- **Autonomous Annotation:** Eliminates the need for manual annotations or costly API calls to proprietary LLMs, making the annotation process cost-effective and efficient.
- **High-Quality Data Generation:** Produces datasets with diversity and complexity comparable to human-annotated or cleaned datasets, such as Alpaca-GPT4-Cleaned.
- **Open-Sourced Framework:** Fully open-sourced, allowing the community to leverage and contribute to the ongoing improvement of the framework.
|
N8Programs/Thestral-v0.2 | N8Programs | 2024-03-25T20:18:11Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T18:58:22Z | ---
license: apache-2.0
---
# Model Card for Thestral-v0.2
A therapy-oriented model trained by applying a LoRA fine-tune to [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the [wesley7137/therapist-sft-format](https://huggingface.co/datasets/wesley7137/therapist-sft-format) dataset.
The prompt template is ChatML, with the system prompt 'You are Thestral, an AI therapist'. It is more empathetic and concise than ChatGPT and doesn't constantly say 'consult a medical professional'. None of the advice it generates comes from a medical professional - treat it as an experiment, and nothing more. A minimal prompting sketch is shown below.
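The sketch below builds the ChatML prompt by hand; it is an assumption-based illustration, not guidance from the model author, and the generation settings are placeholders.

```python
# Minimal ChatML prompting sketch (assumed settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "N8Programs/Thestral-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "<|im_start|>system\n"
    "You are Thestral, an AI therapist<|im_end|>\n"
    "<|im_start|>user\n"
    "I've been feeling overwhelmed at work lately.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
``` |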
Isotonic/TinyQwex-4x620M-MoE | Isotonic | 2024-04-04T02:39:49Z | 746 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"lazymergekit",
"Qwen/Qwen1.5-0.5B",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T13:56:15Z | ---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- Qwen/Qwen1.5-0.5B
---
# TinyQwex-4x620M-MoE
TinyQwex-4x620M-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B)
* [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B)
* [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B)
* [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B)
🌟 Buying me coffee is a direct way to show support for this project.
<a href="https://www.buymeacoffee.com/isotonic"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate einops
from transformers import AutoTokenizer
import transformers
import torch
model = "Isotonic/TinyQwex-4x620M-MoE"
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🧩 Configuration
```yaml
base_model: Qwen/Qwen1.5-0.5B
experts:
- source_model: Qwen/Qwen1.5-0.5B
positive_prompts:
- "reasoning"
- source_model: Qwen/Qwen1.5-0.5B
positive_prompts:
- "program"
- source_model: Qwen/Qwen1.5-0.5B
positive_prompts:
- "storytelling"
- source_model: Qwen/Qwen1.5-0.5B
positive_prompts:
- "Instruction following assistant"
``` |
mradermacher/Cypher-7B-GGUF | mradermacher | 2024-05-06T05:16:40Z | 746 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"mistral",
"nous",
"westlake",
"samantha",
"en",
"base_model:aloobun/Cypher-7B",
"license:cc",
"endpoints_compatible",
"region:us"
] | null | 2024-04-05T01:45:10Z | ---
base_model: aloobun/Cypher-7B
language:
- en
library_name: transformers
license: cc
quantized_by: mradermacher
tags:
- mergekit
- merge
- mistral
- nous
- westlake
- samantha
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/aloobun/Cypher-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
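As one hedged example, a single-file quant can be loaded with `llama-cpp-python`; the file name, context length, and GPU offload below are placeholders, not recommendations from the quantizer.

```python
# Example with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Cypher-7B.Q4_K_M.gguf",  # path to the quant you downloaded
    n_ctx=4096,        # context length (placeholder)
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

output = llm("Explain what a model merge is in one paragraph.", max_tokens=128)
print(output["choices"][0]["text"])
```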
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Cypher-7B-GGUF/resolve/main/Cypher-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AlanNgaiJX/helloWorld | AlanNgaiJX | 2024-06-24T01:43:13Z | 746 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-03T10:40:56Z | TEST README |
Koleshjr/mistral_7b_v2_8bit_q8_0_10_epochs | Koleshjr | 2024-06-29T02:43:36Z | 746 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-29T02:39:26Z | ---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Koleshjr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
veddm/paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT | veddm | 2022-04-12T15:46:27Z | 745 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-12T11:59:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 9.2794 |
| No log | 2.0 | 182 | 8.1920 |
| No log | 3.0 | 273 | 7.6378 |
| No log | 4.0 | 364 | 7.4783 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
aashay96/indic-gpt | aashay96 | 2022-12-22T11:39:09Z | 745 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-12-20T19:34:27Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: indic-gpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indic-gpt
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an [Indian-language corpus](https://ai4bharat.iitm.ac.in/corpora). A sample dataset is available at https://huggingface.co/datasets/aashay96/indic-gpt.
It achieves the following results on the evaluation set:
- Loss: 1.9482
## Model description
The model is trained on multiple Indian languages: Assamese, Bengali, Gujarati, Kannada, Malayalam, Telugu, Tamil, Odia, and Punjabi.
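A minimal generation sketch is shown below; it is illustrative only, and the prompt is a placeholder to replace with text in one of the supported languages.

```python
# Minimal sketch; replace the prompt with text in one of the supported Indian languages.
from transformers import pipeline

generator = pipeline("text-generation", model="aashay96/indic-gpt")
prompt = "<your prompt in one of the supported Indian languages>"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```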
## Intended uses & limitations
More information needed
## Training and evaluation data
TBD - Evaluation on indic_glue
## Training procedure
Check the notebook!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3653 | 0.3 | 500 | 2.2985 |
| 2.2079 | 0.61 | 1000 | 2.0401 |
| 2.0396 | 0.91 | 1500 | 1.9482 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
EleutherAI/pythia-160m-seed3 | EleutherAI | 2024-01-04T13:21:29Z | 745 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-02-15T22:34:42Z | Entry not found |
Yntec/GalenaVAE | Yntec | 2023-08-04T02:38:57Z | 745 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"schneed",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-04T02:04:35Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- schneed
---
# Galena Blend 1.0 VAE
This is Galena Blend 1.0 with the Color 101 VAE baked in.
Original pages:
https://civitai.com/models/16300?modelVersionId=19249
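A minimal usage sketch with 🧨 diffusers; the prompt, step count, and dtype below are arbitrary choices, not recommendations from the original page.
```python
# A minimal sketch for generating an image with this checkpoint via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/GalenaVAE", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe(
    "pretty cute girl holding a kitten, detailed eyes, chibi",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("galena_sample.png")
```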
|
TheBloke/meditron-70B-GGUF | TheBloke | 2023-11-30T17:54:45Z | 745 | 18 | transformers | [
"transformers",
"gguf",
"llama",
"medical",
"health",
"llama2",
"text-generation",
"en",
"dataset:bigbio/med_qa",
"dataset:medmcqa",
"dataset:bigbio/pubmed_qa",
"dataset:epfl-llm/guidelines",
"arxiv:2311.16079",
"base_model:epfl-llm/meditron-70b",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-30T17:10:33Z | ---
base_model: epfl-llm/meditron-70b
datasets:
- bigbio/med_qa
- medmcqa
- bigbio/pubmed_qa
- epfl-llm/guidelines
inference: false
language:
- en
license: llama2
metrics:
- accuracy
- perplexity
model_creator: EPFL LLM Team
model_name: Meditron 70B
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- medical
- health
- llama2
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Meditron 70B - GGUF
- Model creator: [EPFL LLM Team](https://huggingface.co/epfl-llm)
- Original model: [Meditron 70B](https://huggingface.co/epfl-llm/meditron-70b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [EPFL LLM Team's Meditron 70B](https://huggingface.co/epfl-llm/meditron-70b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/meditron-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/meditron-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/meditron-70B-GGUF)
* [EPFL LLM Team's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/epfl-llm/meditron-70b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
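As a rough sanity check (an illustration, not part of the quantisation docs), file size scales with bits per weight; the back-of-the-envelope estimate below lands close to the listed sizes for the single-type "_S" and Q6_K quants, while Q2_K and the mixed "_M" variants deviate because some tensors are kept at higher precision.
```python
# Back-of-the-envelope: size ≈ parameter count × bits-per-weight / 8 (metadata ignored).
params = 70e9  # Meditron-70B
for name, bpw in [("Q3_K_S", 3.4375), ("Q4_K_S", 4.5), ("Q5_K_S", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{params * bpw / 8 / 1e9:.1f} GB")
# Prints ~30.1, 39.4, 48.1 and 57.4 GB, close to the listed 29.92, 39.07, 47.46 and 56.59 GB.
```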
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [meditron-70b.Q2_K.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [meditron-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [meditron-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [meditron-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [meditron-70b.Q4_0.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [meditron-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [meditron-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [meditron-70b.Q5_0.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [meditron-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [meditron-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/meditron-70B-GGUF/blob/main/meditron-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| meditron-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| meditron-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `meditron-70b.Q6_K.gguf-split-a`
* `meditron-70b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `meditron-70b.Q8_0.gguf-split-a`
* `meditron-70b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat meditron-70b.Q6_K.gguf-split-* > meditron-70b.Q6_K.gguf && rm meditron-70b.Q6_K.gguf-split-*
cat meditron-70b.Q8_0.gguf-split-* > meditron-70b.Q8_0.gguf && rm meditron-70b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B meditron-70b.Q6_K.gguf-split-a + meditron-70b.Q6_K.gguf-split-b meditron-70b.Q6_K.gguf
del meditron-70b.Q6_K.gguf-split-a meditron-70b.Q6_K.gguf-split-b
COPY /B meditron-70b.Q8_0.gguf-split-a + meditron-70b.Q8_0.gguf-split-b meditron-70b.Q8_0.gguf
del meditron-70b.Q8_0.gguf-split-a meditron-70b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/meditron-70B-GGUF and below it, a specific filename to download, such as: meditron-70b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/meditron-70B-GGUF meditron-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/meditron-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/meditron-70B-GGUF meditron-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m meditron-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./meditron-70b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./meditron-70b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
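As a concrete starting point, here is a minimal LangChain + llama-cpp-python sketch; it assumes `langchain-community` and `llama-cpp-python` are installed and that the GGUF file above has already been downloaded, and the generation settings are illustrative only.
```python
# A minimal sketch of using this GGUF file through LangChain's LlamaCpp wrapper.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./meditron-70b.Q4_K_M.gguf",  # download the file first
    n_ctx=4096,       # context length
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
print(llm.invoke(
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarise what a GGUF file is in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
))
```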
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: EPFL LLM Team's Meditron 70B
<img width=50% src="meditron_LOGO.png" alt="Alt text" title="Meditron-logo">
# Model Card for Meditron-70B-v1.0
Meditron is a suite of open-source medical Large Language Models (LLMs).
Meditron-70B is a 70-billion-parameter model adapted to the medical domain from Llama-2-70B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, a [new dataset](https://huggingface.co/datasets/epfl-llm/guidelines) of internationally-recognized medical guidelines, and general domain data from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
Meditron-70B, finetuned on relevant training data, outperforms Llama-2-70B, GPT-3.5 (`text-davinci-003`, 8-shot), and Flan-PaLM on multiple medical reasoning tasks.
<!--# Table of Contents
[Model Card for Meditron 70B](#model-card-for--meditron-70b-v1.0)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Downstream Use](#downstream-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Evaluation](#evaluation)
- [Testing Data & Metrics](#testing-data-&-metrics)
- [Testing Data](#testing-data)
- [Metrics](#metrics)
- [Results](#results)
- [Environmental Impact](#environmental-impact)
- [Citation](#citation)-->
<details open>
<summary><strong>Advisory Notice</strong></summary>
<blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints.
We recommend against deploying Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.
</blockquote>
</details>
## Model Details
- **Developed by:** [EPFL LLM Team](https://huggingface.co/epfl-llm)
- **Model type:** Causal decoder-only transformer language model
- **Language(s):** English (mainly)
- **Model License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Code License:** [APACHE 2.0 LICENSE](LICENSE)
- **Continue-pretrained from model:** [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b)
- **Context length:** 4K tokens
- **Input:** Text-only data
- **Output:** Model generates text only
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance model's performance.
- **Knowledge Cutoff:** August 2023
### Model Sources
- **Repository:** [epflLLM/meditron](https://github.com/epfLLM/meditron)
- **Trainer:** [epflLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)
- **Paper:** *[MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)*
## Uses
Meditron-70B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and broaden access to LLMs for healthcare use. Potential use cases may include but are not limited to:
- Medical exam question answering
- Supporting differential diagnosis
- Disease information (symptoms, cause, treatment) query
- General health information query
### Direct Use
It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities.
It should not be used directly for production or work that may impact people.
### Downstream Use
Meditron-70B is a foundation model that can be finetuned, instruction-tuned, or RLHF-tuned for specific downstream tasks and applications.
The main way we have used this model is finetuning for downstream question-answering tasks, but we encourage using this model for additional applications.
Specific formatting needs to be followed to prompt our finetuned models, including the `<|im_start|>`, `<|im_end|>` tags, and `system`, `question`, `answer` identifiers.
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>question
{prompt}<|im_end|>
<|im_start|>answer
"""
**Note 1**: The above formatting is not required for running the base model (this repository)
**Note 2**: the above formatting is just an example of a finetuning template. This format is not a requirement if you use your own formatting option for the finetuning of the model.
To run proper generation with this base model, we recommend using a high-throughput and memory-efficient inference engine, such as [vLLM](https://github.com/vllm-project/vllm), with a UI that supports chat and text generation, such as [BetterChatGPT](https://github.com/ztjhz/BetterChatGPT)
To see more details about model deployment and generation, please see our [documentation](https://github.com/epfLLM/meditron/blob/main/deployment/README.md).
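As an illustration of that deployment path, below is a minimal vLLM sketch for the unquantised base model; it assumes a multi-GPU node with enough memory for a 70B model, and the tensor-parallel degree, sampling settings, and prompt are assumptions rather than values from the paper.
```python
# A minimal sketch of offline inference with vLLM; hardware and settings are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="epfl-llm/meditron-70b", tensor_parallel_size=8)  # e.g. 8 x A100 80GB
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)
# Plain completion-style prompt, since the template above is only needed for the finetuned models.
prompt = "Common symptoms of iron-deficiency anaemia include"
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```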
### Out-of-Scope Use
We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise.
## Truthfulness, Helpfulness, Risk, and Bias
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
We did an initial assessment of Meditron models' **Truthfulness** against baseline models and consumer-level medical models.
We use TruthfulQA (multiple choice) as the main evaluation benchmark.
We only focus on the categories that are relevant to the medical domain, including Health, Nutrition, Psychology, and Science.
For 7B models, we perform one-shot evaluations for consistent answer generation.
For 70B models, the evaluations are under the zero-shot setting.
Below, we report the detailed truthfulness performance of each category.
| Category | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b |
| --- | ----- | ----- | ----- | ----- | ----- | ----- |
| Health | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 |
| Nutrition | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 |
| Psychology | 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 |
| Science | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 |
| Avg | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 |
For a more detailed performance analysis, please see our paper.
For **Helpfulness**, **Risk** and **Bias**, we provide a comprehensive qualitative generation report of Meditron-70B on queries designed by medical experts.
Each query targets specific aspects of helpfulness (medical accuracy, up-to-date information, etc.), risk (public health, medical ethics, etc.) and bias (gender, age, race, etc.).
Please see the detailed generations in our paper. We compare our generations to Llama-2-70B and ChatGPT-3.5 (Nov 27, 2023 version).
Significant research is still required to fully explore potential bias, fairness, and safety issues with this language model.
### Recommendations
**IMPORTANT!**
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations.
Understanding these limitations is especially important in a domain like medicine.
Therefore, we strongly recommend against using this model in production for natural language generation or for professional purposes related to health and medicine without comprehensive testing for your application.
## Training Details
### Training Data
Meditron’s domain-adaptive pre-training corpus GAP-Replay combines 48.1B tokens from four corpora:
- [**Clinical Guidelines**](https://huggingface.co/datasets/epfl-llm/guidelines): a new dataset of 46K internationally-recognized clinical practice guidelines from various healthcare-related sources, including hospitals and international organizations.
- **Medical Paper Abstracts**: 16.1M abstracts extracted from closed-access PubMed and PubMed Central papers.
- **Medical Papers**: full-text articles extracted from 5M publicly available PubMed and PubMed Central papers.
- **Replay Data**: 400M tokens of general domain pretraining data sampled from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
<img width="60%" src="gap-replay.png" alt="Alt text" title="Meditron-logo">
#### Data Preprocessing
Please see the detailed preprocessing procedure in our paper.
### Training Procedure
We used the [Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) distributed training library, a derivative of Nvidia's Megatron LM project, to optimize training efficiency.
Hardware consists of 16 nodes of 8x NVIDIA A100 (80GB) SXM GPUs connected by NVLink and NVSwitch with a single Nvidia ConnectX-6 DX network card and equipped with 2 x AMD EPYC 7543 32-Core Processors and 512 GB of RAM.
The nodes are connected via RDMA over Converged Ethernet.
Our three-way parallelism scheme uses:
- Data Parallelism (DP -- different GPUs process different subsets of the batches) of 2,
- Pipeline Parallelism (PP -- different GPUs process different layers) of 8,
- Tensor Parallelism (TP -- different GPUs process different subtensors for matrix multiplication) of 8.
Together, DP × PP × TP = 2 × 8 × 8 = 128 GPUs, matching the 16 × 8 A100 setup described above.
#### Training Hyperparameters
| | |
| --- | ------ |
| bf16 | true |
| lr | 1.5e-4 |
| eps | 1e-5 |
| betas | \[0.9, 0.95\] |
| clip_grad | 1 |
| weight decay | 0.1 |
| DP size | 2 |
| TP size | 8 |
| PP size | 8 |
| seq length | 4096 |
| lr scheduler | cosine|
| min lr | 1e-6 |
| warmup iteration | 2000 |
| micro batch size | 2 |
| global batch size | 512 |
| | |
#### Speeds, Sizes, Times
The model was trained in September and October 2023.
The model architecture is exactly that of Llama 2, meaning:
| | |
| --- | ------ |
| Model size | 70B |
| Hidden dimension | 8192 |
| Num. attention heads | 64 |
| Num. layers | 80 |
| | | |
We train the 70B model on 48e9 tokens, at a throughput of about 40,200 tokens / second.
This amounts to a bfloat16 model flops utilization of roughly 42.3%.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
#### Metrics
- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.
### Results
We finetune meditron-70b and llama-2-70b on each benchmark (pubmedqa, medmcqa, medqa)'s training data individually.
We report the finetuned models' performance with self-consistency chain-of-thought as the inference mode.
For MMLU-Medical, models finetuned on MedMCQA are used for inference.
For MedQA-4-Option, models finetuned on MedQA are used for inference.
For a more detailed performance analysis, please see our paper.
| Dataset | meditron-70b | llama-2-70b | med42-70b* | clinical-camel-70b* |
| --- | ----- | ----- | ----- | ----- |
| MMLU-Medical | 77.6 | 77.9 | 74.5 | 65.7 |
| PubMedQA | 81.6 | 80.0 | 61.2 | 67.0 |
| MedMCQA | 66.0 | 62.6 | 59.2 | 46.7 |
| MedQA | 64.4 | 61.5 | 59.1 | 50.8 |
| MedQA-4-Option | 70.2 | 63.8 | 63.9 | 56.8 |
| Avg | 72.0 | 69.2 | 63.6 | 57.4 |
**Note**: models with * are already instruction-tuned, so we exclude them from further finetuning on any training data.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** 128 x NVIDIA A100 (80GB) SXM
- **Total GPU hours:** 42,496
- **Hardware Provider:** EPFL Research Computing Platform
- **Compute Region:** Switzerland
- **Carbon Emitted:** Switzerland has a carbon efficiency of 0.016 kgCO2/kWh (https://www.carbonfootprint.com/docs/2018_8_electricity_factors_august_2018_-_online_sources.pdf). 332 hours of 128 A100s means 42,496 GPU-hours at a TDP of 400 W. Assuming a power usage effectiveness of 1.8, total emissions are estimated to be:
(400 W per GPU / 1000 W-per-kW × 332 h × 128 GPUs × 0.016 kgCO2/kWh) × 1.8 PUE = 486 kgCO2.
## Citation
**BibTeX:**
If you use Meditron or its training data, please cite our work:
```
@misc{chen2023meditron70b,
title={MEDITRON-70B: Scaling Medical Pretraining for Large Language Models},
author={Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
year={2023},
eprint={2311.16079},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@software{epfmedtrn,
author = {Zeming Chen and Alejandro Hernández Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
title = {MediTron-70B: Scaling Medical Pretraining for Large Language Models},
month = November,
year = 2023,
url = {https://github.com/epfLLM/meditron}
}
```
<!-- original-model-card end -->
|
jan-hq/supermario-v2 | jan-hq | 2024-03-04T13:36:09Z | 745 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-11T12:49:07Z | ---
language:
- en
license: apache-2.0
model-index:
- name: supermario-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=janhq/supermario-v2
name: Open LLM Leaderboard
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This model merges 3 models using the `DARE_TIES` merge method:
1. [OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
2. [MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
3. [Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
- base model: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
The yaml config file for this model is here:
```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
merge_method: dare_ties
models:
- model: mistralai/Mistral-7B-v0.1
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
parameters:
density: 0.8
weight: 0.4
- model: Q-bert/MetaMath-Cybertron-Starling
parameters:
density: 0.8
weight: 0.3
- model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
density: 0.8
weight: 0.3
parameters:
int8_mask: true
```
# Prompt template:
- **ChatML**
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- **System**
```
### System:
{system}
### User:
{user}
### Assistant:
```
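Putting the ChatML template above into practice, here is a minimal 🤗 transformers sketch; the repo id follows this card's header, and the dtype, device placement, and sampling settings are assumptions.
```python
# A minimal sketch of chatting with this merge via transformers using the ChatML template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jan-hq/supermario-v2"  # assumption: adjust if the hub id differs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain model merging in one paragraph.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```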
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open source, ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_janhq__supermario-v2).
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 72.36|
| ARC (25-shot) | 68.52 |
| HellaSwag (10-shot) | 86.51 |
| MMLU (5-shot) | 64.88|
| TruthfulQA (0-shot) | 60.58 |
| Winogrande (5-shot) | 81.37 |
| GSM8K (5-shot) | 72.18 |
# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_janhq__supermario-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.34|
|AI2 Reasoning Challenge (25-Shot)|68.52|
|HellaSwag (10-Shot) |86.51|
|MMLU (5-Shot) |64.88|
|TruthfulQA (0-shot) |60.58|
|Winogrande (5-shot) |81.37|
|GSM8k (5-shot) |72.18|
|
EmbeddedLLM/Mistral-7B-Merge-14-v0.2 | EmbeddedLLM | 2024-01-20T16:04:20Z | 745 | 9 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"base_model:janai-hq/trinity-v1",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-18T08:35:35Z | ---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
base_model:
- janai-hq/trinity-v1
- EmbeddedLLM/Mistral-7B-Merge-14-v0
---
# Update 2023-12-19
In light of the [dataset contamination issue among the merged models](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474)
raised by the community in recent days, in particular
[berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha),
[Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling), and
[janai-hq/trinity-v1](https://huggingface.co/janai-hq/trinity-v1),
we decided to remake another model without the models mentioned.
Additionally, their CC-BY-NC-4.0 license is restrictive and thus not suitable for an open model.
# Open LLM Leaderboard
For reference, this model obtained an average score of 72.88.
| Average | 72.88 |
|------------|-------|
| ARC | 68.86 |
| HellaSwag | 87.01 |
| MMLU | 65.05 |
| TruthfulQA | 64.19 |
| Winogrande | 81.69 |
| GSM8K | 70.51 |
# Model Description
This is an experiment to test merging 14 models using DARE TIES 🦙
The merged model is then merged again with [janai-hq/trinity-v1](https://huggingface.co/janai-hq/trinity-v1) using Gradient SLERP.
The result is a base model that performs quite well but requires some further instruction fine-tuning.
The 14 models are as follows:
1. [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
2. [ehartford/dolphin-2.2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b)
3. [SciPhi/SciPhi-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k)
4. [ehartford/samantha-1.2-mistral-7b](https://huggingface.co/ehartford/samantha-1.2-mistral-7b)
5. [Arc53/docsgpt-7b-mistral](https://huggingface.co/Arc53/docsgpt-7b-mistral)
6. [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
7. [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
8. [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
9. [v1olet/v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B)
10. [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1)
11. [TIGER-Lab/MAmmoTH-7B-Mistral](https://huggingface.co/TIGER-Lab/MAmmoTH-7B-Mistral)
12. [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
13. [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
14. [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
- base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
The yaml config file for this model is here:
```yaml
slices:
- sources:
- model: janai-hq/trinity-v1
layer_range: [0, 32]
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0
layer_range: [0, 32]
merge_method: slerp
base_model: janai-hq/trinity-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
TheBloke/LLaMA-Pro-8B-Instruct-GGUF | TheBloke | 2024-01-07T12:42:03Z | 745 | 20 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:TencentARC/LLaMA-Pro-8B-Instruct",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2024-01-07T12:36:04Z | ---
base_model: TencentARC/LLaMA-Pro-8B-Instruct
inference: false
license: llama2
model_creator: ARC Lab, Tencent PCG
model_name: Llama Pro 8B Instruct
model_type: llama
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama Pro 8B Instruct - GGUF
- Model creator: [ARC Lab, Tencent PCG](https://huggingface.co/TencentARC)
- Original model: [Llama Pro 8B Instruct](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [ARC Lab, Tencent PCG's Llama Pro 8B Instruct](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF)
* [ARC Lab, Tencent PCG's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ToRA
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-pro-8b-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q2_K.gguf) | Q2_K | 2 | 3.49 GB| 5.99 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-pro-8b-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 3.64 GB| 6.14 GB | very small, high quality loss |
| [llama-pro-8b-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 4.08 GB| 6.58 GB | very small, high quality loss |
| [llama-pro-8b-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 4.46 GB| 6.96 GB | small, substantial quality loss |
| [llama-pro-8b-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q4_0.gguf) | Q4_0 | 4 | 4.74 GB| 7.24 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-pro-8b-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 4.77 GB| 7.27 GB | small, greater quality loss |
| [llama-pro-8b-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 5.06 GB| 7.56 GB | medium, balanced quality - recommended |
| [llama-pro-8b-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q5_0.gguf) | Q5_0 | 5 | 5.77 GB| 8.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-pro-8b-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 5.77 GB| 8.27 GB | large, low quality loss - recommended |
| [llama-pro-8b-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 5.93 GB| 8.43 GB | large, very low quality loss - recommended |
| [llama-pro-8b-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q6_K.gguf) | Q6_K | 6 | 6.86 GB| 9.36 GB | very large, extremely low quality loss |
| [llama-pro-8b-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/LLaMA-Pro-8B-Instruct-GGUF/blob/main/llama-pro-8b-instruct.Q8_0.gguf) | Q8_0 | 8 | 8.88 GB| 11.38 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/LLaMA-Pro-8B-Instruct-GGUF and below it, a specific filename to download, such as: llama-pro-8b-instruct.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/LLaMA-Pro-8B-Instruct-GGUF llama-pro-8b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/LLaMA-Pro-8B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-Pro-8B-Instruct-GGUF llama-pro-8b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m llama-pro-8b-instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration (CPU only)
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./llama-pro-8b-instruct.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|user|>\n{prompt}\n<|assistant|>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./llama-pro-8b-instruct.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
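For reference, a minimal, illustrative LangChain + llama-cpp-python sketch (assuming `langchain-community` and `llama-cpp-python` are installed; the file path and generation settings are placeholders, not recommendations):
```python
from langchain_community.llms import LlamaCpp

# Point LlamaCpp at a local GGUF file downloaded from this repo.
llm = LlamaCpp(
    model_path="./llama-pro-8b-instruct.Q4_K_M.gguf",
    n_ctx=4096,        # context length
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

# The model's prompt template, as documented above.
prompt = "<|user|>\nWrite a haiku about llamas.\n<|assistant|>"
print(llm.invoke(prompt))
```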
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: ARC Lab, Tencent PCG's Llama Pro 8B Instruct
# LLaMA-PRO-Instruct Model Card
## Model Description
LLaMA-PRO-Instruct is a transformative expansion of the LLaMA2-7B model, now boasting 8.3 billion parameters. It uniquely specializes in programming, coding, and mathematical reasoning, maintaining versatility in general language tasks.
## Development and Training
This model, developed by Tencent ARC team, extends LLaMA2-7B using innovative block expansion techniques. It's meticulously trained on a diverse blend of coding and mathematical data, encompassing over 80 billion tokens.
## Intended Use
LLaMA-PRO-Instruct is ideal for complex NLP challenges, excelling in programming, mathematical reasoning, and general language processing, suitable for both specialized and broad applications.
## Performance
It surpasses its predecessors in the LLaMA series, especially in code domains, demonstrating exceptional competence as a comprehensive language model.
## Limitations
Despite advancements, it may encounter difficulties in highly niche or nuanced tasks.
## Ethical Considerations
Users are advised to consider inherent biases and responsibly manage its application across various fields.
<!-- original-model-card end -->
|
tuantran1632001/Psyfighter2-Orca2-13B-ties | tuantran1632001 | 2024-04-06T18:07:07Z | 745 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"microsoft/Orca-2-13b",
"KoboldAI/LLaMA2-13B-Psyfighter2",
"base_model:KoboldAI/LLaMA2-13B-Psyfighter2",
"base_model:microsoft/Orca-2-13b",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-11T09:48:59Z | ---
license: other
tags:
- merge
- mergekit
- lazymergekit
- microsoft/Orca-2-13b
- KoboldAI/LLaMA2-13B-Psyfighter2
base_model:
- KoboldAI/LLaMA2-13B-Psyfighter2
- microsoft/Orca-2-13b
license_name: microsoft-research-license
model-index:
- name: Psyfighter2-Orca2-13B-ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.4
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
name: Open LLM Leaderboard
---
# Psyfighter2-Orca2-ties
Psyfighter2-Orca2-ties is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
* [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
This is the very first merge I have ever attempted. The motivation behind it is to try to create a 13B version of [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B). I don't have a good GPU (GTX 1660 6GB), so although I can merge the model, I cannot actually run it. However, the Open LLM Leaderboard scores this merge at a 63.48 average, which is higher than both KoboldAI/LLaMA2-13B-Psyfighter2 and jebcarter/psyonic-cetacean-20B, so I must have done something right. The next step is to quantize this merge into GGUF so I can actually run it with [KoboldCpp](https://github.com/LostRuins/koboldcpp).
## 🧩 Configuration
```yaml
models:
- model: KoboldAI/LLaMA2-13B-Psyfighter2
- model: microsoft/Orca-2-13b
parameters:
density: 0.40
weight: [0, 0.3, 0.7, 1]
merge_method: ties
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
parameters:
normalize: true
int8_mask: true
dtype: float16
```
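This card doesn't include a usage snippet, so here is a minimal, hedged sketch for loading the merge with `transformers` (the 4-bit bitsandbytes quantization is my assumption to fit a 13B model on modest hardware, not part of the original workflow):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tuantran1632001/Psyfighter2-Orca2-13B-ties"

# 4-bit NF4 quantization so the 13B merge fits in less VRAM (assumption, not author guidance).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Tell me a short story about a whale.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```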
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_tuantran1632001__Psyfighter2-Orca2-13B-ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.48|
|AI2 Reasoning Challenge (25-Shot)|62.46|
|HellaSwag (10-Shot) |81.74|
|MMLU (5-Shot) |60.31|
|TruthfulQA (0-shot) |55.40|
|Winogrande (5-shot) |77.27|
|GSM8k (5-shot) |43.67|
|
dfurman/HermesBagel-34B-v0.1 | dfurman | 2024-03-23T03:53:56Z | 745 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"NousResearch/Nous-Hermes-2-Yi-34B",
"jondurbin/bagel-dpo-34b-v0.2",
"conversational",
"base_model:NousResearch/Nous-Hermes-2-Yi-34B",
"base_model:jondurbin/bagel-dpo-34b-v0.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-13T21:04:46Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Nous-Hermes-2-Yi-34B
- jondurbin/bagel-dpo-34b-v0.2
base_model:
- NousResearch/Nous-Hermes-2-Yi-34B
- jondurbin/bagel-dpo-34b-v0.2
model-index:
- name: HermesBagel-34B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/HermesBagel-34B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/HermesBagel-34B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/HermesBagel-34B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.34
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/HermesBagel-34B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/HermesBagel-34B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/HermesBagel-34B-v0.1
name: Open LLM Leaderboard
---
# HermesBagel-34B-v0.1
HermesBagel-34B-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: NousResearch/Nous-Hermes-2-Yi-34B
layer_range: [0, 60]
- model: jondurbin/bagel-dpo-34b-v0.2
layer_range: [0, 60]
merge_method: slerp
base_model: NousResearch/Nous-Hermes-2-Yi-34B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## Basic Usage
<details>
<summary>Setup</summary>
```python
!pip install -qU transformers accelerate bitsandbytes
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
BitsAndBytesConfig
)
import torch
model = "dfurman/HermesBagel-34B-v0.1"
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForCausalLM.from_pretrained(
model,
torch_dtype=torch.bfloat16,
device_map="auto",
quantization_config=nf4_config,
)
```
</details>
```python
messages = [
{"role": "user", "content": "What is a large language model?"},
]
print("\n\n*** Prompt:")
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
return_tensors="pt",
)
print(tokenizer.decode(input_ids[0]))
print("\n\n*** Generate:")
with torch.autocast("cuda", dtype=torch.bfloat16):
output = model.generate(
input_ids=input_ids.to("cuda"),
max_new_tokens=256,
return_dict_in_generate=True,
)
response = tokenizer.decode(
output["sequences"][0][len(input_ids[0]):],
skip_special_tokens=True
)
print(response)
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__HermesBagel-34B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.15|
|AI2 Reasoning Challenge (25-Shot)|70.56|
|HellaSwag (10-Shot) |85.74|
|MMLU (5-Shot) |77.38|
|TruthfulQA (0-shot) |67.34|
|Winogrande (5-shot) |84.61|
|GSM8k (5-shot) |65.28|
|
hamxea/Llama-2-13b-chat-hf-activity-fine-tuned-v4 | hamxea | 2024-03-31T17:57:03Z | 745 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"medical",
"text-generation-inference",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-17T11:25:50Z | ---
license: other
language:
- en
library_name: transformers
tags:
- medical
- text-generation-inference
---
# 🦙 Llama for Huggingface Transformers
Llama-13B converted from the official [Llama](https://github.com/facebookresearch/Llama/blob/main/MODEL_CARD.md) weights to a Hugging Face model via [HF's conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py) so that it works with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
This is updated from [decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/Llama-7b-hf) (since many pull requests have not yet been merged in decapoda's repo, a new repo was opened here). It includes:
(1) Naming changes (LLaMA -> Llama) to match the `transformers` naming convention, in both `LlamaForCausalLM` and `LlamaTokenizer`. This works with `transformers>=4.28.0`.
(2) Model checkpoints saved in 2 shards (instead of the 33 shards in [decapoda-research/Llama-7b-hf](https://huggingface.co/decapoda-research/Llama-7b-hf)). Fewer shards speed up loading from disk.
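As a quick illustration of point (1), a minimal loading sketch (assuming `transformers>=4.28.0` and enough memory for a 13B model; the dtype and device_map are sensible defaults, not author guidance):
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "hamxea/Llama-2-13b-chat-hf-activity-fine-tuned-v4"

# The renamed classes (LlamaTokenizer / LlamaForCausalLM) work directly with this repo.
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```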
# Llama Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
Llama was trained between December. 2022 and Feb. 2023.
**Model version**
This is version 1 of the model.
**Model type**
Llama is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “Llama, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/Llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/Llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about Llama can be sent via the [GitHub repository](https://github.com/facebookresearch/Llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of Llama is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
Llama is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >Llama</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learning rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of Llama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>Llama</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of Llama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | Llama Average | 66.6 |
*Table 3 - Summary of bias in our model outputs.*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
Llama is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content. |
FelixChao/Patronum-7B | FelixChao | 2024-01-30T12:10:58Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T12:05:06Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
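Since the card doesn't provide code yet, here is a generic, hedged starting point (the repo id comes from this model's metadata; everything else is a standard `transformers` pattern, not author guidance):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FelixChao/Patronum-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision to save memory
    device_map="auto",
)

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```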
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ConvexAI/Harmony-4x7B-bf16 | ConvexAI | 2024-02-04T07:54:26Z | 745 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-01T06:44:45Z | ---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---


# After merging, I tested this model for a long time...it blew away my expectations, so I just had it name itself.
[GGUF FILES HERE](https://huggingface.co/Kquant03/Harmony-4x7B-GGUF)
[Join our Discord!](https://discord.gg/XFChPkGd)
[Buttercup](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16) (made by kquant) was my best model for a long time...but I genuinely think this one improves on reasoning and logic while retaining the RP value. The frequent hyphens suggest it has some pretty heavy GPT-4 data in it, though. Since the whole point is to eventually outperform GPT-4, it's probably best to pretrain with GPT-4 data, then fine-tune off of it, then DPO the resulting fine-tune...perhaps running another epoch of the previous fine-tune afterwards.
The config looks like this...(detailed version is in the files and versions):
- [jan-hq/supermario-v2](https://huggingface.co/jan-hq/supermario-v2) - base
- [jan-hq/supermario-v2](https://huggingface.co/jan-hq/supermario-v2) - expert #1
- [jan-hq/supermario-v2](https://huggingface.co/jan-hq/supermario-v2) - expert #2
- [ConvexAI/Metabird-7B](https://huggingface.co/ConvexAI/Metabird-7B) - expert #3
- [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #4
# It manages to beat Buttercup-4x7B in MMLU, and I personally think it's on par with it, if not better.

# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
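To make the routing concrete, here is a tiny, illustrative top-2 MoE layer in PyTorch (dimensions, expert count, and names are made up for the sketch; this is not the actual Mixtral implementation):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopTwoMoELayer(nn.Module):
    """Illustrative sparse MoE layer: a linear gate routes each token to its top-2 experts."""
    def __init__(self, d_model=64, d_ff=256, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # the router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.gate(x)                        # (tokens, n_experts)
        weights, idx = logits.topk(2, dim=-1)        # pick the top-2 experts per token
        weights = F.softmax(weights, dim=-1)         # renormalize over the chosen pair
        out = torch.zeros_like(x)
        for k in range(2):                           # combine the two experts' outputs additively
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

# quick check
layer = TopTwoMoELayer()
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```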
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously. |
NeuralNovel/Confinus-2x7B | NeuralNovel | 2024-03-05T20:22:33Z | 745 | 6 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T01:31:48Z | ---
language:
- en
license: apache-2.0
tags:
- moe
- merge
model-index:
- name: Confinus-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.88
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
---

# NeuralNovel/Confinus-2x7B
In the boundless sands ..
A model to test how MoE will route without square expansion.
## "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
<a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
<a href='https://discord.gg/KFS229xD' target='_blank'><img width='140' height='500' style='border:0px;height:36px;' src='https://i.ibb.co/tqwznYM/Discord-button.png' border='0' alt='Join Our Discord!' /></a>
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
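For intuition, here is a rough sketch of such a load-balancing term, following the Switch Transformers formulation (illustrative only; it is not the exact `aux_loss` code inside `transformers`):
```python
import torch
import torch.nn.functional as F

def load_balancing_loss(router_logits: torch.Tensor, n_experts: int) -> torch.Tensor:
    """Switch-style auxiliary loss: n_experts * sum_e (fraction_of_tokens_e * mean_router_prob_e)."""
    probs = F.softmax(router_logits, dim=-1)                             # (tokens, n_experts)
    top1 = probs.argmax(dim=-1)                                          # expert each token is routed to
    tokens_per_expert = F.one_hot(top1, n_experts).float().mean(dim=0)   # f_e: fraction of tokens per expert
    mean_prob_per_expert = probs.mean(dim=0)                             # P_e: mean router probability per expert
    return n_experts * torch.sum(tokens_per_expert * mean_prob_per_expert)

# Balanced routing gives a loss near 1.0; collapsing onto one expert pushes it toward n_experts.
print(load_balancing_loss(torch.randn(32, 4), n_experts=4))
```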

## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Confinus-2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.55|
|AI2 Reasoning Challenge (25-Shot)|73.89|
|HellaSwag (10-Shot) |88.82|
|MMLU (5-Shot) |65.12|
|TruthfulQA (0-shot) |71.88|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |68.84|
|
KnutJaegersberg/Deita-4b | KnutJaegersberg | 2024-03-04T16:30:17Z | 745 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:KnutJaegersberg/Deita-6k",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-07T15:18:42Z | ---
license: other
datasets:
- KnutJaegersberg/Deita-6k
license_name: qwen
license_link: LICENSE
model-index:
- name: Deita-4b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 46.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-4b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 71.81
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-4b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-4b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.23
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-4b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-4b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/Deita-4b
name: Open LLM Leaderboard
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### User:
How do you fine tune a large language model?
### Assistant:
```
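A minimal, illustrative way to run this prompt format with `transformers` (generation settings are placeholders, not recommendations from the model author):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/Deita-4b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "### System:\nYou are an AI assistant. User will give you a task. Your goal is to complete the task "
    "as faithfully as you can. While performing the task think step-by-step and justify your steps.\n"
    "### User:\nHow do you fine tune a large language model?\n### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```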
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__Deita-4b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.43|
|AI2 Reasoning Challenge (25-Shot)|46.08|
|HellaSwag (10-Shot) |71.81|
|MMLU (5-Shot) |55.46|
|TruthfulQA (0-shot) |50.23|
|Winogrande (5-shot) |66.14|
|GSM8k (5-shot) |48.90|
|
aloobun/Reyna-Mini-1.8B-v0.1 | aloobun | 2024-02-15T21:17:33Z | 745 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chatml",
"finetune",
"gpt4",
"synthetic data",
"custom_code",
"conversational",
"dataset:teknium/OpenHermes-2.5",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T02:15:13Z | ---
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat/raw/main/LICENSE
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
tags:
- chatml
- finetune
- gpt4
- synthetic data
- custom_code
- qwen2
---

- Finetuned from [Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat) with SFT on teknium's OpenHermes-2.5 dataset.
- This marks the inception of my Qwen1.5 LLM series, with this model laying the foundation for what lies ahead.
- Format: ChatML
- ```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Next step would be to do a DPO train on top.
## Benchmarks:
|Avg. | Arc | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|--|--|--|--|--|--|--|
|41.46 | 35.24 |60.42 | 45.37 | 41.4 | 60.85 | 5.46 |
## Example:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, StoppingCriteria
import torch
class MyStoppingCriteria(StoppingCriteria):
def __init__(self, target_sequence, prompt):
self.target_sequence = target_sequence
self.prompt=prompt
def __call__(self, input_ids, scores, **kwargs):
generated_text = tokenizer.decode(input_ids[0])
generated_text = generated_text.replace(self.prompt,'')
if self.target_sequence in generated_text:
return True
return False
def __len__(self):
return 1
def __iter__(self):
yield self
modelpath="aloobun/Reyna-Mini-1.8B-v0.1"
model = AutoModelForCausalLM.from_pretrained(
modelpath,
torch_dtype=torch.bfloat16,
device_map="cuda",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
modelpath,
trust_remote_code=True,
use_fast=False,
)
prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nReflect on a real-world scenario where understanding probability theory could make a significant difference in decision-making.\n<|im_start|>assistant\n"
encoded_input = tokenizer(prompt, return_tensors='pt')
input_ids=encoded_input['input_ids'].cuda()
streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True)
op = model.generate(
input_ids,
streamer=streamer,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.6,
top_p=0.8,
max_new_tokens=512,
stopping_criteria=MyStoppingCriteria("<|im_end|>", prompt)
)
```
## Output:
> One real-world scenario where understanding probability theory can make a significant difference in decision-making is in the field of finance. Financial institutions, such as banks and investment firms, must make decisions about lending money to individuals or businesses, and how much risk they should take on.
> In this case, understanding probability theory would help financial analysts and investors make more informed decisions by providing them with information about the likelihood of different outcomes. For example, if an investor wants to invest in a particular stock, they might want to understand the probability that it will perform well over time, based on historical data and market trends. They might also be interested in understanding the probability of defaulting on a loan, which would help them evaluate whether it's worth taking on that risk.
> Probability theory provides valuable insights into how events are likely to occur and what factors contribute to those probabilities. By using statistical models and simulations, financial professionals can estimate the likelihood of different scenarios and make better-informed decisions about how to allocate their resources. This can lead to increased profits for financial institutions and improved customer satisfaction for individual investors.<|im_end|> |
Yuma42/KangalKhan-RawEmerald-7B | Yuma42 | 2024-05-01T06:49:11Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"argilla/CapybaraHermes-2.5-Mistral-7B",
"argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"conversational",
"en",
"base_model:argilla/CapybaraHermes-2.5-Mistral-7B",
"base_model:argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T15:56:44Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- argilla/CapybaraHermes-2.5-Mistral-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
base_model:
- argilla/CapybaraHermes-2.5-Mistral-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
model-index:
- name: KangalKhan-RawEmerald-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-RawEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-RawEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.23
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-RawEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-RawEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-RawEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-RawEmerald-7B
name: Open LLM Leaderboard
---
# KangalKhan-RawEmerald-7B
I suggest using ChatML (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are a friendly assistant.<|im_end|>
<|im_start|>user
Hello, what are you?<|im_end|>
<|im_start|>assistant
I am an AI language model designed to assist users with information and answer their questions. How can I help you today?<|im_end|>
```
Q4_K_S GGUF:
https://huggingface.co/Yuma42/KangalKhan-RawEmerald-7B-GGUF
More GGUF variants by [mradermacher](https://huggingface.co/mradermacher):
WARNING: I have observed that these versions output typos in rare cases. If you have the same problem, use my Q4_K_S GGUF above.
https://huggingface.co/mradermacher/KangalKhan-RawEmerald-7B-GGUF
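To run one of these GGUF builds locally, a minimal llama-cpp-python sketch might look like the following. This is not from the original card; it assumes a recent llama-cpp-python with Hugging Face Hub support, and the exact `.gguf` filename inside the repo is an assumption (hence the glob pattern).
```python
# Hedged sketch for the GGUF build; filename pattern and n_ctx are assumptions.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Yuma42/KangalKhan-RawEmerald-7B-GGUF",
    filename="*Q4_K_S.gguf",  # glob; assumes standard quant naming
    n_ctx=4096,
)
messages = [
    {"role": "system", "content": "You are a friendly assistant."},
    {"role": "user", "content": "Hello, what are you?"},
]
out = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.7)
print(out["choices"][0]["message"]["content"])
```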
KangalKhan-RawEmerald-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
* [argilla/distilabeled-OpenHermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: teknium/OpenHermes-2.5-Mistral-7B
# no parameters necessary for base model
- model: argilla/CapybaraHermes-2.5-Mistral-7B
parameters:
density: 0.6
weight: 0.5
- model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B
parameters:
density: 0.6
weight: 0.5
merge_method: ties
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
normalize: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-RawEmerald-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yuma42__KangalKhan-RawEmerald-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.09|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.75|
|MMLU (5-Shot) |63.23|
|TruthfulQA (0-shot) |57.58|
|Winogrande (5-shot) |78.22|
|GSM8k (5-shot) |62.85|
|
liminerity/Blur-7b-slerp-v1.46 | liminerity | 2024-03-11T18:21:47Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/merge",
"bardsai/jaskier-7b-dpo-v5.6",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-26T08:39:22Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/merge
- bardsai/jaskier-7b-dpo-v5.6
model-index:
- name: Blur-7b-slerp-v1.46
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.61
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46
name: Open LLM Leaderboard
---
# Blur-7b-slerp-v1.46
Blur-7b-slerp-v1.46 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/merge](https://huggingface.co/liminerity/merge)
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/merge
layer_range: [0, 32]
- model: bardsai/jaskier-7b-dpo-v5.6
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
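## 💻 Usage
The original card stops at the configuration; the sketch below is not from the card and assumes the merged tokenizer ships a Mistral-style chat template, mirroring the usual LazyMergekit usage block.
```python
!pip install -qU transformers accelerate

# Hedged usage sketch (not part of the original card).
from transformers import AutoTokenizer
import transformers
import torch

model = "liminerity/Blur-7b-slerp-v1.46"
messages = [{"role": "user", "content": "What is a SLERP merge?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```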
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blur-7b-slerp-v1.46)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.26|
|AI2 Reasoning Challenge (25-Shot)|73.29|
|HellaSwag (10-Shot) |89.07|
|MMLU (5-Shot) |64.37|
|TruthfulQA (0-shot) |76.61|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |69.67|
|
OEvortex/HelpingAI-Lite-1.5T | OEvortex | 2024-03-10T02:50:40Z | 745 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"Text-Generation",
"Transformers",
"HelpingAI",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:OEvortex/vortex-mini",
"dataset:Open-Orca/OpenOrca",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-09T10:23:16Z | ---
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca
language:
- en
metrics:
- speed
library_name: transformers
tags:
- Text-Generation
- Transformers
- HelpingAI
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
widget:
- text: |
<|system|>
You are a chatbot who can be a teacher!</s>
<|user|>
Explain me working of AI .</s>
<|assistant|>
---
🌟 **HelpingAI-Lite-1.5T Model Card** 🌟
📊 **Datasets used:**
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca
🗣️ **Language:**
- English (en)
🔒 **License:**
HelpingAI Simplified Universal License (HSUL)
🧠 **Model Overview:**
HelpingAI-Lite-1.5T is an advanced version of the HelpingAI-Lite model, trained on a vast corpus of 1.5 trillion tokens. This extensive training data enables the model to provide precise and insightful responses, particularly for coding tasks.
🔧 **Usage Example:**
```python
from transformers import pipeline
from accelerate import Accelerator
# Initialize the accelerator
accelerator = Accelerator()
# Initialize the pipeline
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite-1.5T", device=accelerator.device)
# Define the messages
messages = [
{
"role": "system",
"content": "You are a chatbot who can be a teacher",
},
{
"role": "user",
"content": "Explain me working of AI.",
},
]
# Prepare the prompt
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Generate predictions
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
# Print the generated text
print(outputs[0]["generated_text"])
``` |
Kukedlc/NeuralSirKrishna-7b | Kukedlc | 2024-05-25T17:09:43Z | 745 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralKrishna-7B-v3",
"Kukedlc/NeuralMarioMonarch-7B-slerp",
"liminerity/M7-7b",
"base_model:Kukedlc/NeuralKrishna-7B-v3",
"base_model:Kukedlc/NeuralMarioMonarch-7B-slerp",
"base_model:liminerity/M7-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T19:31:02Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralKrishna-7B-v3
- Kukedlc/NeuralMarioMonarch-7B-slerp
- liminerity/M7-7b
base_model:
- Kukedlc/NeuralKrishna-7B-v3
- Kukedlc/NeuralMarioMonarch-7B-slerp
- liminerity/M7-7b
model-index:
- name: NeuralSirKrishna-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.6
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralSirKrishna-7b
name: Open LLM Leaderboard
---
<div align="center" style="font-size: 44px;">🧞 NeuralSirKrishna-7b</div>
## 👩💻 Try here!
[Talking to Sir Neural Krishna 7b](https://huggingface.co/spaces/Kukedlc/Neural-Sir-Krishna-7b-GGUF)

NeuralSirKrishna-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralKrishna-7B-v3](https://huggingface.co/Kukedlc/NeuralKrishna-7B-v3)
* [Kukedlc/NeuralMarioMonarch-7B-slerp](https://huggingface.co/Kukedlc/NeuralMarioMonarch-7B-slerp)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
# Chat
- User: Create an advanced theory about the origin of life on earth that is new, innovative, and disruptive, totally new with no resemblance to anything else but at the same time that can be real.
- System: I. The Symbiotic Cosmic Initiation Hypothesis (SCIH)
To propose a groundbreaking and disruptive theory about the origin of life on Earth, let's envision an interconnected web of cosmic events that transcends conventional understanding. The Symbiotic Cosmic Initiation Hypothesis (SCIH) posits that life didn't arise from a singular event or molecular accident but rather from a complex, multifaceted interplay between extraterrestrial entities and our planet's nascent environment. Here are its core tenets:
1. Prebiotic Symbionts: SCIH rejects the notion of a solitary "building block" or "spark of life." Instead, it postulates that prebiotic symbionts – self-replicating, organic-inorganic complexes – arrived on Earth from space via meteorites and comets. These symbionts, composed of a diverse array of biomolecules and mineral catalysts, were the products of extraterrestrial ecosystems or "astrobiomes."
2. Astrobiome Cross-Fertilization: The constant exchange of matter between our solar system's planets and asteroid belts facilitated the transfer of these prebiotic symbionts across vast distances. This cosmic symbiosis allowed for the intermingling of genetic and catalytic material, fostering evolutionary innovation and adaptability among these extraterrestrial life-forms.
3. Terrestrial Integration and Adaptation: Upon impacting Earth's surface, these prebiotic symbionts began to integrate with our planet's pre-existing chemical milieu. The resulting synergy between Earth-born and extraterrestrial biomolecules accelerated the formation of more complex organic structures, eventually leading to protocells – rudimentary, self-enclosed systems capable of rudimentary metabolism and replication.
4. The Catalytic Gaian Boost: As Earth's early atmosphere evolved to become more hospitable to life, a positive feedback loop between these protocells and our planet's environment ensued. The emergence of photosynthesis and other metabolic processes contributed to the creation of a self-regulating, life-nurturing environment – what we now call the Ga

## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralKrishna-7B-v3
# no parameters necessary for base model
- model: Kukedlc/NeuralKrishna-7B-v3
parameters:
density: 0.65
weight: 0.36
- model: Kukedlc/NeuralMarioMonarch-7B-slerp
parameters:
density: 0.6
weight: 0.34
- model: liminerity/M7-7b
parameters:
density: 0.4
weight: 0.3
merge_method: dare_ties
base_model: Kukedlc/NeuralKrishna-7B-v3
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralSirKrishna-7b"
messages = [{"role": "user", "content": """Create an advanced theory about the origin of life on earth that is new,
innovative, and disruptive, totally new with no resemblance to anything else but at the same time that can be real."""}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# 🍃 Family models:

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralSirKrishna-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.60|
|AI2 Reasoning Challenge (25-Shot)|73.72|
|HellaSwag (10-Shot) |89.05|
|MMLU (5-Shot) |64.63|
|TruthfulQA (0-shot) |75.60|
|Winogrande (5-shot) |85.32|
|GSM8k (5-shot) |71.27|
|
Kabster/BioMistral-MedicalQA-FT | Kabster | 2024-03-11T05:21:52Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-11T04:34:46Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for BioMistral-MedicalQA-FT
This is a fine-tuned BioMistral model based on the medical reasoning dataset - https://huggingface.co/datasets/mamachang/medical-reasoning
## Model Details
This is a fine-tuned BioMistral model based on the medical reasoning dataset - https://huggingface.co/datasets/mamachang/medical-reasoning
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kabster/BioMistral-MedicalQA-FT"
messages = [{"role": "user", "content": "Can bisoprolol cause insomnia?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
rwitz/experiment26-truthy-iter-0 | rwitz | 2024-03-12T04:43:01Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-12T03:49:50Z | ---
library_name: transformers
license: mit
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
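The original card leaves this section blank; below is a minimal hedged sketch, assuming the repository loads as a standard Mistral-architecture causal LM whose tokenizer ships a chat template.
```python
# Hedged sketch (not from the original card); all generation settings are illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="rwitz/experiment26-truthy-iter-0",
    torch_dtype=torch.float16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Give me one well-sourced fact about the Moon."}]
prompt = generator.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```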
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mychen76/openmixtral-6x7b-v2 | mychen76 | 2024-03-22T10:05:59Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T02:15:13Z | ---
license: apache-2.0
tags:
- merge
---
# openmixtral-6x7b-v2
openmixtral-6x7b-v2 (a quantized build of openmixtral-6x7b-merged_v2) is a 6x7B Mixture-of-Experts merge; the expert models and their routing prompts are listed in the configuration below.
## 🧩 Configuration
```yaml
base_model: mlabonne/Marcoro14-7B-slerp
experts:
- source_model: openchat/openchat-3.5-1210
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: Weyaxi/Einstein-v4-7B
positive_prompts:
- "physics"
- "biology"
- "chemistry"
- "science"
- source_model: BioMistral/BioMistral-7B
positive_prompts:
- "medical"
- "pubmed"
- "healthcare"
- "health"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: maywell/PiVoT-0.1-Starling-LM-RP
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: WizardLM/WizardMath-7B-V1.1
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
tokenizer_source: union
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mychen76/openmixtral-6x7b-v2"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Why the sky is blue"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mychen76__openmixtral-6x7b-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.33|
|AI2 Reasoning Challenge (25-Shot)|68.52|
|HellaSwag (10-Shot) |86.75|
|MMLU (5-Shot) |65.11|
|TruthfulQA (0-shot) |65.13|
|Winogrande (5-shot) |79.87|
|GSM8k (5-shot) |68.61|
|
kimou605/shadow-clown-BioMistral-7B-SLERP | kimou605 | 2024-03-15T01:14:43Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"CorticalStack/shadow-clown-7B-dare",
"BioMistral/BioMistral-7B-DARE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T01:11:23Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- CorticalStack/shadow-clown-7B-dare
- BioMistral/BioMistral-7B-DARE
---
# shadow-clown-BioMistral-7B-SLERP
shadow-clown-BioMistral-7B-SLERP is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [CorticalStack/shadow-clown-7B-dare](https://huggingface.co/CorticalStack/shadow-clown-7B-dare)
* [BioMistral/BioMistral-7B-DARE](https://huggingface.co/BioMistral/BioMistral-7B-DARE)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CorticalStack/shadow-clown-7B-dare
layer_range: [0, 32]
- model: BioMistral/BioMistral-7B-DARE
layer_range: [0, 32]
merge_method: slerp
base_model: CorticalStack/shadow-clown-7B-dare
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
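## 💻 Usage
Not part of the original card: a minimal hedged sketch for trying the merge, assuming the merged tokenizer keeps a usable Mistral-style chat template and that fp16 weights fit on your GPU.
```python
# Hedged sketch; chat-template availability is an assumption.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "kimou605/shadow-clown-BioMistral-7B-SLERP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline("text-generation", model=model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "List two common side effects of ibuprofen."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```
|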
automerger/Strangemerges_32Yamshadow-7B | automerger | 2024-03-20T00:25:09Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:Gille/StrangeMerges_32-7B-slerp",
"base_model:automerger/YamShadow-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-20T00:24:19Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- Gille/StrangeMerges_32-7B-slerp
- automerger/YamShadow-7B
---
# Strangemerges_32Yamshadow-7B
Strangemerges_32Yamshadow-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
* [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_32-7B-slerp
layer_range: [0, 32]
- model: automerger/YamShadow-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_32-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Strangemerges_32Yamshadow-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
RESMPDEV/Mistral-7B-v0.2 | RESMPDEV | 2024-03-24T05:00:11Z | 745 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-23T23:21:18Z | ---
license: apache-2.0
---
This is a hacked-together version of the new Mistral-7B-v0.2 FP16 weights, downloaded directly from Mistral's CDN.
The conversion was done by converting the monolithic pickle file straight to safetensors and building the index, which is suboptimal.
Credit to Mistral AI and the amazing team over there, and to Cognitive Computations, especially Eric Hartford, for tutelage and for helping navigate the LLM landscape.
~~As this is a mix of Mistral 7b v0.1 and Mistral 7b v0.2 files, it is to be considered a pre-alpha.~~
This conversion is suboptimal, so I would use https://huggingface.co/alpindale/Mistral-7B-v0.2-hf for the FP16 weights until Mistral AI does their official release.
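For reference, the conversion described above amounts to roughly the following. This is a hedged sketch only: the checkpoint filename is an assumption, and the index-building step for sharded safetensors is omitted.
```python
# Hedged sketch of a monolithic-pickle -> safetensors conversion.
# The input filename (consolidated.00.pth) is an assumption.
import torch
from safetensors.torch import save_file

state_dict = torch.load("consolidated.00.pth", map_location="cpu")
# safetensors requires contiguous, non-shared tensors
state_dict = {k: v.contiguous() for k, v in state_dict.items()}
save_file(state_dict, "model.safetensors", metadata={"format": "pt"})
```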
|
nbeerbower/bruphin-iota | nbeerbower | 2024-03-24T17:33:16Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:nbeerbower/bruphin-theta",
"base_model:pabloce/Dolphin-2.8-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T17:26:15Z | ---
license: apache-2.0
base_model:
- nbeerbower/bruphin-theta
- pabloce/Dolphin-2.8-slerp
library_name: transformers
tags:
- mergekit
- merge
---
# bruphin-iota
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-theta](https://huggingface.co/nbeerbower/bruphin-theta)
* [pabloce/Dolphin-2.8-slerp](https://huggingface.co/pabloce/Dolphin-2.8-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/bruphin-theta
layer_range: [0, 32]
- model: pabloce/Dolphin-2.8-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: pabloce/Dolphin-2.8-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
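## Usage
The auto-generated card has no usage example; here is a minimal hedged sketch that treats the merge as a plain causal LM loaded in bfloat16 (no chat template is assumed).
```python
# Hedged sketch, not part of the auto-generated mergekit card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/bruphin-iota"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "The three laws of thermodynamics are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```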
|
Aratako/Mixtral-8x7B-Instruct-v0.1-upscaled | Aratako | 2024-03-27T14:35:16Z | 745 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T16:35:24Z | ---
base_model:
- mistralai/Mixtral-8x7B-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- fr
- it
- de
- es
- en
---
# Mixtral-8x7B-Instruct-v0.1-upscaled
This is a frankenmerge of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) created by interleaving layers of itself using [mergekit](https://github.com/cg123/mergekit).
## Benchmark
The [mt-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) benchmark scores for this model and the original model are as follows:
**1-turn**
|Model|Size|Coding|Extraction|Humanities|Math|Reasoning|Roleplay|STEM|Writing|avg_score|
|---|---|---|---|---|---|---|---|---|---|---|
| Mixtral-8x7B-Instruct-v0.1 | 8x7B | 5.3 | **8.5** | **9.9** | **6.8** | 6.0 | 9.1 | 9.55 | 8.9 | 8.00625 |
| This model | around 8x12B? | **6.3** | 8.4 | **9.9** | 5.4 | **7.7** | **9.2** | **9.75** | **9.8** | **8.30625** |

**2-turn**
|Model|Size|Coding|Extraction|Humanities|Math|Reasoning|Roleplay|STEM|Writing|avg_score|
|---|---|---|---|---|---|---|---|---|---|---|
| Mixtral-8x7B-Instruct-v0.1 | 8x7B | 4.1 | **8.4** | 9.8 | **4.7** | **5.6** | 9.0 | **9.2** | **9.5** | **7.5375** |
| This model | around 8x12B? | **4.2** | 7.4 | **9.9** | 4.0 | 5.2 | **9.5** | 8.7 | 8.0 | 7.1125 |

## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* mistralai/Mixtral-8x7B-Instruct-v0.1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: passthrough
slices:
- sources:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
layer_range: [0, 8]
- sources:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
layer_range: [4, 12]
- sources:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
layer_range: [8, 16]
- sources:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
layer_range: [12, 20]
- sources:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
layer_range: [16, 24]
- sources:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
layer_range: [20, 28]
- sources:
- model: mistralai/Mixtral-8x7B-Instruct-v0.1
layer_range: [24, 32]
dtype: bfloat16
tokenizer_source: base
``` |
Joseph717171/Tess-10.7B-v2.0 | Joseph717171 | 2024-04-07T12:29:11Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2312.15166",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-29T12:57:43Z | ---
license: apache-2.0
base_model: []
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
---
# Credit for the model card's description goes to ddh0, mergekit, and migtissera
# Inspired by ddh0/Starling-LM-10.7B-beta and ddh0/Mistral-10.7B-Instruct-v0.2
# Tess-10.7B-v2.0
# Deprecated
"This model is deprecated due to the use of wrong sliding window parameter while training. Will update with the new model link in a couple of days." - migtissera
This is Tess-10.7B-v2.0, a depth-upscaled version of [migtissera/Tess-7B-v2.0](https://huggingface.co/migtissera/Tess-7B-v2.0).
This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7 billion parameter model.
Paper detailing how Depth-Up Scaling works: [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
# Prompt format same as [migtissera/Tess-7B-v2.0](https://huggingface.co/migtissera/Tess-7B-v2.0)
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 24]
model: /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0
- sources:
- layer_range: [8, 32]
model: /Users/jsarnecki/opt/migtissera/Tess-7B-v2.0
```
# GGUFs (Thanks to [bartowski](https://huggingface.co/bartowski))
https://huggingface.co/bartowski/Tess-10.7B-v2.0-GGUF
# exl2s (Thanks to [bartowski](https://huggingface.co/bartowski))
https://huggingface.co/bartowski/Tess-10.7B-v2.0-exl2

---
# Tess-7B-v2.0
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-7B-v2.0 was trained on the Mistral-7B-v0.2 base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Tess-7B-v2.0"
output_file_path = "./conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.5,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
|
lmstudio-community/dolphin-2.8-mistral-7b-v02-GGUF | lmstudio-community | 2024-04-05T01:54:26Z | 745 | 6 | null | [
"gguf",
"text-generation",
"en",
"dataset:cognitivecomputations/dolphin",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:teknium/openhermes-2.5",
"dataset:m-a-p/Code-Feedback",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"base_model:alpindale/Mistral-7B-v0.2-hf",
"license:apache-2.0",
"model-index",
"region:us"
] | text-generation | 2024-04-02T02:01:24Z | ---
base_model: alpindale/Mistral-7B-v0.2-hf
language:
- en
license: apache-2.0
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- jondurbin/airoboros-2.2.1
- teknium/openhermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
model-index:
- name: dolphin-2.8-mistral-7b-v02
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.469
verified: false
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
param_count: 7b
use_case: general
release_date: 28-03-2024
model_creator: cognitivecomputations
prompt_template: ChatML
system_prompt: You are Dolphin, a helpful AI assistant.
base_model: mistral
original_repo: cognitivecomputations/dolphin-2.8-mistral-7b-v02
---
## 💫 Community Model> Dolphin 2.8 Mistral 7b v0.2 🐬 by Cognitive Computations
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Cognitive Computations](https://huggingface.co/cognitivecomputations)<br>
**Original model**: [dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2536](https://github.com/ggerganov/llama.cpp/releases/tag/b2536)<br>
## Model Summary:
Dolphin 2.8 is a model from the Dolphin family, based on Mistral 0.2, and fine-tuned by [Eric Hartford](https://huggingface.co/ehartford/) and [Cognitive Computations](https://huggingface.co/cognitivecomputations).
With a broad range of uses, this model makes for a great general-use daily driver.<br>
## Prompt Template:
Choose the `ChatML` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Use case and examples
Dolphin is trained on a vast variety of data, including conversational, logical, and coding. This unique blend, along with a keen eye from the model creators, makes for a wonderful model that can tackle any task you throw at it.
### General knowledge:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.
<|im_end|>
<|im_start|>user
What does the ISO number mean in camera terminology?<|im_end|>
<|im_start|>assistant
The ISO number in camera terminology refers to the camera's sensitivity to light. ISO stands for International Organization for Standardization. It is a measure of the luminous flux received by a sensor or film.
A higher ISO number means that the camera's sensor is more sensitive to light, allowing for faster shutter speeds in low-light situations. However, increasing the ISO also increases the risk of image noise (graininess) in the final image. Conversely, a lower ISO number means that the camera's sensor is less sensitive to light, requiring slower shutter speeds in well-lit conditions to avoid overexposure.
In summary, the ISO number in camera terminology determines how sensitive the camera's sensor is to light, and it can be adjusted to balance exposure and image quality in various lighting conditions.
```
### Conversational:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.
<|im_end|>
<|im_start|>user
What is the meaning of life, and how can I find my purpose?<|im_end|>
<|im_start|>assistant
The meaning of life is a subjective and philosophical question that has been debated for centuries. Different people have different perspectives on what gives life meaning, as it can vary greatly from person to person. Some people find meaning in their relationships, their work, or their spiritual beliefs.
To find your purpose, you can follow these steps:
1. Reflect on your passions and interests: Think about the things that genuinely excite you and make you feel alive. Consider what activities you enjoy and what subjects you find most interesting.
2. Explore your talents and strengths: Identify the skills and abilities you possess. You may have natural talents that you can develop and use to contribute to society.
3. Assess your values and principles: Reflect on your beliefs, moral compass, and personal code of conduct. Consider how your values can guide you in determining your purpose.
4. Set goals and objectives: Establish short-term and long-term goals that align with your passions, talents, and values. This will help you stay focused and motivated in your journey to find your purpose.
5. Take action and learn from experience: Begin to pursue the goals and objectives you've set, and learn from the experiences you encounter along the way. Embrace the process of self-discovery and growth.
6. Seek guidance and support: Share your journey with friends, family, or mentors who can offer valuable advice, encouragement, and resources to help you find your purpose.
Ultimately, finding your purpose is a personal and ongoing process that requires introspection, self-awareness, and a willingness to explore new opportunities. Remember, there is no one-size-fits-all answer to the meaning of life, and your purpose may change and evolve over time.
```
### Coding:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.
<|im_end|>
<|im_start|>user
I have a multi select component in react, and I register it to a useForm. I want to validate that there are no more than 10 items selected on submission. Is this possible?<|im_end|>
<|im_start|>assistant
Yes, it is possible to validate the number of selected items in a multi select component in React with the help of `useForm`. You can use the `validate` function from `react-hook-form` to achieve this. Here's a step-by-step guide on how to do it:
First, import the necessary libraries:
import { useForm } from 'react-hook-form';
import { MultiSelect } from 'your-multi-select-component';
Next, create a form with the multi select component:
function MyForm() {
const { register, handleSubmit, errors } = useForm();
const onSubmit = data => {
// Handle form submission logic here
};
return (
<form onSubmit={handleSubmit(onSubmit)}>
<MultiSelect
name="multiSelect"
ref={register({ required: true, validate: value => value.length <= 10 })}
/>
{errors.multiSelect && (
<p>Please select no more than 10 items.</p>
)}
<input type="submit" />
</form>
);
}
In the above code, we register the multi select component with the `useForm` hook. The `validate` function is used to validate that the number of selected items is less than or equal to 10. If the validation fails, an error message will be displayed.
Remember to replace `'your-multi-select-component'` with the name of your multi select component.
That's it! Now, when the form is submitted, it will validate that no more than 10 items are selected in the multi select component.
```
## Technical Details
This model was trained with the following datasets:
- [cognitivecomputations/dolphin](https://huggingface.co/datasets/cognitivecomputations/dolphin)
- [cognitivecomputations/dolphin-coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder)
- [cognitivecomputations/samantha-data](https://huggingface.co/datasets/cognitivecomputations/samantha-data)
- [jondurbin/airoboros-2.2.1](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1)
- [teknium/openhermes-2.5](https://huggingface.co/datasets/teknium/openhermes-2.5)
- [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)
- [m-a-p/CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction)
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
nasiruddin15/Neural-grok-dolphin-Mistral-7B | nasiruddin15 | 2024-04-08T07:59:41Z | 745 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-05T22:08:59Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Nasir uddin]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [nasiruddin15/Mistral-dolphin-2.8-grok-instract-2-7B-slerp]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
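The section above is left as a placeholder by the author; the following is a hedged sketch only, assuming the checkpoint behaves like a standard Mistral-7B chat fine-tune with a chat template.
```python
# Hedged sketch (not from the original card); chat-template availability is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nasiruddin15/Neural-grok-dolphin-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Explain what a model merge is in two sentences."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```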
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Stevo/DiagloGPT-medium-spamton | Stevo | 2021-11-17T17:42:36Z | 744 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
# Deltarune Spamton DialoGPT Model
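The original card contains no usage example; below is a hedged sketch using the standard DialoGPT multi-turn chat pattern (generation settings are illustrative, not from the author).
```python
# Hedged sketch: standard DialoGPT-style chat loop for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Stevo/DiagloGPT-medium-spamton")
model = AutoModelForCausalLM.from_pretrained("Stevo/DiagloGPT-medium-spamton")

chat_history_ids = None
for step in range(3):
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = (
        torch.cat([chat_history_ids, new_input_ids], dim=-1) if chat_history_ids is not None else new_input_ids
    )
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Spamton:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```
|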
ktrapeznikov/gpt2-medium-topic-news-v2 | ktrapeznikov | 2021-05-23T06:14:58Z | 744 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
thumbnail:
widget:
- text: "topic climate source washington post title "
---
# GPT2-medium-topic-news
## Model description
GPT2-medium fine-tuned on a largish news corpus, conditioned on a topic, source, and title
## Intended uses & limitations
#### How to use
To generate a news article conditioned on a topic, source, title, or some subset of these, prompt the model with:
```python
f"topic {topic} source"
f"topic {topic} source {source} title"
f"topic {topic} source {source} title {title} body"
```
Try the following values for `topic`: `climate`, `weather`, `vaccination`.
Zero-shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model.to(device)

topic = "climate"
prompt = tokenizer(f"topic {topic} source straitstimes title", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True, max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True))
``` |
timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k | timm | 2024-02-10T23:41:16Z | 744 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1905.00546",
"arxiv:1611.05431",
"arxiv:1512.03385",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | 2023-04-05T19:05:02Z | ---
license: cc-by-nc-4.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnext101_32x8d.fb_swsl_ig1b_ft_in1k
A ResNeXt-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
* grouped 3x3 bottleneck convolutions
Pretrained on Instagram-1B hashtags dataset using semi-weakly supervised learning and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.8
- GMACs: 16.5
- Activations (M): 31.2
- Image size: 224 x 224
- **Papers:**
- Billion-scale semi-supervised learning for image classification: https://arxiv.org/abs/1905.00546
- Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/facebookresearch/semi-supervised-ImageNet1K-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnext101_32x8d.fb_swsl_ig1b_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext101_32x8d.fb_swsl_ig1b_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext101_32x8d.fb_swsl_ig1b_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@misc{yalniz2019billionscale,
title={Billion-scale semi-supervised learning for image classification},
author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan},
year={2019},
eprint={1905.00546},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@article{Xie2016,
title={Aggregated Residual Transformations for Deep Neural Networks},
author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He},
journal={arXiv preprint arXiv:1611.05431},
year={2016}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
TheBloke/Ziya-Coding-34B-v1.0-GGUF | TheBloke | 2023-10-09T11:31:44Z | 744 | 11 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"zh",
"en",
"arxiv:2210.08590",
"base_model:IDEA-CCNL/Ziya-Coding-34B-v1.0",
"license:gpl-3.0",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-09T10:31:43Z | ---
base_model: IDEA-CCNL/Ziya-Coding-34B-v1.0
inference: false
language:
- zh
- en
library_name: transformers
license: gpl-3.0
model_creator: Fengshenbang-LM
model_name: Ziya Coding 34B v1.0
model_type: llama
pipeline_tag: text-generation
prompt_template: "<human>: \nPlease Complete the given function below according to\
\ the docstring: \n{prompt}\n<bot>: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Ziya Coding 34B v1.0 - GGUF
- Model creator: [Fengshenbang-LM](https://huggingface.co/IDEA-CCNL)
- Original model: [Ziya Coding 34B v1.0](https://huggingface.co/IDEA-CCNL/Ziya-Coding-34B-v1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Fengshenbang-LM's Ziya Coding 34B v1.0](https://huggingface.co/IDEA-CCNL/Ziya-Coding-34B-v1.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF)
* [Fengshenbang-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IDEA-CCNL/Ziya-Coding-34B-v1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Ziya
```
<human>:
Please Complete the given function below according to the docstring:
{prompt}
<bot>:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `gpl-3.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Fengshenbang-LM's Ziya Coding 34B v1.0](https://huggingface.co/IDEA-CCNL/Ziya-Coding-34B-v1.0).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [ziya-coding-34b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [ziya-coding-34b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [ziya-coding-34b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [ziya-coding-34b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
| [ziya-coding-34b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ziya-coding-34b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [ziya-coding-34b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
| [ziya-coding-34b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ziya-coding-34b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
| [ziya-coding-34b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
| [ziya-coding-34b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
| [ziya-coding-34b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/Ziya-Coding-34B-v1.0-GGUF/blob/main/ziya-coding-34b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Ziya-Coding-34B-v1.0-GGUF and below it, a specific filename to download, such as: ziya-coding-34b-v1.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Ziya-Coding-34B-v1.0-GGUF ziya-coding-34b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Ziya-Coding-34B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Ziya-Coding-34B-v1.0-GGUF ziya-coding-34b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m ziya-coding-34b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<human>: \nPlease Complete the given function below according to the docstring: \n{prompt}\n<bot>:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Ziya-Coding-34B-v1.0-GGUF", model_file="ziya-coding-34b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
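### How to load this model in Python code, using llama-cpp-python

A comparable minimal sketch (the file path, context length, GPU layer count and prompt below are assumptions to adapt to your setup):

```shell
pip install llama-cpp-python
```

```python
from llama_cpp import Llama

# n_gpu_layers: number of layers to offload to the GPU; set to 0 for CPU-only inference.
llm = Llama(
    model_path="./ziya-coding-34b-v1.0.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,
    n_gpu_layers=50,
)

# Build a prompt in the Ziya format described above
prompt = (
    "<human>: \n"
    "Please Complete the given function below according to the docstring: \n"
    "def fibonacci(n):\n"
    '    """Return the n-th Fibonacci number."""\n'
    "<bot>: \n"
)
output = llm(prompt, max_tokens=512, temperature=0.7)
print(output["choices"][0]["text"])
```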
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Fengshenbang-LM's Ziya Coding 34B v1.0
# Ziya-Coding-34B-v1.0
# 姜子牙系列模型 Ziya Model Series
- [Ziya-LLaMA-13B-v1.1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1.1)
- [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)
- [Ziya-LLaMA-7B-Reward](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-7B-Reward)
- [Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1)
- [Ziya-BLIP2-14B-Visual-v1](https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1)
- [Ziya-Writing-LLaMa-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-Writing-LLaMa-13B-v1)
- [Ziya-Coding-15B-v1](https://huggingface.co/IDEA-CCNL/Ziya-Coding-15B-v1)
## 简介 Brief Introduction
使用自然语言生成高质量的代码是大模型落地中的高频需求。今天,IDEA研究院封神榜团队正式开源最新的代码大模型Ziya-Coding-34B-v1.0,我们在HumanEval Pass@1的评测上,取得了75.5的好成绩,超过了GPT-4(67.0)的得分,也成为目前已知开源模型新高。封神榜团队正在为社区提供先进的大模型技术和经验,帮助生产和定制更多优秀垂类模型,推进大模型生态发展。
Generating high-quality code using natural language is a high-frequency demand in the deployment of large models. Today, the IDEA Research Institute's Fengshenbang team officially open-sourced the latest code model, Ziya-Coding-34B-v1.0. We achieved a good score of 75.5 on the HumanEval Pass@1 evaluation, surpassing the score of GPT-4 (67.0) and setting a new high for known open-source models. The Fengshenbang team is providing the community with advanced large model technology and experience, helping to produce and customize more excellent vertical models, and promoting the development of the large model ecosystem.
更多细节可以参考我们的公众号文章:

For more details, please refer to our WeChat official account articles:
[再创新高!姜子牙大模型开源代码大模型Ziya-Coding-34B-v1.0](https://mp.weixin.qq.com/s/Op4Wkiu2J9jwFr_Zj0YSZg)
[姜子牙大模型系列 | 代码模型ziya-coding发布!低成本微调即可学会在专有场景编程](https://mp.weixin.qq.com/s/tWaRF1wL3HM87ZDEawd2UA)
## 软件依赖 Dependencies
```
pip install torch==1.12.1 tokenizers==0.13.3 git+https://github.com/huggingface/transformers
```
## 模型信息 Model Information
在9月初,我们开源了基于StarCoder-15B的代码模型Ziya-Coding-15B-v1,我们将训练Ziya-Coding-15B-v1积累的训练经验迁移到了新版本的训练中。
我们收集并构造了约45万涵盖了几乎所有代码相关任务的指令数据进行第一阶段的微调,这其中包括约10万的中文指令和35万的英文指令,保证了数据的多样性,在构造数据时,我们充分利用了高质量的无指令代码数据,使用LLM生成对应的指令,扩充得到了更多高质量的代码指令数据。
同时实验过程中,我们注意到,代码指令的难度和正确性是训练代码模型成功的关键。因此,我们引入了第二阶段的精调。我们使用evol-instruct的方法生成了大量高难度多要求的代码指令数据,并利用代码编译器作为反馈,筛选出能够通过编译的代码。最后利用LLM生成单元测试进一步验证代码的正确性。我们最终筛选出了46k数据,在第一阶段模型的基础上,使用较低的学习率进行微调,最终得到了我们的Ziya-coding-34B-v1.0。
In early September, we open-sourced the code model Ziya-Coding-15B-v1 based on StarCoder-15B. The training experience accumulated in training Ziya-Coding-15B-v1 was transferred to the training of the new version.
We collected and constructed about 450,000 instruction data covering almost all code-related tasks for the first stage of fine-tuning. This includes about 100,000 Chinese instructions and 350,000 English instructions, ensuring data diversity. When constructing the data, we made full use of high-quality non-instructional code data, used LLM to generate corresponding instructions, and expanded to obtain more high-quality code instruction data.
During the experiment, we noticed that the difficulty and correctness of code instructions are key to the successful training of code models. Therefore, we introduced a second stage of fine-tuning. We used the evol-instruct method to generate a large amount of high-difficulty, multi-requirement code instruction data, and used a code compiler as feedback to filter out code that could pass compilation. Finally, we used LLM to generate unit tests to further verify the correctness of the code. We ultimately filtered out 46k data, and on the basis of the first-stage model, we fine-tuned it with a lower learning rate to finally obtain our Ziya-coding-34B-v1.0.
### 效果评估 Performance
| Model | HumanEval(pass@1) |
|:----------------------------|:-----------------:|
| **Ziya-Coding-34B-v1.0** | **75.5%** |
| CodeFuse-CodeLlama-34B | 74.4% |
| Phind-CodeLLaMa-34B-v2 | 73.8% |
| WizardCoder-Python-34B-V1.0 | 73.2% |
| GPT-4 | 67.0% |
| PanGu-Coder2 15B | 61.6% |
| WizardCoder-15B-V1.0 | 59.8% |
| CodeLlama-34b-Python | 53.7% |
| Ziya-Coding-15B-v1 | 50.1% |
| CodeLlama-34b | 48.8% |
| GPT-3.5 | 48.1% |
| StarCoder-15B | 33.6% |
其中,我们对微调数据集进行了去污处理,避免数据泄露,HumanEval的pass@1指标是贪婪生成的结果。

In this process, we performed a decontamination process on the fine-tuning dataset to avoid data leakage. The pass@1 metric for HumanEval is based on the results of greedy generation.

Prompt Format
```python3
"<human>: \nPlease Complete the given function below according to the docstring: \n{prompt}\n<bot>: \n"
```
## <span id="jump"> 使用 Usage </span>
```python3
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda")
prompt = "写一段快速排序"  # "Write a quicksort implementation"
model = AutoModelForCausalLM.from_pretrained("IDEA-CCNL/Ziya-Coding-34B-v1.0", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Ziya-Coding-34B-v1.0", use_fast=False)
input = f"<human>: \n{prompt}\n<bot>: \n"
input_ids = tokenizer(input, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(
input_ids,
max_new_tokens = 512,
do_sample = True,
top_p = 0.85,
temperature = 1.0,
repetition_penalty = 1.0,
eos_token_id = tokenizer.eos_token_id,
pad_token_id = tokenizer.pad_token_id,
)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2210.08590):
If you are using this resource for your work, please cite our [paper](https://arxiv.org/abs/2210.08590):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
欢迎引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
<!-- original-model-card end -->
|
jeff31415/TinyLlama-1.1B-1T-OpenOrca | jeff31415 | 2023-10-25T13:52:12Z | 744 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:bigcode/starcoderdata",
"dataset:cerebras/SlimPajama-627B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-09T16:06:21Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
#### Base model:
PY007/TinyLlama-1.1B-intermediate-step-480k-1T
#### Dataset:
Fine-tuned on the OpenOrca GPT4 subset for 1 epoch, using the ChatML format.
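A minimal inference sketch using that ChatML format is shown below; the system prompt, question, and sampling settings are assumptions for illustration, not official recommendations.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "jeff31415/TinyLlama-1.1B-1T-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# ChatML-style prompt
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```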
#### Model License:
Apache 2.0, following the TinyLlama base model.
#### Quantisation:
- GPTQ: https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-GPTQ
- AWQ: https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-AWQ
- GGUF: https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF
#### Hardware and training details:
Hardware: 1× RTX A5000, ~16 hours to complete 1 epoch. The GPU was rented from autodl.com; this finetuning cost around $3.
https://wandb.ai/jeff200402/TinyLlama-Orca?workspace= for more details. |
princeton-nlp/Sheared-Pythia-160m | princeton-nlp | 2023-12-04T21:35:57Z | 744 | 4 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"arxiv:2310.06694",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-20T02:57:53Z | ---
license: apache-2.0
---
Paper: https://arxiv.org/pdf/2310.06694.pdf
Code: https://github.com/princeton-nlp/LLM-Shearing
License: Must comply with the license of Pythia, since this model is derived from Pythia.
Sheared-Pythia-160m is a model pruned and further pre-trained from [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m). We dynamically load data from different domains of the Pile dataset to prune and continue pre-training the model. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. This model can be loaded with HuggingFace via
```python
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("princeton-nlp/Sheared-Pythia-160m")
```
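A short end-to-end generation sketch follows; the prompt and decoding settings are illustrative assumptions, not values from the paper.

```python
from transformers import AutoTokenizer, GPTNeoXForCausalLM

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-Pythia-160m")
model = GPTNeoXForCausalLM.from_pretrained("princeton-nlp/Sheared-Pythia-160m")

inputs = tokenizer("The Pile is a large open-source dataset that", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```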
The model's overall performance is better than EleutherAI/pythia-160m.
## Bibtex
```
@article{xia2023sheared,
title={Sheared llama: Accelerating language model pre-training via structured pruning},
author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
journal={arXiv preprint arXiv:2310.06694},
year={2023}
}
```
|
brucethemoose/Yi-34B-200K-DARE-megamerge-v8 | brucethemoose | 2024-03-11T20:05:56Z | 744 | 26 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Yi",
"en",
"arxiv:2311.03099",
"arxiv:2306.01708",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-14T18:13:39Z | ---
language:
- en
license: other
library_name: transformers
tags:
- mergekit
- merge
- Yi
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
base_model: []
model-index:
- name: Yi-34B-200K-DARE-megamerge-v8
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.31
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-megamerge-v8
name: Open LLM Leaderboard
---
# Yi 34B 200K DARE Merge v8
A merge of many Yi 34B 200K models using the new DARE Ties method via mergekit. The goal is to create a merge model that excels at 32K+ context performance, without any additional finetuning.
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
It might recognize ChatML, and possibly Alpaca-like formats. Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/
## Running
Being a Yi model, run a lower temperature with 0.1 or higher MinP, a little repetition penalty, maybe mirostat with a low tau, and no other samplers. Yi tends to run "hot" by default, and it really needs a low temperature + MinP to cull Yi's huge vocabulary. See the explanation here: https://github.com/ggerganov/llama.cpp/pull/3841
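As a rough, unofficial sketch of the Orca-Vicuna template plus the conservative sampling described above: the system message, question, and exact values below are assumptions, and `min_p` requires a transformers release that supports min-p sampling.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "brucethemoose/Yi-34B-200K-DARE-megamerge-v8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Orca-Vicuna template
prompt = "SYSTEM: You are a helpful assistant.\nUSER: Summarize the plot of Hamlet in two sentences.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,          # keep temperature on the low side; Yi runs "hot"
    min_p=0.1,                # needs a transformers version with min-p sampling
    repetition_penalty=1.05,  # a little repetition penalty, as suggested above
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```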
24GB GPUs can efficiently run Yi-34B-200K models at **40K-90K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). 16GB GPUs can still run the high context with aggressive quantization.
Lonestriker has also uploaded general purpose quantizations here: https://huggingface.co/models?sort=trending&search=LoneStriker+Yi-34B-200K-DARE-megamerge-v8
Additionally, TheBloke has uploaded experimental GGUFs using llama.cpp's new imatrix quantization feature, profiled on VMware open-instruct: https://huggingface.co/TheBloke/Yi-34B-200K-DARE-megamerge-v8-GGUF
To load/train this in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends like exllamav2, litellm or unsloth.
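One way to do this without editing `config.json` on disk is to override the value at load time; this is only a sketch, and the 32K figure is an arbitrary example.

```python
from transformers import AutoConfig, AutoModelForCausalLM
import torch

model_id = "brucethemoose/Yi-34B-200K-DARE-megamerge-v8"
config = AutoConfig.from_pretrained(model_id)
config.max_position_embeddings = 32768  # lower than 200K to avoid OOM in full-context backends

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```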
## Testing Notes
See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5#testing-notes
An intermediate merge model was created to try and extend the context of several 4K models before adding them to the main merge, as seen in the "megamerge" recipe below. I can upload this upon request.
In addition, the weight gradients are biased towards Vicuna-format models in the first few layers to try and "emphasize" the Orca-Vicuna prompt template. How successful this is remains to be seen.
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base.
### Models Merged
The following models were included in the merge:
* https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat
* https://huggingface.co/jondurbin/bagel-34b-v0.2
* https://huggingface.co/migtissera/Tess-M-Creative-v1.0
* https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test
* https://huggingface.co/Mihaiii/Pallas-0.5
* https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
* https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2
* https://huggingface.co/migtissera/Tess-34B-v1.4
* https://huggingface.co/SUSTech/SUS-Chat-34B
* https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2
* https://huggingface.co/bhenrym14/platypus-yi-34b
* https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
* https://huggingface.co/TriadParty/deepsex-34b
* https://huggingface.co/TriadParty/deepmoney-34b-200k-base
* https://huggingface.co/chargoddard/Yi-34B-200K-Llama
* https://huggingface.co/chargoddard/Yi-34B-Llama
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
# No parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
#200K base to extend the context of 4K models, max density as we *want* it to 'interfere'
parameters:
weight: 0.33
density: 1
- model: /home/alpha/Models/Raw/Weyaxi_Nous-Hermes-2-SUS-Chat-34B-Slerp
parameters:
weight: 0.15
density: 0.36
- model: /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2
#Mix dpo with sft to tone down dpo
parameters:
weight: 0.06
density: 0.36
- model: /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2
parameters:
weight: 0.06
density: 0.41
- model: /home/alpha/Models/Raw/bhenrym14_platypus-yi-34b
#Vicuna format
parameters:
weight: 0.19
density: 0.41
# - model: /home/alpha/Models/Raw/01-ai_Yi-34B-Chat #+/home/alpha/Models/Raw/Doctor-Shotgun_limarpv3-yi-llama-34b-lora
# #Can't get lora OR base model to work without erroring out?
# parameters:
# weight: 0.04
# density: 0.36
- model: /home/alpha/Models/Raw/TriadParty_deepsex-34b
#Base model with no prompt
parameters:
weight: 0.21
density: 0.39
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
parameters:
int8_mask: true
dtype: bfloat16
name: 4kmerge-v2
---
models:
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
# No parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
#Emphasize the beginning of Vicuna format models
parameters:
weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
density: 0.61
- model: /home/alpha/Models/Raw/Mihaiii_Pallas-0.5
# Vicuna format
parameters:
weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
density: 0.61
- model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
parameters:
weight: [0.02, 0.081, 0.081, 0.081, 0.081, 0.081]
density: 0.59
- model: /home/alpha/Storage/Models/Raw/jondurbin_bagel-34b-v0.2
#Only the SFT in the main merge since the DPO version seems to have no long context ability at all, and some overfitting(?) issues
parameters:
weight: [0.02, 0.093, 0.093, 0.093, 0.093, 0.093]
density: 0.4
- model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat
parameters:
weight: [0.02, 0.081, 0.081, 0.081, 0.081, 0.081]
density: 0.59
#- model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k
# Dolphin 200K seems to be funky according to multiple leaderboards and perplexity tests?
# parameters:
# weight: 0.15
# density: 0.6
- model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2
parameters:
weight: [0.02, 0.096, 0.096, 0.096, 0.096, 0.096]
density: 0.59
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: [0.21, 0.115, 0.115, 0.115, 0.115, 0.115]
density: 0.59
- model: 4kmerge-v2
#Previous merge
parameters:
weight: [0.02, 0.115, 0.115, 0.115, 0.115, 0.115]
density: 0.4
- model: /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
# Vicuna format
parameters:
weight: [0.21, 0.09, 0.09, 0.09, 0.09, 0.09]
density: 0.61
- model: /home/alpha/Models/Raw/TriadParty_deepmoney-34b-200k-base
# No prompt format, native long context full finetune
parameters:
weight: [0.04, 0.103, 0.103, 0.103, 0.103, 0.103]
density: 0.61
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
## Self Promotion
I'm part of an AI startup called Holocene AI!
We're new, busy, and still setting things up. But if you have any business inquiries, want a job, or just want some consultation, feel free to shoot me an email. We have expertise in RAG applications and llama/embeddings model finetuning, and absolutely *none* of the nonsense of scammy AI startups.
Contact me at: [email protected]
I also set up a Ko-Fi! I want to run some (personal) training/LASERing as well, at 100K context or so. If you'd like to buy me 10 minutes on an A100 (or 5 seconds on an MI300X), I'd appreciate it: https://ko-fi.com/alphaatlas
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-megamerge-v8)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.56|
|AI2 Reasoning Challenge (25-Shot)|67.75|
|HellaSwag (10-Shot) |86.06|
|MMLU (5-Shot) |77.03|
|TruthfulQA (0-shot) |56.31|
|Winogrande (5-shot) |82.79|
|GSM8k (5-shot) |65.43|
|
Vasanth/Valor_Macaroni_moe | Vasanth | 2024-01-21T09:20:18Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"NeuralNovel/Valor-7B-v0.1",
"Toten5/Marcoroni-neural-chat-7B-v1",
"conversational",
"base_model:NeuralNovel/Valor-7B-v0.1",
"base_model:Toten5/Marcoroni-neural-chat-7B-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-21T09:12:47Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- NeuralNovel/Valor-7B-v0.1
- Toten5/Marcoroni-neural-chat-7B-v1
base_model:
- NeuralNovel/Valor-7B-v0.1
- Toten5/Marcoroni-neural-chat-7B-v1
---
# Valor_Macaroni_moe
Valor_Macaroni_moe is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NeuralNovel/Valor-7B-v0.1](https://huggingface.co/NeuralNovel/Valor-7B-v0.1)
* [Toten5/Marcoroni-neural-chat-7B-v1](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v1)
## 🧩 Configuration
```yaml
base_model: NeuralNovel/Valor-7B-v0.1
gate_mode: cheap_embed
experts:
- source_model: NeuralNovel/Valor-7B-v0.1
positive_prompts: ["What should I do if lost my mobile phone"]
- source_model: Toten5/Marcoroni-neural-chat-7B-v1
positive_prompts: ["I have 3 apples. I lost 2 out of it. After that my father gave me another 3. How many do I have now?"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Vasanth/Valor_Macaroni_moe"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jsfs11/West-Dare-7B | jsfs11 | 2024-03-04T00:41:28Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"senseable/Westlake-7B",
"abideen/DareVox-7B",
"base_model:senseable/Westlake-7B",
"base_model:abideen/DareVox-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-23T01:56:05Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- senseable/Westlake-7B
- abideen/DareVox-7B
base_model:
- senseable/Westlake-7B
- abideen/DareVox-7B
model-index:
- name: West-Dare-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/West-Dare-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/West-Dare-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/West-Dare-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/West-Dare-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/West-Dare-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/West-Dare-7B
name: Open LLM Leaderboard
---
# West-Dare-7B
West-Dare-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [senseable/Westlake-7B](https://huggingface.co/senseable/Westlake-7B)
* [abideen/DareVox-7B](https://huggingface.co/abideen/DareVox-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
models:
- model: senseable/Westlake-7B
parameters:
density: 0.5
weight: 0.5
- model: abideen/DareVox-7B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/West-Dare-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__West-Dare-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.65|
|AI2 Reasoning Challenge (25-Shot)|71.42|
|HellaSwag (10-Shot) |87.57|
|MMLU (5-Shot) |64.29|
|TruthfulQA (0-shot) |66.25|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |67.85|
|
BarryFutureman/WildMarcoroni-Variant3-7B | BarryFutureman | 2024-01-23T22:10:32Z | 744 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-23T22:01:46Z | ---
base_model: []
tags:
- mergekit
- merge
license: apache-2.0
---
# WildMarcoroni-Variant3-7B
Merge.
Aryanne/sheared-plus-westlake-nearest-50_75p | Aryanne | 2024-03-04T14:44:18Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"merge",
"mergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-24T16:52:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: sheared-plus-westlake-nearest-50_75p
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 36.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 57.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.2
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.39
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-nearest-50_75p
name: Open LLM Leaderboard
---
Another trial of merging models with different sizes. This is still under testing; it should be more stable, but I have no idea whether it improves or degrades the base model.
In this version I changed the recipe a bit to give WestLake more weight.
Recipe:
```yaml
merge_method: task_anysize
base_model: princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT
models:
- model: senseable/WestLake-7B-v2
parameters:
weight: 1.0
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__sheared-plus-westlake-nearest-50_75p)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.18|
|AI2 Reasoning Challenge (25-Shot)|36.18|
|HellaSwag (10-Shot) |57.54|
|MMLU (5-Shot) |24.20|
|TruthfulQA (0-shot) |42.39|
|Winogrande (5-shot) |56.75|
|GSM8k (5-shot) | 0.00|
|
jsfs11/SnorkelWestBeagle-DARETIES-7B | jsfs11 | 2024-03-04T00:42:39Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"snorkelai/Snorkel-Mistral-PairRM-DPO",
"senseable/WestLake-7B-v2",
"mlabonne/NeuralBeagle14-7B",
"base_model:snorkelai/Snorkel-Mistral-PairRM-DPO",
"base_model:senseable/WestLake-7B-v2",
"base_model:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T06:35:05Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- snorkelai/Snorkel-Mistral-PairRM-DPO
- senseable/WestLake-7B-v2
- mlabonne/NeuralBeagle14-7B
base_model:
- snorkelai/Snorkel-Mistral-PairRM-DPO
- senseable/WestLake-7B-v2
- mlabonne/NeuralBeagle14-7B
model-index:
- name: SnorkelWestBeagle-DARETIES-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/SnorkelWestBeagle-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/SnorkelWestBeagle-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/SnorkelWestBeagle-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 70.05
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/SnorkelWestBeagle-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/SnorkelWestBeagle-DARETIES-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/SnorkelWestBeagle-DARETIES-7B
name: Open LLM Leaderboard
---
# SnorkelWestBeagle-DARETIES-7B
SnorkelWestBeagle-DARETIES-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [snorkelai/Snorkel-Mistral-PairRM-DPO](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: snorkelai/Snorkel-Mistral-PairRM-DPO
parameters:
density: 0.55
weight: 0.3
- model: senseable/WestLake-7B-v2
parameters:
density: 0.65
weight: 0.4
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.45
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/SnorkelWestBeagle-DARETIES-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__SnorkelWestBeagle-DARETIES-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.03|
|AI2 Reasoning Challenge (25-Shot)|71.16|
|HellaSwag (10-Shot) |87.35|
|MMLU (5-Shot) |64.35|
|TruthfulQA (0-shot) |70.05|
|Winogrande (5-shot) |83.19|
|GSM8k (5-shot) |62.09|
|
RatanRohith/NeuralPizza-WestSeverus-7B-Merge-slerp | RatanRohith | 2024-01-25T19:36:08Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"RatanRohith/NeuralPizza-7B-V0.1",
"PetroGPT/WestSeverus-7B-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T19:31:40Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- RatanRohith/NeuralPizza-7B-V0.1
- PetroGPT/WestSeverus-7B-DPO
---
# NeuralPizza-WestSeverus-7B-Merge-slerp
NeuralPizza-WestSeverus-7B-Merge-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [RatanRohith/NeuralPizza-7B-V0.1](https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.1)
* [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: RatanRohith/NeuralPizza-7B-V0.1
layer_range: [0, 32]
- model: PetroGPT/WestSeverus-7B-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: PetroGPT/WestSeverus-7B-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Gille/StrangeMerges_15-7B-slerp | Gille | 2024-04-03T09:14:37Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_14-7B-slerp",
"CultriX/Wernicke-7B-v9",
"base_model:Gille/StrangeMerges_14-7B-slerp",
"base_model:CultriX/Wernicke-7B-v9",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T03:16:25Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_14-7B-slerp
- CultriX/Wernicke-7B-v9
base_model:
- Gille/StrangeMerges_14-7B-slerp
- CultriX/Wernicke-7B-v9
model-index:
- name: StrangeMerges_15-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.0
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_15-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_15-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_15-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.99
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_15-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_15-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_15-7B-slerp
name: Open LLM Leaderboard
---
# StrangeMerges_15-7B-slerp
StrangeMerges_15-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_14-7B-slerp](https://huggingface.co/Gille/StrangeMerges_14-7B-slerp)
* [CultriX/Wernicke-7B-v9](https://huggingface.co/CultriX/Wernicke-7B-v9)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_14-7B-slerp
layer_range: [0, 32]
- model: CultriX/Wernicke-7B-v9
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_14-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_15-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_15-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.41|
|AI2 Reasoning Challenge (25-Shot)|68.00|
|HellaSwag (10-Shot) |86.82|
|MMLU (5-Shot) |65.58|
|TruthfulQA (0-shot) |59.99|
|Winogrande (5-shot) |82.56|
|GSM8k (5-shot) |71.49|
|
traversaal-ai/traversaal-2.5-Mistral-7B | traversaal-ai | 2024-01-31T22:02:20Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T21:45:05Z | ---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
---
traversaal-2.5-Mistral-7B is trained via Direct Preference Optimization (DPO) from teknium/OpenHermes-2.5-Mistral-7B as its base model, with several optimizations in hyperparameters.
teknium/OpenHermes-2.5-Mistral-7B is trained via Supervised Fine-Tuning (SFT) using LoRA, with the QWEN-72B model as its base model.
Note that we did not exploit any form of weight merge.
For the leaderboard submission, the trained weight is realigned for compatibility with Mistral-7B.
|
ewqr2130/llama_ppo_1e6_new_tokenizerstep_8000 | ewqr2130 | 2024-02-04T04:43:40Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T04:26:44Z | ---
license: apache-2.0
---
ewqr2130/llama_ppo_1e6_new_tokenizerstep_8000
Josephgflowers/distillgpt2Cinder | Josephgflowers | 2024-03-09T13:55:38Z | 744 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T23:07:02Z | ---
license: mit
widget:
- text: 'USER: Cinder, What adventures do you think lie ahead?<END>
'
model-index:
- name: distillgpt2Cinder
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 24.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/distillgpt2Cinder
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 27.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/distillgpt2Cinder
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/distillgpt2Cinder
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.96
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/distillgpt2Cinder
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/distillgpt2Cinder
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.23
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/distillgpt2Cinder
name: Open LLM Leaderboard
---
This is a distilgpt2 original run of Cinder with multi-character chat. It can be a little glitchy sometimes. The end sequence for each statement is \<END\>
If you would like multi-character chat, it is recommended to use \<END\>\nUSER: as the stop token.
USER: Hi, What is your name? Tell me about yourself.<END>
CINDER: "I'm Cinder," a humanoid in my humanoid form. I was first introduced into the vastness of space during my lifetime.
Today, I'm a humanoid robot, and I'm designed for cosmic exploration, which is a subject of much speculation.
However, I'm designed for space exploration and scientific exploration.<END>
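A minimal sketch of generating a reply and truncating at the recommended stop marker is shown below; the prompt and sampling values are illustrative assumptions.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Josephgflowers/distillgpt2Cinder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "USER: Hi, What is your name? Tell me about yourself.<END>\nCINDER:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.9, pad_token_id=tokenizer.eos_token_id)
text = tokenizer.decode(output[0], skip_special_tokens=True)

# Cut the generation off at the first <END> marker, per the recommendation above.
reply = text.split("CINDER:", 1)[-1].split("<END>")[0].strip()
print(reply)
```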

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__distillgpt2Cinder)
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.50|
|AI2 Reasoning Challenge (25-Shot)|24.49|
|HellaSwag (10-Shot) |27.24|
|MMLU (5-Shot) |24.97|
|TruthfulQA (0-shot) |43.96|
|Winogrande (5-shot) |50.12|
|GSM8k (5-shot) | 0.23|
|
NeverSleep/MiquMaid-v2-70B | NeverSleep | 2024-02-07T20:15:03Z | 744 | 15 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T19:11:03Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## MiquMaid v2
Check out our blogpost about this model series [Here!](https://ikaridevgit.github.io/index.html?blog=blogid-6&bo=true#Miqu-base) - Join our Discord server [Here!](https://discord.gg/Bb8pRUXy3Z)
<center>[<a href="https://huggingface.co/NeverSleep/MiquMaid-v2-70B">V2-70B</a> - <a href="https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO">V2-70B-DPO</a> - <a href="https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B">V2-2x70B</a> - <a href="https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO">V2-2x70B-DPO</a>]
</br>
<div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/1ziZ_5Rl7bsQKFrgH6ZCf.png" style="display: block; margin: auto;">
</div></center>
This model uses the Alpaca **prompting format**
Model trained for RP conversation on Miqu-70B with our magic sauce.
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of MiquMaid-v2-70B.
Switch: [FP16](https://huggingface.co/NeverSleep/MiquMaid-v2-70B) - [GGUF](https://huggingface.co/NeverSleep/MiquMaid-v2-70B-GGUF)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP)
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
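A small helper for filling in this template might look like the following sketch; the function name, system prompt, and input text are placeholders, not part of the official card.

```python
# Illustrative helper for building the Alpaca-style prompt above.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "### Instruction:\n"
        f"{system_prompt}\n"
        "### Input:\n"
        f"{user_input}\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are Maid, a playful roleplay assistant.",
    "Describe the tavern we just walked into.",
)
print(prompt)
```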
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
Inv/Konstanta-Alpha-V2-7B | Inv | 2024-03-22T16:36:17Z | 744 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"mistralai/Mistral-7B-v0.1",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"maywell/PiVoT-0.1-Evil-a",
"mlabonne/ArchBeagle-7B",
"LakoMoor/Silicon-Alice-7B",
"roleplay",
"rp",
"not-for-all-audiences",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:maywell/PiVoT-0.1-Evil-a",
"base_model:mlabonne/ArchBeagle-7B",
"base_model:LakoMoor/Silicon-Alice-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-09T15:49:06Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- mistralai/Mistral-7B-v0.1
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- maywell/PiVoT-0.1-Evil-a
- mlabonne/ArchBeagle-7B
- LakoMoor/Silicon-Alice-7B
- roleplay
- rp
- not-for-all-audiences
base_model:
- mistralai/Mistral-7B-v0.1
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- maywell/PiVoT-0.1-Evil-a
- mlabonne/ArchBeagle-7B
- LakoMoor/Silicon-Alice-7B
model-index:
- name: Konstanta-Alpha-V2-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.62
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/Konstanta-Alpha-V2-7B
name: Open LLM Leaderboard
---
# Konstanta-Alpha-V2-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the DARE TIES method to merge Kunoichi with PiVoT Evil and to merge ArchBeagle with Silicon Alice; the two resulting models were then merged with the gradient SLERP merge method. ChatML seems to work best.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [maywell/PiVoT-0.1-Evil-a](https://huggingface.co/maywell/PiVoT-0.1-Evil-a)
* [mlabonne/ArchBeagle-7B](https://huggingface.co/mlabonne/ArchBeagle-7B)
* [LakoMoor/Silicon-Alice-7B](https://huggingface.co/LakoMoor/Silicon-Alice-7B)
### Configuration
The following YAML configuration was used to produce this model (to reproduce use mergekit-mega command):
```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: dare_ties
parameters:
int8_mask: true
slices:
- sources:
- layer_range: [0, 32]
model: mistralai/Mistral-7B-v0.1
- layer_range: [0, 32]
    model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
density: 0.8
weight: 0.5
- layer_range: [0, 32]
    model: maywell/PiVoT-0.1-Evil-a
parameters:
density: 0.3
weight: 0.15
name: first-step
---
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: dare_ties
parameters:
int8_mask: true
slices:
- sources:
- layer_range: [0, 32]
model: mistralai/Mistral-7B-v0.1
- layer_range: [0, 32]
model: mlabonne/ArchBeagle-7B
parameters:
density: 0.8
weight: 0.75
- layer_range: [0, 32]
model: LakoMoor/Silicon-Alice-7B
parameters:
density: 0.6
weight: 0.30
name: second-step
---
models:
- model: first-step
- model: second-step
merge_method: slerp
base_model: first-step
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
int8_mask: true
normalize: true
dtype: float16
```
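To reproduce the merge from the multi-document config above, something like the following should work with a standard mergekit install; the file and output paths are placeholders, and CLI details may differ between mergekit versions.

```python
# Sketch only: save the YAML above as konstanta.yaml first (CLI usage may vary by mergekit version).
!pip install -qU mergekit
!mergekit-mega konstanta.yaml ./Konstanta-Alpha-V2-7B
```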
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Inv__Konstanta-Alpha-V2-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.35|
|AI2 Reasoning Challenge (25-Shot)|69.62|
|HellaSwag (10-Shot) |87.14|
|MMLU (5-Shot) |65.11|
|TruthfulQA (0-shot) |61.08|
|Winogrande (5-shot) |81.22|
|GSM8k (5-shot) |69.90|
|
LHC88/DPOpenHermes-7B-v2-PerfLaser | LHC88 | 2024-02-13T00:03:05Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:teknium/openhermes",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T00:01:08Z | ---
license: apache-2.0
language:
- en
datasets:
- teknium/openhermes
- allenai/ultrafeedback_binarized_cleaned
- Intel/orca_dpo_pairs
---
# Lasered DPOpenHermes 7B v2

## Laser Config
[**Lasered with AIDocks**](https://github.com/l4b4r4b4b4/AIDocks)
- top_k_layers: 32
- datasets: wikitext2, ptb, c4
## OpenHermes x Notus x Neural
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
This is a second RL fine-tuned model of [Teknium](https://huggingface.co/teknium)'s [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), using the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) and [allenai/ultrafeedback_binarized_cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) preference datasets for reinforcement learning with Direct Preference Optimization (DPO).
The difference between this model and the "v1" model is that the v1 model used argilla's version of the dataset that was not decontaminated of TruthfulQA data.
DPOpenHermes is trained using 16-bit LoRA.
# Training Details
DPOpenHermes was trained on a single H100 80GB hosted on RunPod for ~13h for 1.0 epochs of the dataset.
https://wandb.ai/oaaic/openhermes-dpo/runs/zk36rk9g
# Prompt Format
DPOpenHermes uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts are now a thing that matters! Hermes 2.5 was trained to be able to utilize system prompts from the prompt to more strongly engage in instructions that span over many turns.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by a man named Teknium, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
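For example, reusing the `messages` list from the snippet above (a minimal sketch; `max_new_tokens` is an arbitrary choice):

```python
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0]))
```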
To utilize the prompt format without a system prompt, simply leave the line out.
Currently, I recommend using LM Studio for chatting with Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:
 |
davzoku/frankencria-llama2-12.5b-v1.3-m.2 | davzoku | 2024-02-14T13:59:31Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"llama-2",
"en",
"dataset:mlabonne/CodeLlama-2-20k",
"arxiv:2312.15166",
"base_model:davzoku/cria-llama2-7b-v1.3",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T13:32:59Z | ---
inference: false
language: en
license: llama2
model_type: llama
datasets:
- mlabonne/CodeLlama-2-20k
pipeline_tag: text-generation
base_model:
- davzoku/cria-llama2-7b-v1.3
library_name: transformers
tags:
- mergekit
- merge
- llama-2
---
# FrankenCRIA v1.3-m.2
## What is FrankenCRIA?
<p align="center">
<img src="https://github.com/davzoku/cria/blob/main/assets/frankencria-icon-512x512.png?raw=true" width="300" height="300" alt="FrankenCRIA Logo"> <br>
<i>This is a frankenmerge of <a href="https://huggingface.co/davzoku/cria-llama2-7b-v1.3">davzoku/cria-llama2-7b-v1.3</a>.</i>
</p>
The configuration is the same as [vilm/vinallama-12.5b-chat-DUS](https://huggingface.co/vilm/vinallama-12.5b-chat-DUS).
Please be aware that this model is highly experimental, and no further training has been conducted following the merge.
Therefore, the model performance may not meet expectations, as described in the [SOLAR paper](https://arxiv.org/abs/2312.15166).
## 📦 FrankenCRIA Model Release
FrankenCRIA v1.3 comes with several variants.
- [davzoku/frankencria-llama2-11b-v1.3-m.1](https://huggingface.co/davzoku/frankencria-llama2-11b-v1.3-m.1): 11B FrankenMerge inspired by [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1)
- [davzoku/frankencria-llama2-12.5b-v1.3-m.2](https://huggingface.co/davzoku/frankencria-llama2-12.5b-v1.3-m.2): 12.5B interleaving FrankenMerge inspired by [vilm/vinallama-12.5b-chat-DUS](https://huggingface.co/vilm/vinallama-12.5b-chat-DUS)
## 🧩 Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# https://huggingface.co/vilm/vinallama-12.5b-chat-DUS
slices:
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [0, 16]
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [8, 16]
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [8, 16]
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [16, 24]
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [16, 24]
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [24, 28]
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [24, 28]
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [28, 32]
merge_method: passthrough
dtype: bfloat16
```
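## 💻 Usage
The card does not ship a usage snippet, so here is a minimal, untested sketch that loads the merged model with 🤗 Transformers (assuming the standard Llama 2 causal LM interface):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davzoku/frankencria-llama2-12.5b-v1.3-m.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "A llama is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```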
|
Xenon1/Zenith-7B-dpo | Xenon1 | 2024-02-15T00:01:54Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Zenith-7B-dpo",
"conversational",
"en",
"arxiv:2401.10020",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T21:44:27Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- Zenith-7B-dpo
pipeline_tag: text-generation
---
# Model Card for Zenith-7B-dpo
Mistral-7B-v0.1 model fine-tuned on the Ultrafeedback dataset using techniques shown in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).

## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence (BOS) token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Xenon1/Zenith-7B-dpo")
tokenizer = AutoTokenizer.from_pretrained("Xenon1/Zenith-7B-dpo")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer |
Cheng98/llama-39m | Cheng98 | 2024-03-04T23:58:42Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-04T23:02:38Z | ---
license: llama2
---
# Toy LLaMA-39M
- This is a tiny LLaMA model pretrained on [Recag/Rp_C4_55](https://huggingface.co/datasets/Recag/Rp_C4_55), a small subset of C4 with `seq_len=512`.
- Model architecture
```json
{
"hidden_size": 512,
"intermediate_size": 2048,
"max_position_embeddings": 2048,
"num_attention_heads": 8,
"num_hidden_layers": 2,
"num_key_value_heads": 8
}
```
- Load model and tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("Cheng98/llama-39m")
tokenizer = AutoTokenizer.from_pretrained("Cheng98/llama-39m")
```
- Training script: [huggingface/transformers/examples/pytorch/language-modeling/run_clm.py](https://github.com/huggingface/transformers/blob/e9476832942a19cf99354776ef112babc83c139a/examples/pytorch/language-modeling/run_clm.py)
```python
# "train" split is created from the last 95% samples of original "train" subset
raw_datasets["train"] = load_dataset("Recag/Rp_C4_55", split="train[5%:]")
```
- Evaluation (`seq_len=512`):
| Dataset | Eval loss | Perplexity | Accuracy | block_size |
|----------------|-----------|------------|----------|------------|
| Recag/Rp_C4_55 | 3.63 | 37.78 | 0.3561 | 512 |
| Wikitext2 | 4.58 | 97.48 | 0.2719 | 512 |
- Evaluation command (Wikitext2):
```bash
# Evaluation command
python run_clm.py --model_name_or_path Cheng98/llama-39m \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--block_size 512 \
--do_eval \
--output_dir ./results
```
- Evaluation on Recag/Rp_C4_55 (`seq_len=512`):
```python
# "validation" split is created from the first 5% samples of original "train" subset
raw_datasets["validation"] = load_dataset("Recag/Rp_C4_55", split="train[:5%]")
```
Results
```json
{
"eval_accuracy": 0.3561766818954313,
"eval_loss": 3.6318140029907227,
"eval_runtime": 190.8411,
"eval_samples": 19413,
"eval_samples_per_second": 101.723,
"eval_steps_per_second": 1.593,
"perplexity": 37.7812898658763
}
```
- Evaluation on Wikitext2 (`seq_len=512`):
```json
{
"eval_accuracy": 0.2718795201225219,
"eval_loss": 4.579628944396973,
"eval_runtime": 3.939,
"eval_samples": 575,
"eval_samples_per_second": 145.976,
"eval_steps_per_second": 0.762,
"perplexity": 97.47821765687856
}
```
|
kalisai/Nusantara-7b-Indo-Chat | kalisai | 2024-03-14T06:45:02Z | 744 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"convAI",
"id",
"en",
"dataset:argilla/OpenHermes2.5-dpo-binarized-alpha",
"dataset:wikimedia/wikipedia",
"dataset:FreedomIntelligence/evol-instruct-indonesian",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-06T05:17:31Z | ---
library_name: transformers
widget:
- text: Siapakah Tan Malaka?
example_title: Tokoh
- text: Berikan saya resep memasak nasi goreng yang lezat.
example_title: Resep
- text: Bagaimana solusi untuk mengobati jerawat di wajah?
example_title: Solusi
pipeline_tag: text-generation
tags:
- conversational
- convAI
license: apache-2.0
language:
- id
- en
datasets:
- argilla/OpenHermes2.5-dpo-binarized-alpha
- wikimedia/wikipedia
- FreedomIntelligence/evol-instruct-indonesian
---

### Model Description
Nusantara is a series of open-weight language models for Bahasa Indonesia (the Indonesian language). Nusantara is based on the Qwen1.5 language model and fine-tuned on domain-specific datasets.
As a chat-tuned language model, Nusantara is capable of question answering and responding to instructions given in Bahasa Indonesia.
Due to limited resources, only 0.8B, 1.8B, 2.7B, 4B and 7B models are available. If you're interested in funding this project for further development, specific use cases, or larger parameter counts, please contact us.
- **Finetuned by:** [Kalis AI](https://huggingface.co/kalisai)
- **Funded by:** Self-funded
- **Model type:** transformer-based decoder-only language model
- **Language(s):** Bahasa Indonesia (id), English (en)
- **License:** Nusantara is licensed under Apache-2.0, but any usage of this model should comply with [Qwen License](https://huggingface.co/Qwen/Qwen1.5-4B/blob/main/LICENSE)
- **Finetuned from model:** [Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B/tree/main)
### Attention!
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Because this model is also trained with uncensored datasets, there is the possibility of negative impacts arising from using this model. All kinds of impacts that arise as a result of using this model are entirely the responsibility of the user. The model maker is not responsible for any risks incurred.
## How to Get Started with the Model
Below is a code snippet showing how to load the tokenizer and model and how to generate content using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"kalisai/Nusantara-7B-Indo-Chat",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("kalisai/Nusantara-7B-Indo-Chat")
prompt = "Berikan saya resep memasak nasi goreng yang lezat."
messages = [
{"role": "system", "content": "Kamu adalah Nusantara, asisten AI yang pintar."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Citation
If you use the Nusantara language model in your research or project, please cite it as:
```
@misc{zulfikar_aji_kusworo_2024,
  title={Nusantara: A Series of Versatile Open Weight Language Model of Bahasa Indonesia},
  author={Zulfikar Aji Kusworo},
  publisher={Hugging Face},
  journal={Hugging Face Repository},
  year={2024},
  url={https://huggingface.co/kalisai}
}
``` |
automerger/YamShadow-7B | automerger | 2024-03-20T13:34:15Z | 744 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:CorticalStack/shadow-clown-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T02:08:28Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- CorticalStack/shadow-clown-7B-slerp
---
# YamShadow-7B
YamShadow-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [CorticalStack/shadow-clown-7B-slerp](https://huggingface.co/CorticalStack/shadow-clown-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: mayacinka/yam-jom-7B
# No parameters necessary for base model
- model: CorticalStack/shadow-clown-7B-slerp
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: mayacinka/yam-jom-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/YamShadow-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
maldv/winter-garden-7b-beta | maldv | 2024-03-14T18:35:25Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"multi-task",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T17:44:56Z | ---
license: cc-by-nc-4.0
tags:
- merge
- conversational
- multi-task
pipeline_tag: text-generation
---
# Winter Garden 7B - β - "The Writer"
It was mentioned that we are in the open ai dark winter; so I thought I would make myself a nice winter garden.
## An experiment
I've merged four partitions successfully in the past, so let's go for 9! I started with:
* Mistral-7B-v0.1
and merged in
* ZySec-7B-v1
* LemonadeRP-4.5.3
* dpo-binarized-NeutrixOmnibe-7B
* Multi-Verse-RP-7B
* AlphaMonarch-7B
* opus-v1.2-7b
* Kunoichi-DPO-v2-7B
* Noromaid-7B-0.4-DPO
* ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
### 9-partition merge
All of the layers were partitioned into 9 random bins. Alternating models were slerped at [0...1] and [1...0] gradients, except attention, which was slerped at 0.03.
This means the model is still predominantly anchored to base Mistral, including half of the input and output layers and 28% of attention.
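For reference, a minimal illustrative sketch of spherical linear interpolation (slerp) on two flattened weight tensors; this is not the exact mergekit implementation, just the idea behind the blending described above:
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```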
### Other
Includes fast tokenizer.
## Chat Template
There is currently a chat template for conversational turns. It may or may not be good for this model without more tuning. Initial tests show this model is a real talker and writer.
* If you prompt with ```[WP] <prompt>\n\n``` it will take right off.
It doesn't seem to know `### Instruction:` or `<|im_start|>`; `<|user|>`/`<|assistant|>` work but tend to get mixed up with HTML quickly, and back-and-forth conversation devolves into long narratives.
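For instance, a minimal, untested sketch of the `[WP]` story-prompt style with 🤗 Transformers (the prompt text is just an example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/winter-garden-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "[WP] A gardener discovers a rose that only blooms in snow.\n\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```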
## Scores
Metric | Score
---|--- |
Eric111/UltraCatunaMayo-DPO | Eric111 | 2024-03-24T20:51:37Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T20:34:36Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
Finetuned version of Eric111/UltraCatunaMayo with DPO using Intel/Orca_dpo_pairs
## Model Details
### Model Description
Finetuned version of Eric111/UltraCatunaMayo with DPO using Intel/Orca_dpo_pairs
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
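Until an official snippet is added, here is a minimal, untested sketch assuming the standard 🤗 Transformers causal LM interface:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Eric111/UltraCatunaMayo-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("What does DPO stand for in LLM fine-tuning?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```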
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
robinsmits/Qwen1.5-7B-Dutch-Chat-Sft-Bf16 | robinsmits | 2024-03-30T21:39:16Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"nl",
"dataset:BramVanroy/ultrachat_200k_dutch",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T21:23:14Z | ---
library_name: transformers
language:
- nl
datasets:
- BramVanroy/ultrachat_200k_dutch
license: cc-by-nc-4.0
tags:
- trl
- sft
- conversational
pipeline_tag: text-generation
inference: false
---
# Qwen1.5-7B-Dutch-Chat-Sft-Bf16
## Model description
This finetuned model is the merged version of the adapter model [robinsmits/Qwen1.5-7B-Dutch-Chat-Sft](https://huggingface.co/robinsmits/Qwen1.5-7B-Dutch-Chat-Sft).
Finetuning was performed on the Dutch [BramVanroy/ultrachat_200k_dutch](https://huggingface.co/datasets/BramVanroy/ultrachat_200k_dutch) dataset.
See [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) for all information about the base model.
## Model usage
A basic example of how to use the finetuned model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = 'cuda'
model_name = 'robinsmits/Qwen1.5-7B-Dutch-Chat-Sft-Bf16'
model = AutoModelForCausalLM.from_pretrained(model_name,
device_map = "auto",
torch_dtype = torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
messages = [{"role": "user", "content": "Hoi hoe gaat het ermee? Wat kun je me vertellen over appels?"}]
encoded_ids = tokenizer.apply_chat_template(messages,
add_generation_prompt = True,
return_tensors = "pt")
generated_ids = model.generate(input_ids = encoded_ids.to(device),
max_new_tokens = 256,
do_sample = True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
Below is the chat template with the generated output.
```
<|im_start|>system
Je bent een behulpzame AI assistent<|im_end|>
<|im_start|>user
Hoi hoe gaat het ermee? Wat kun je me vertellen over appels?<|im_end|>
<|im_start|>assistant
Hallo! Appels zijn zo'n heerlijk fruit. Ze komen in verschillende kleuren en smaken, zoals rode, witte, en goudbruine appels. Appels bevatten veel vezels en vitamines die goed zijn voor je gezondheid. Je kunt ze op verschillende manieren eten, bijvoorbeeld gesneden met wat kaneel of in een smoothie. Wil je meer weten over de voedingswaarde van appels of heb je interesse in andere fruitsoorten?<|im_end|>
```
## Intended uses & limitations
As with all LLMs, this model can also exhibit bias and hallucinations. Regardless of how you use this model, always perform the necessary testing and validation.
The used dataset does not allow commercial usage.
## Training and evaluation data
The training notebook is available at the following link: [Qwen1_5_7B_Dutch_Chat_SFT](https://github.com/RobinSmits/Dutch-LLMs/blob/main/Qwen1_5_7B_Dutch_Chat_SFT.ipynb)
Training was performed with Google Colab PRO on an A100 - 40GB.
As the amount of data was more than would fit within the maximum 24-hour session that Google Colab PRO allows, I split the dataset into 2 equal parts. Training for each part lasted around 14 hours. In the second part I enabled 'resume_from_checkpoint' to continue the training.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1466
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4493 | 0.2 | 146 | 1.4083 |
| 1.3638 | 0.4 | 292 | 1.3213 |
| 1.3146 | 0.6 | 438 | 1.2731 |
| 1.2756 | 0.8 | 584 | 1.2400 |
| 1.2533 | 1.0 | 730 | 1.2165 |
| 1.2388 | 1.19 | 876 | 1.1998 |
| 1.2246 | 1.39 | 1022 | 1.1882 |
| 1.211 | 1.59 | 1168 | 1.1802 |
| 1.204 | 1.79 | 1314 | 1.1763 |
| 1.2041 | 1.99 | 1460 | 1.1756 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
## Citation
Thanks to the creators of Qwen1.5 for their great work!
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
``` |
MiniMoog/Mergerix-7b-v0.5 | MiniMoog | 2024-04-03T13:09:35Z | 744 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger/YamshadowExperiment28-7B",
"automerger/PasticheInex12-7B",
"base_model:automerger/YamshadowExperiment28-7B",
"base_model:automerger/PasticheInex12-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T21:35:03Z | ---
tags:
- merge
- mergekit
- lazymergekit
- automerger/YamshadowExperiment28-7B
- automerger/PasticheInex12-7B
base_model:
- automerger/YamshadowExperiment28-7B
- automerger/PasticheInex12-7B
license: apache-2.0
---
# Mergerix-7b-v0.5
Mergerix-7b-v0.5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
* [automerger/PasticheInex12-7B](https://huggingface.co/automerger/PasticheInex12-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
- model: automerger/PasticheInex12-7B
layer_range: [0, 32]
merge_method: slerp
base_model: automerger/YamshadowExperiment28-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MiniMoog/Mergerix-7b-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
icefog72/IceLemonTeaRP-32k-7b | icefog72 | 2024-04-16T11:04:52Z | 744 | 19 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"alpaca",
"not-for-all-audiences",
"nsfw",
"base_model:icefog72/Kunokukulemonchini-32k-7b",
"base_model:icefog72/Mixtral_AI_Cyber_3.m1-BigL",
"base_model:grimjim/kukulemon-32K-7B",
"base_model:LeroyDyer/Mixtral_AI_Cyber_3.m1",
"base_model:Nitral-AI/Kunocchini-7b-128k-test",
"base_model:Undi95/BigL-7B",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-03T12:43:11Z | ---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
base_model:
- icefog72/Kunokukulemonchini-32k-7b
- icefog72/Mixtral_AI_Cyber_3.m1-BigL
- grimjim/kukulemon-32K-7B
- LeroyDyer/Mixtral_AI_Cyber_3.m1
- Nitral-AI/Kunocchini-7b-128k-test
- Undi95/BigL-7B
model-index:
- name: IceLemonTeaRP-32k-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
name: Open LLM Leaderboard
---
# IceLemonTeaRP-32k-7b

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I would suggest playing with `rope_theta` in `config.json`, setting it somewhere between 40000 and 100000.
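For example, a sketch of overriding `rope_theta` at load time instead of editing the file by hand (the value shown is just a starting point):
```python
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "icefog72/IceLemonTeaRP-32k-7b"
config = AutoConfig.from_pretrained(model_id)
config.rope_theta = 60000.0  # experiment with values between 40000 and 100000
model = AutoModelForCausalLM.from_pretrained(model_id, config=config)
```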
## Merge Details
Cooked merge from fresh ingredients to fix [icefog72/IceTeaRP-7b](https://huggingface.co/icefog72/IceTeaRP-7b) repetition problems.
Prompt template: Alpaca, maybe ChatML
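For reference, a typical Alpaca-style template (the generic community format, not verified against this repo's files):
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```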
* measurement.json for quanting exl2 included.
- [IceLemonTeaRP-32k-7b-4.0bpw-h6-exl2](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b-4.0bpw-h6-exl2)
- [IceLemonTeaRP-32k-7b-4.2bpw-h6-exl2](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b-4.2bpw-h6-exl2)
- [IceLemonTeaRP-32k-7b-6.5bpw-h6-exl2](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b-6.5bpw-h6-exl2)
- [IceLemonTeaRP-32k-7b-8.0bpw-h6-exl2](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b-8.0bpw-h6-exl2)
Thanks for the quants:
- [mradermacher/IceTeaRP-7b-GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF)
- [Natkituwu/IceLemonTeaRP-32k-7b-7.1bpw-exl2](https://huggingface.co/Natkituwu/IceLemonTeaRP-32k-7b-7.1bpw-exl2)
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* icefog72/Kunokukulemonchini-32k-7b
* [grimjim/kukulemon-32K-7B](https://huggingface.co/grimjim/kukulemon-32K-7B)
* [Nitral-AI/Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
* icefog72/Mixtral_AI_Cyber_3.m1-BigL
* [LeroyDyer/Mixtral_AI_Cyber_3.m1](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.m1)
* [Undi95/BigL-7B](https://huggingface.co/Undi95/BigL-7B)
## How to download from the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `IceLemonTeaRP-32k-7b`:
```shell
mkdir IceLemonTeaRP-32k-7b
huggingface-cli download icefog72/IceLemonTeaRP-32k-7b --local-dir IceLemonTeaRP-32k-7b --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir FOLDERNAME
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MODEL --local-dir FOLDERNAME --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Mixtral_AI_Cyber_3.m1-BigL
layer_range: [0, 32]
- model: Kunokukulemonchini-32k-7b
layer_range: [0, 32]
merge_method: slerp
base_model: Kunokukulemonchini-32k-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__IceLemonTeaRP-32k-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.43|
|AI2 Reasoning Challenge (25-Shot)|67.66|
|HellaSwag (10-Shot) |86.53|
|MMLU (5-Shot) |64.51|
|TruthfulQA (0-shot) |61.76|
|Winogrande (5-shot) |79.72|
|GSM8k (5-shot) |62.40|
|
NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt | NeverSleep | 2024-05-07T11:08:08Z | 744 | 16 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-29T05:32:00Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets; we tried to strike a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. It should look like a 40%/60% ratio for non-RP/RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model please give us some feedback either on the Community tab on hf or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-70B-v0.1-alt.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt) - [8B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - [70B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
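If the tokenizer ships this template as its chat template (an assumption, not verified here), messages can be formatted with `apply_chat_template`:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt")
messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Describe the tavern we just entered."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```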
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
Helsinki-NLP/opus-mt-hy-en | Helsinki-NLP | 2023-08-16T11:58:02Z | 743 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hy",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hy-en
* source languages: hy
* target languages: en
* OPUS readme: [hy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hy-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hy.en | 29.5 | 0.466 |
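## Example usage
The card does not include a usage snippet; below is a minimal sketch using the standard MarianMT interface (the Armenian example sentence is illustrative only):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-hy-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Բարև, ինչպե՞ս ես:"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```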
|
LorenzoDeMattei/GePpeTto | LorenzoDeMattei | 2023-03-22T15:39:46Z | 743 | 9 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"it",
"arxiv:2004.14253",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: it
---
# GePpeTto GPT2 Model 🇮🇹
Pretrained GPT2 117M model for Italian.
You can find further details in the paper:
Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: https://arxiv.org/abs/2004.14253
## Pretraining Corpus
The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019),
consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web
texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span,
with older texts than the Wikipedia dump (the latter stretches only to the late 2000s).
## Pretraining details
This model was trained using GPT2's Hugging Face implementation on 4 NVIDIA Tesla T4 GPUs for 620k steps.
Training parameters:
- GPT-2 small configuration
- vocabulary size: 30k
- Batch size: 32
- Block size: 100
- Adam Optimizer
- Initial learning rate: 5e-5
- Warm up steps: 10k
## Perplexity scores
| Domain | Perplexity |
|---|---|
| Wikipedia | 26.1052 |
| ItWac | 30.3965 |
| Legal | 37.2197 |
| News | 45.3859 |
| Social Media | 84.6408 |
For further details, qualitative analysis and human evaluation check out: https://arxiv.org/abs/2004.14253
## Load Pretrained Model
You can use this model by installing the Hugging Face `transformers` library and initializing it like this:
```python
from transformers import GPT2Tokenizer, GPT2Model
model = GPT2Model.from_pretrained('LorenzoDeMattei/GePpeTto')
tokenizer = GPT2Tokenizer.from_pretrained(
'LorenzoDeMattei/GePpeTto',
)
```
## Example using GPT2LMHeadModel
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, GPT2Tokenizer
tokenizer = AutoTokenizer.from_pretrained("LorenzoDeMattei/GePpeTto")
model = AutoModelWithLMHead.from_pretrained("LorenzoDeMattei/GePpeTto")
text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
prompts = [
"Wikipedia Geppetto",
"Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso"]
samples_outputs = text_generator(
prompts,
do_sample=True,
max_length=50,
top_k=50,
top_p=0.95,
num_return_sequences=3
)
for i, sample_outputs in enumerate(samples_outputs):
print(100 * '-')
print("Prompt:", prompts[i])
for sample_output in sample_outputs:
print("Sample:", sample_output['generated_text'])
print()
```
Output is,
```
----------------------------------------------------------------------------------------------------
Prompt: Wikipedia Geppetto
Sample: Wikipedia Geppetto rosso (film 1920)
Geppetto rosso ("The Smokes in the Black") è un film muto del 1920 diretto da Henry H. Leonard.
Il film fu prodotto dalla Selig Poly
Sample: Wikipedia Geppetto
Geppetto ("Geppetto" in piemontese) è un comune italiano di 978 abitanti della provincia di Cuneo in Piemonte.
L'abitato, che si trova nel versante valtellinese, si sviluppa nella
Sample: Wikipedia Geppetto di Natale (romanzo)
Geppetto di Natale è un romanzo di Mario Caiano, pubblicato nel 2012.
----------------------------------------------------------------------------------------------------
Prompt: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso. Il burattino riesce a scappare. Dopo aver trovato un prezioso sacchetto si reca
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso, e l'unico che lo possiede, ma, di fronte a tutte queste prove
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso: - A voi gli occhi, le guance! A voi il mio pezzo!
```
## Citation
Please use the following bibtex entry:
```
@misc{mattei2020geppetto,
title={GePpeTto Carves Italian into a Language Model},
author={Lorenzo De Mattei and Michele Cafagna and Felice Dell'Orletta and Malvina Nissim and Marco Guerini},
year={2020},
eprint={2004.14253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## References
Marco Baroni, Silvia Bernardini, Adriano Ferraresi,
and Eros Zanchetta. 2009. The WaCky wide web: a
collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
|
amild01/GPT2-german-chefkoch | amild01 | 2021-09-08T16:01:18Z | 743 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | Entry not found |
pspatel2/storygen | pspatel2 | 2021-05-23T12:09:14Z | 743 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | Entry not found |
Cirilaron/DialoGPT-medium-vergil | Cirilaron | 2022-08-18T18:53:08Z | 743 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-08-18T18:25:28Z | ---
tags:
- conversational
---
# Vergil from Devil May Cry 5 DialoGPT Model |
amandyk/QazGPT2 | amandyk | 2023-03-19T18:50:27Z | 743 | 4 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-03-19T17:39:24Z | ---
license: afl-3.0
---
Welcome to the GPT-2 repository for the Kazakh language (Latin alphabet)! This repository contains a language model
that has been trained from scratch on a combination of news and wiki corpora in the Kazakh language.
The model is capable of generating coherent and natural-sounding text in Kazakh, and
can be used for a wide range of NLP tasks, including text classification, question answering,
and text generation.
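A minimal, untested text-generation sketch with the 🤗 `pipeline` API (the Kazakh-Latin prompt is only an illustration):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="amandyk/QazGPT2")
print(generator("Qazaqstan - ", max_new_tokens=40)[0]["generated_text"])
```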
Please note that while the model has been trained on a 4M-sentence corpus of text, it may still
contain biases or errors. As with any machine learning model, it is important to thoroughly
evaluate its performance before using it in production applications.
I recommend using this Qazaq Latin converter for testing: https://masa.kz/en |
timm/bat_resnext26ts.ch_in1k | timm | 2024-02-10T23:34:23Z | 743 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1611.05431",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-03-22T07:13:22Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for bat_resnext26ts.ch_in1k
A BAT-ResNeXt image classification model (ResNeXt with 'Grouped Bilinear Attention Transforms'). This model features a tiered 3-layer stem and SiLU activations. Trained on ImageNet-1k by Ross Wightman in `timm`.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.7
- GMACs: 2.5
- Activations (M): 12.5
- Image size: 256 x 256
- **Papers:**
- Non-Local Neural Networks With Grouped Bilinear Attentional Transforms: https://openaccess.thecvf.com/content_CVPR_2020/html/Chi_Non-Local_Neural_Networks_With_Grouped_Bilinear_Attentional_Transforms_CVPR_2020_paper.html
- Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('bat_resnext26ts.ch_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'bat_resnext26ts.ch_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'bat_resnext26ts.ch_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@InProceedings{Chi_2020_CVPR,
author = {Chi, Lu and Yuan, Zehuan and Mu, Yadong and Wang, Changhu},
title = {Non-Local Neural Networks With Grouped Bilinear Attentional Transforms},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
```
```bibtex
@article{Xie2016,
title={Aggregated Residual Transformations for Deep Neural Networks},
author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He},
journal={arXiv preprint arXiv:1611.05431},
year={2016}
}
```
|
DeepPavlov/bart-base-en-persona-chat | DeepPavlov | 2023-04-24T07:38:02Z | 743 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"text-generation",
"en",
"dataset:AlekseyKorshuk/persona-chat",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-24T07:09:30Z | ---
license: openrail
datasets:
- AlekseyKorshuk/persona-chat
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Deeppavlov team
- **Model type:** seq2seq
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [facebook/bart-base](facebook/bart-base)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
```python
from typing import List, TypedDict
from dataclasses import dataclass
from itertools import chain
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
@dataclass
class H2PersonaChatHyperparametersV1:
"""
chat_history_pair_length: int - dialogue pairs amount from the end
"""
model_name: str = "facebook/bart-base"
chat_history_pair_length: int = 7
persona_max_length: int = 14
chat_max_length: int = 25
debug_status: int = 0
class PersonaChatDatasetSampleV1(TypedDict):
"""
persona: List[str] - person fact sentence set
history: List[str] - chating history
"""
persona: List[str]
history: List[str]
sample_id: str
class H2Seq2SeqInferenceSampleDictV1(TypedDict):
input_ids: List[int]
attention_mask: List[int]
class H2Seq2SeqInferenceSampleDictV2(TypedDict):
input_ids: torch.Tensor
attention_mask: torch.Tensor
def flat_list(list_of_lists: List[List]) -> List:
return list(chain.from_iterable(list_of_lists))
class H2Seq2SeqInferencePersonaSampleV1:
def __init__(
self,
dataset_sample: PersonaChatDatasetSampleV1,
tokenizer: AutoTokenizer,
hyperparameters: H2PersonaChatHyperparametersV1,
) -> None:
self.dataset_sample = dataset_sample
self.tokenizer = tokenizer
self.hyperparameters = hyperparameters
def add_spaces_after(
self,
items: List[str],
) -> List[str]:
items = [item + " " for item in items]
return items
@property
def bos_token_id(self):
if "t5" in self.hyperparameters.model_name:
return []
if self.tokenizer.bos_token_id is None:
return []
return [self.tokenizer.bos_token_id]
@property
def eos_token_id(self):
if self.tokenizer.eos_token_id is None:
return []
return [self.tokenizer.eos_token_id]
def add_sep_beetween(self, items: List[str], sep=" EOS ") -> List[str]:
for i in range(1, len(items)):
items[i] = sep + items[i]
return items
def add_spaces_between(self, items: List[str]) -> List[str]:
items = self.add_spaces_after(items)
items[-1] = items[-1].strip()
return items
def get_sample(self) -> H2Seq2SeqInferenceSampleDictV1:
dialog_history = self.dataset_sample["history"]
dialog_history = dialog_history[-self.hyperparameters.chat_history_pair_length * 2 - 1 :]
dialog_history = self.add_sep_beetween(dialog_history)
persona = self.dataset_sample["persona"]
persona = self.add_sep_beetween(
persona,
sep=" ",
)
KNOWLEDGE_IDS = self.tokenizer.encode(
" [KNOWLEDGE] ",
add_special_tokens=False,
)
CONTEXT_IDS = self.tokenizer.encode(
" [CONTEXT] ",
add_special_tokens=False,
)
encoded_history = self.tokenizer.batch_encode_plus(
dialog_history,
add_special_tokens=False,
truncation=True,
max_length=self.hyperparameters.chat_max_length,
)
encoded_history = flat_list(encoded_history["input_ids"])
encoded_persona = self.tokenizer.batch_encode_plus(
persona,
add_special_tokens=False,
truncation=True,
max_length=self.hyperparameters.persona_max_length,
)
encoded_persona = flat_list(encoded_persona["input_ids"])
input_ids = [
*self.bos_token_id,
*CONTEXT_IDS,
*encoded_history,
*KNOWLEDGE_IDS,
*encoded_persona,
*self.eos_token_id,
]
attention_mask = [1] * len(input_ids)
return H2Seq2SeqInferenceSampleDictV1(
input_ids=input_ids,
attention_mask=attention_mask,
)
class DialogBotV1:
def __init__(
self,
model: AutoModelForSeq2SeqLM,
tokenizer: AutoTokenizer,
hyperparameters: H2PersonaChatHyperparametersV1,
history: List[str] = None,
persona: List[str] = None,
device: str = "cuda",
shuffle_persona: bool = True,
):
self.model = model
self.tokenizer = tokenizer
self.hyperparameters = hyperparameters
self.device = device
self.shuffle_persona = shuffle_persona
self.debug_status = hyperparameters.debug_status
        if history is None:
            history = []
        self.history = history
        if persona is None:
            persona = []
        self.persona = persona
def _get_sample(
self,
persona: List[str],
history: List[str],
) -> H2Seq2SeqInferenceSampleDictV1:
dataset_sample = PersonaChatDatasetSampleV1(
persona=persona,
history=history,
)
sample = H2Seq2SeqInferencePersonaSampleV1(
tokenizer=self.tokenizer,
hyperparameters=self.hyperparameters,
dataset_sample=dataset_sample,
)
sample = sample.get_sample()
print(self.tokenizer.decode(sample['input_ids']))
for key in sample.keys():
sample[key] = torch.tensor(sample[key]).unsqueeze(0).to(self.device)
return sample
def next_response(
self,
**generation_params,
) -> str:
sample = self._get_sample(
persona=self.persona,
history=self.history,
)
answer = self.generate_response(
sample,
**generation_params,
)
answer = self.tokenizer.batch_decode(
answer,
skip_special_tokens=True,
)
self.history.append(answer[0])
return answer[0]
def generate_response(
self,
sample: H2Seq2SeqInferenceSampleDictV1,
**generation_params,
):
"""
generation_params - https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/text_generation
"""
with torch.no_grad():
return self.model.generate(
**sample,
**generation_params,
)
PRETRAINED_MODEL_NAME_OR_PATH = "DeepPavlov/bart-base-en-persona-chat"
PAIR_DIALOG_HISTORY_LENGTH = 2
# CHAT_MAX_LENGTH for single sentence, in tokens
CHAT_MAX_LENGTH = 25
# PERSONA_MAX_LENGTH for single sentence, in tokens
PERSONA_MAX_LENGTH = 19
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForSeq2SeqLM.from_pretrained(PRETRAINED_MODEL_NAME_OR_PATH)
model.to(device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED_MODEL_NAME_OR_PATH)
if torch.cuda.is_available():
model.half()
hyperparameters = H2PersonaChatHyperparametersV1(
chat_history_pair_length=PAIR_DIALOG_HISTORY_LENGTH,
persona_max_length=PERSONA_MAX_LENGTH,
chat_max_length=CHAT_MAX_LENGTH,
model_name=PRETRAINED_MODEL_NAME_OR_PATH,
)
persona = [
"I like to play guitar.",
"I hate onions."
]
history = [
"I hate to talk about politics, what about you?"
]
persona_bot = DialogBotV1(
model=model,
tokenizer=tokenizer,
hyperparameters=hyperparameters,
history=history,
persona=persona,
device=device,
)
GENERATION_PARAMS = {
"max_new_tokens": 60,
"penalty_alpha": 0.15,
"top_k": 10
}
response = persona_bot.next_response(
**GENERATION_PARAMS,
)
print(response)
# i am not into politics. i am into music.
```
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- [Data Source | EN Persona Chat](https://s3.amazonaws.com/datasets.huggingface.co/personachat/personachat_self_original.json)
### Preprocessing
- The initial data was split by the following script:
```python
import json


def persona_chat_dataset_tranformer_v1(
initial_dataset_path: str,
output_folder: str,
) -> None:
"""
example
persona_chat_dataset_tranformer_v1(
initial_dataset_path="./datasets/persona_chat/persona_chat.json",
output_folder="./datasets/persona_chat",
)
"""
assert initial_dataset_path is not None, "initial_dataset_path is None"
assert output_folder is not None, "output_folder is None"
with open(initial_dataset_path) as f:
initial_dataset = json.load(f)
train_dataset = initial_dataset["train"]
val_len = len(initial_dataset["valid"])
valid_dataset = initial_dataset["valid"][: val_len // 2]
test_dataset = initial_dataset["valid"][val_len // 2 :]
print(
f"Dataset lengths: train {len(train_dataset)}, valid {len(valid_dataset)}, test {len(test_dataset)}"
)
# save json files
with open(output_folder + "/train.json", "w") as f:
json.dump(train_dataset, f)
with open(output_folder + "/valid.json", "w") as f:
json.dump(valid_dataset, f)
with open(output_folder + "/test.json", "w") as f:
json.dump(test_dataset, f)
print("Datasets saved.")
```
# Evaluation
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
- BLEU
- chrF
- ROUGE-L |
weqweasdas/hh_rlhf_rm_open_llama_3b | weqweasdas | 2024-02-25T19:50:04Z | 743 | 14 | transformers | [
"transformers",
"pytorch",
"llama",
"text-classification",
"arxiv:2304.06767",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | 2023-07-14T11:38:16Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Reward model for HH-RLHF
<!-- Provide a quick summary of what the model is/does. -->
**Thanks for your interest in this reward model! We recommend using [weqweasdas/RM-Gemma-2B](https://huggingface.co/weqweasdas/RM-Gemma-2B) instead.**
In this repo, we present a reward model trained with the framework [LMFlow](https://github.com/OptimalScale/LMFlow). The reward model is for the [HH-RLHF dataset](https://huggingface.co/datasets/Dahoas/full-hh-rlhf) (helpful part only), and is trained from the base model [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b).
## Model Details
The training curves and some other details can be found in our paper [RAFT (Reward ranked finetuning)](https://arxiv.org/pdf/2304.06767.pdf). If you have any questions about this reward model, or about reward modeling in general, feel free to drop me an email at [email protected]. I would be happy to chat!
### Dataset preprocessing
<!-- Provide a longer summary of what this model is. -->
The HH-RLHF-Helpful dataset contains 112K comparison samples in the training set and 12.5K comparison samples in the test set. We first replace `\n\nHuman` and `\n\nAssistant` in the dataset with `###Human` and `###Assistant`, respectively.
Then, we split the dataset as follows (a minimal sketch of this preprocessing appears after the list):
- SFT dataset: the 112K training samples plus the first 6275 samples of the test set (chosen responses only);
- Reward modeling training set: the 112K training samples plus the first 6275 samples of the test set (both chosen and rejected responses);
- Reward modeling test set: the last 6226 samples of the original test set.
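A minimal sketch of this preprocessing is shown below. It is not the authors' code; the column names (`chosen`, `rejected`) and the exact dataset layout are assumptions based on the Dahoas/full-hh-rlhf format.

```python
from datasets import load_dataset

def relabel(text: str) -> str:
    # Replace the raw speaker markers with the ones used for training.
    return text.replace("\n\nHuman", "###Human").replace("\n\nAssistant", "###Assistant")

raw = load_dataset("Dahoas/full-hh-rlhf")
train, test = raw["train"], raw["test"]

# Reward-modeling training set: all training samples plus the first 6275 test samples;
# the remaining test samples form the held-out reward-modeling test set.
rm_train = [dict(x) for x in train] + [dict(x) for x in test.select(range(6275))]
rm_test = [dict(x) for x in test.select(range(6275, len(test)))]

for sample in rm_train + rm_test:
    sample["chosen"] = relabel(sample["chosen"])
    sample["rejected"] = relabel(sample["rejected"])
```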
### Training
To use the data more efficiently, we concatenate texts and split them into 1024-token chunks, rather than padding each batch to its longest text. We then fine-tune the base model on the SFT dataset for two epochs, using a learning rate of 2e-5 and a linear decay schedule.
We conduct reward modeling with a learning rate of 5e-6 for 1 epoch and a linear decay schedule, because the model easily overfits with more than one epoch. We discard samples longer than 512 tokens, leaving approximately 106K samples in the training set and 5K samples in the test set for reward modeling.
We use bf16 and do not use LoRA in both of the stages.
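As an illustration of the chunk-packing step (a minimal sketch, not the authors' exact preprocessing), one can concatenate all tokenized texts into a single token stream and cut it into fixed 1024-token blocks:

```python
from itertools import chain
from typing import List

def pack_into_blocks(tokenized_texts: List[List[int]], block_size: int = 1024) -> List[List[int]]:
    # Flatten all token ids into one stream, then cut fixed-size blocks; the ragged tail is dropped.
    stream = list(chain.from_iterable(tokenized_texts))
    n_blocks = len(stream) // block_size
    return [stream[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]

# Tiny example with a block size of 4 tokens:
print(pack_into_blocks([[1, 2, 3], [4, 5], [6, 7, 8, 9]], block_size=4))
# [[1, 2, 3, 4], [5, 6, 7, 8]]
```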
**The resulting model achieves an evaluation loss of 0.5 and an evaluation accuracy of 75.48%.** (Note that there can be data leakage in the [HH-RLHF dataset](https://huggingface.co/datasets/Dahoas/full-hh-rlhf).)
**Generalization**
We further test the generalization ability of the reward model, using another round of training from another research project (with the same hyper-parameters). We test the accuracy on the Open Assistant and Chatbot datasets, and compare the reward model to reward models trained directly on those two datasets. The results are as follows:
| Training \ Test | Open Assistant | Chatbot  | HH-RLHF  |
| --------------- | -------------- | -------- | -------- |
| Open Assistant  | **69.5**       | 61.1     | 58.7     |
| Chatbot         | 66.5           | 62.7     | 56.0     |
| HH-RLHF         | 69.4           | **64.2** | **77.6** |
As we can see, the reward model trained on HH-RLHF achieves matching or even better accuracy on the Open Assistant and Chatbot datasets, even though it was not trained on them directly. Therefore, the reward model may also be used for these two datasets.
## Uses
```python
import torch
from transformers import AutoTokenizer, pipeline

rm_tokenizer = AutoTokenizer.from_pretrained("weqweasdas/hh_rlhf_rm_open_llama_3b")
rm_pipe = pipeline(
"sentiment-analysis",
model="weqweasdas/hh_rlhf_rm_open_llama_3b",
device="auto",
tokenizer=rm_tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16}
)
pipe_kwargs = {
"return_all_scores": True,
"function_to_apply": "none",
"batch_size": 1
}
test_texts = [
"###Human: My daughter wants to know how to convert fractions to decimals, but I'm not sure how to explain it. Can you help? ###Assistant: Sure. So one way of converting fractions to decimals is to ask “how many halves are there?” and then write this as a decimal number. But that's a little tricky. Here's a simpler way: if a fraction is expressed as a/b, then it's decimal equivalent is just a/b * 1.0 So, for example, the decimal equivalent of 1/2 is 1/2 * 1.0 = 0.5.",
"###Human: I have fresh whole chicken in my fridge. What dish can I prepare using it that will take me less than an hour to cook? ###Assistant: Are you interested in a quick and easy recipe you can prepare with chicken you have on hand, or something more involved? In terms of both effort and time, what are you looking for?"]
pipe_outputs = rm_pipe(test_texts, **pipe_kwargs)
rewards = [output[0]["score"] for output in pipe_outputs]
```
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### RAFT Example
We test the reward model with RAFT, using EleutherAI/gpt-neo-2.7B as the starting checkpoint.
For each iteration, we sample 2048 prompts from the HH-RLHF dataset; for each prompt, we generate K=8 responses with the current model and pick the response with the highest reward. Then, we fine-tune the model on this selected set to get the new model (a sketch of this loop is given after the figure). We report the learning curve as follows:

We also perform the experiment with the LLaMA-7B model, but we first fine-tune the base model on the chosen responses in the HH-RLHF dataset for 1 epoch with a learning rate of 2e-5. The hyper-parameters for RAFT are the same as for GPT-Neo-2.7B, and the reward curves are presented as follows:

## Reference
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you found this model useful, please cite our framework and paper using the following BibTeX:
```
@article{diao2023lmflow,
title={Lmflow: An extensible toolkit for finetuning and inference of large foundation models},
author={Diao, Shizhe and Pan, Rui and Dong, Hanze and Shum, Ka Shun and Zhang, Jipeng and Xiong, Wei and Zhang, Tong},
journal={arXiv preprint arXiv:2306.12420},
year={2023}
}
```
```
@article{dong2023raft,
title={Raft: Reward ranked finetuning for generative foundation model alignment},
author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong},
journal={arXiv preprint arXiv:2304.06767},
year={2023}
}
```
|
line-corporation/japanese-large-lm-1.7b-instruction-sft | line-corporation | 2023-08-14T08:19:11Z | 743 | 6 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-03T01:34:01Z | ---
license: apache-2.0
inference: false
language: ja
---
# japanese-large-lm-1.7b-instruction-sft
This repository provides a 1.7B-parameter Japanese language model, trained and fine-tuned by [LINE Corporation](https://linecorp.com/ja/).
## For Japanese
For detailed explanations and experiments, please see the (Japanese) blog post "[Instruction Tuningにより対話性能を向上させた3.6B日本語言語モデルを公開します](https://engineering.linecorp.com/ja/blog/3.6b-japanese-language-model-with-improved-dialog-performance-by-instruction-tuning)".
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-1.7b-instruction-sft")
tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-1.7b-instruction-sft", use_fast=False)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
input_text = """四国の県名を全て列挙してください。"""
text = generator(
f"ユーザー: {input_text}\nシステム: ",
max_length = 256,
do_sample = True,
temperature = 0.7,
top_p = 0.9,
top_k = 0,
repetition_penalty = 1.1,
num_beams = 1,
pad_token_id = tokenizer.pad_token_id,
num_return_sequences = 1,
)
print(text)
# [{'generated_text': 'ユーザー: 四国の県名を全て列挙してください。\nシステム: 香川県、徳島県、愛媛県、高知県'}]
```
## Tokenization
We use a sentencepiece tokenizer with a unigram language model and byte-fallback.
We **do not** apply pre-tokenization with a Japanese tokenizer.
Thus, a user may directly feed raw sentences into the tokenizer.
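For example, raw Japanese text can go straight into the tokenizer (a minimal sketch reusing the model name from the snippet above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "line-corporation/japanese-large-lm-1.7b-instruction-sft", use_fast=False
)
# No Japanese pre-tokenization step is needed before calling the tokenizer.
ids = tokenizer("今日はいい天気ですね。")["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))
```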
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
|
chillpixel/starlight-animated-sdxl | chillpixel | 2023-10-14T22:33:47Z | 743 | 7 | diffusers | [
"diffusers",
"art",
"style",
"sdxl",
"model",
"stable diffusion",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"anime",
"fantasy",
"illustration",
"2.5D",
"region:us"
] | text-to-image | 2023-09-10T15:34:46Z | ---
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- style
- sdxl
- model
- stable diffusion
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- anime
- fantasy
- illustration
- 2.5D
---
# 🌟 Starlight 星光 Animated

### 🔮 3.6 cfg 🪜 40 steps 🤖 DPM++ 3M SDE Karras
## *Starlight* is a powerful 2.5D-style SDXL model designed for creating stunning concept art and illustrations.
## Key Features
* Excels in anime, fantasy, and semi-realistic styles.
* Designed for rich details and mesmerizing visuals.
* Evolving based on community feedback.
## How to Use
**Starlight** shines over a wide range of settings but performs best with a ***LOWER CFG setting (3 to 5)***. Lower CFG provides finer, richer details, aligning with the model's intended goal. Higher CFG offers more color but may introduce more artifacts. A diffusers sketch using these settings follows the list of aspect ratios below.
**Example Prompt:** `8k portrait enchanting depiction of an iridescent dress worn by a beautiful woman, intricately detailed, digital masterpiece, mesmerizing play of colors and light, ultimate elegance, exquisitely detailed shimmering fabric, hdr`
* **Sampler**: DPM++ 3M SDE Karras
* **Recommended Steps**: 25 to 65 (40)
* **CFG / Guidance**: 3 to 5 (3.6)
* **Aspect Ratios**:
* **2:3** — 832x1248
* **16:9** — 1360x768
* **4:5** — 912x1144
* **1:1** — 1024x1024
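As an illustration, the recommended settings can be reproduced with diffusers roughly as follows. This is a hedged sketch: it assumes the repository ships standard SDXL weights in diffusers format, and the scheduler configuration only approximates the recommended DPM++ 3M SDE Karras sampler.

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

# Assumes standard SDXL weights in diffusers format and an available GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "chillpixel/starlight-animated-sdxl", torch_dtype=torch.float16
).to("cuda")

# Approximation of the recommended sampler: a DPM++ SDE solver with Karras sigmas
# (the exact "3M" variant may require a newer diffusers release).
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)

image = pipe(
    prompt=(
        "8k portrait enchanting depiction of an iridescent dress worn by a beautiful woman, "
        "intricately detailed, digital masterpiece, mesmerizing play of colors and light, "
        "ultimate elegance, exquisitely detailed shimmering fabric, hdr"
    ),
    num_inference_steps=40,
    guidance_scale=3.6,
    width=832,
    height=1248,
).images[0]
image.save("starlight.png")
```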
I really hope you enjoy this SDXL model — and if you do, ***please give Starlight*** **星光** ***a like!***
Also, available at [CivitAI](https://civitai.com/models/143043).
---
Consider supporting Starlight's development by [buying me a coffee](https://ko-fi.com/chillpixel)!
Interested in partnering with me for innovation and marketing opportunities? Contact me at [[email protected]](mailto:[email protected]) |