---
title: README
emoji: 🐠
colorFrom: gray
colorTo: green
sdk: static
pinned: false
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/xcy1rwFGbrVZ1N68LQcEI.gif" style="width: 65%">
</div>
<p style="margin: 0px;" align="center">
<a style="text-decoration: none;" rel="nofollow" href="https://discord.gg/hYUwWddeAu">
<img style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;" alt="Discord Logo" src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png">
<span style="margin-right: 10px;" class="link-text">Discord</span>
</a> |
<a style="text-decoration: none;" rel="nofollow" href="https://arxiv.org/abs/2403.04652">
<img style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;" alt="ArXiv Logo" src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true">
<span class="link-text">Paper</span>
</a>
</p>
<br/>
<div class="grid lg:grid-cols-3 gap-x-4 gap-y-7">
<a style="text-decoration: none;" href="https://www.01.ai/" class="block overflow-hidden group">
<div
class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center bg-[#FFFFFF]"
>
<img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/ADRH_f61dt8uVWBsAehkG.gif" class="w-40" />
</div>
<div align="center">Base Models<br/> Yi-6B/9B/34B</div>
</a>
<a
href="https://www.01.ai/"
class="block overflow-hidden"
style="text-decoration: none;"
>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center bg-[#FFFFFF]">
<img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/JuiI5Zun1XD5BuHCK0L1I.gif" class="w-40" />
</div>
<div align="center">Chat Models <br/> Yi-6B/9B/34B Chat</div>
</a>
<a
href="https://www.01.ai/"
class="block overflow-hidden"
style="text-decoration: none;"
>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center bg-[#FFFFFF]">
<img alt="" src="https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/7D6SjExHLIO1cH0tmQyxh.gif" class="w-40" />
</div>
<div align="center">Multimodal Models <br/> Yi-VL-6B/34B</div>
</a>
</div>
Welcome to Yi! 😘
The Yi model family is a series of language and multimodal models. It is built on the 6B and 34B pretrained language models, which are then extended to **chat** models, 200K **long context** models, depth-upscaled models (9B), and **vision**-language models.
# ⭐️ Highlights
- **Strong performance**: Yi-1.5-34B is on par with or surpasses GPT-3.5 in commonsense reasoning, college exams, math, coding, reading comprehension, and human preference win-rate on multiple evaluation benchmarks.
- **Cost-effective**: The 6B, 9B, and 34B models can run inference on consumer-grade hardware (such as an RTX 4090); see the sketch after this list. Additionally, the 34B model is large enough to exhibit complex reasoning and emergent abilities, offering a good performance-cost balance.
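To illustrate the consumer-hardware point above, here is a minimal inference sketch using the Hugging Face `transformers` API. The `01-ai/Yi-6B-Chat` checkpoint name, precision choice, and generation settings are assumptions for illustration, not prescribed by this page.

```python
# Minimal sketch: run a Yi chat model on a single consumer GPU with transformers.
# Assumes the 01-ai/Yi-6B-Chat checkpoint, plus torch and accelerate installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B-Chat"  # assumed checkpoint name for this example
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 6B model within a 24 GB card
    device_map="auto",           # places weights on the available GPU(s)
)

# Build a chat prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Explain what a 200K context window means."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to the 9B and 34B checkpoints; the larger models mainly need more GPU memory or quantization.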
# 📊 Benchmarks
TBD
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/kVHWz7yEY3UJlcRD2nwf2.png" style="width: 65%">
</div>
# 📰 News
- 2024-03-16: The Yi-9B-200K is open-sourced and available to the public.
- 2024-03-08: Yi Tech Report is published!
- 2024-03-07: The long-text capability of Yi-34B-200K has been enhanced.
For the complete news history, see [News](xx.md).