---
license: other
license_name: yi-license
license_link: LICENSE
widget:
  - example_title: "Yi-34B-Chat"
    text: "hi"
    output:
      text: " Hello! How can I assist you today?"
  - example_title: "Yi-34B"
    text: "There's a place where time stands still. A place of breath taking wonder, but also"
    output:
      text: " an eerie sense that something is just not right…\nBetween the two worlds lies The Forgotten Kingdom - home to creatures long since thought extinct and ancient magic so strong it defies belief! Only here can you find what has been lost for centuries: An Elixir Of Life which will restore youth and vitality if only those who seek its power are brave enough to face up against all manner of dangers lurking in this mysterious land! But beware; some say there may even exist powerful entities beyond our comprehension whose intentions towards humanity remain unclear at best ---- they might want nothing more than destruction itself rather then anything else from their quest after immortality (and maybe someone should tell them about modern medicine)? In any event though – one thing remains true regardless : whether or not success comes easy depends entirely upon how much effort we put into conquering whatever challenges lie ahead along with having faith deep down inside ourselves too ;) So let's get started now shall We?"
pipeline_tag: text-generation
---
🤗 Hugging Face • 🤖 ModelScope • ✡️ WiseModel
👋 Join us on 💬 WeChat (Chinese)!
- The multimodal models, Yi-VL-34B and Yi-VL-6B, are open-sourced and available to the public. Yi-VL-34B has ranked first among all existing open-source models in the latest benchmarks, including MMMU and CMMMU (based on data available up to January 2024).
- The long-context base models, Yi-6B-200K and Yi-34B-200K, are open-sourced and available to the public.
- The base models, Yi-6B and Yi-34B, are open-sourced and available to the public.

Make sure you've installed Docker and nvidia-container-toolkit. Then start a container with your local model directory mounted:
```bash
docker run -it --gpus all \
  -v <your-model-path>:/models \
  ghcr.io/01-ai/yi:latest
```
Alternatively, you can pull the Yi Docker image from `registry.lingyiwanwu.com/ci/01-ai/yi:latest`.
You can perform inference with the Yi chat or base models as follows.
The steps are similar to "pip - Perform inference with Yi chat model". The only difference is that you set `model_path = '<your-model-mount-path>'` instead of `model_path = '<your-model-path>'`.
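For reference, here is a minimal sketch of that in-container chat call using the standard transformers API. The mount path is a placeholder, and the `max_new_tokens` setting is an illustrative assumption rather than something this card specifies:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Inside the container, point at the mount path instead of the host path.
model_path = '<your-model-mount-path>'

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# device_map="auto" places the weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
).eval()

# Chat models expect the chat template bundled with the tokenizer.
messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids.to("cuda"), max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)  # e.g. "Hello! How can I assist you today?"
```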
The steps are similar to "pip - Perform inference with Yi base model". The only difference is that you set `--model <your-model-mount-path>` instead of `--model <your-model-path>`.
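Since the base-model quick start drives a script through a `--model` flag, here is a rough Python equivalent as a sketch; it assumes plain text completion with transformers and is not the repository's actual demo script:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the mount path when running inside the Docker container.
model_path = '<your-model-mount-path>'

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
).eval()

# Base models do plain text completion; no chat template is applied.
prompt = "There's a place where time stands still. A place of breathtaking wonder, but also"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```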
You can use conda-lock to generate fully reproducible lock files for conda environments, and micromamba to install the dependencies. Run `micromamba install -y -n yi -f conda-lock.yml` to create a conda environment named `yi` and install the necessary dependencies.