Chinese-Alpaca-2-13B

This is the full Chinese-Alpaca-2-13B model, which can be loaded directly for inference and full-parameter training.

Description of Chinese-LLaMA-Alpaca-2

This project is based on Llama-2, released by Meta, and is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source Chinese LLaMA-2 (the foundation model) and Alpaca-2 (the instruction-following model). These models extend the original Llama-2 with an expanded and optimized Chinese vocabulary. We performed incremental pre-training on large-scale Chinese data, further improving the models' fundamental understanding of Chinese semantics and yielding a significant performance improvement over the first-generation models. These models support a 4K context, which can be expanded up to 18K+ using the NTK method.
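As a hedged sketch (not taken from the project documentation): one way to apply NTK-style context extension with ๐Ÿค—transformers (version 4.31 or later) is dynamic RoPE scaling via the `rope_scaling` argument. The scaling factor below is an illustrative assumption, not a value recommended by the project.

```python
# Sketch: extending the 4K context with dynamic NTK RoPE scaling in transformers.
# Assumptions: transformers >= 4.31; the scaling factor (4.0) is illustrative,
# not a value recommended by the Chinese-LLaMA-Alpaca-2 project.

# With "dynamic" scaling, the rotary base is rescaled on the fly once the
# input grows past the trained context length.
NTK_SCALING = {"type": "dynamic", "factor": 4.0}  # roughly 4K * 4 = 16K tokens

def load_long_context_model(model_id: str = "hfl/chinese-alpaca-2-13b"):
    """Load the model with dynamic NTK scaling enabled (downloads ~25 GB of weights)."""
    import torch
    from transformers import AutoModelForCausalLM

    return AutoModelForCausalLM.from_pretrained(
        model_id,
        rope_scaling=NTK_SCALING,
        torch_dtype=torch.float16,
        device_map="auto",
    )
```

For llama.cpp, the equivalent knobs are the RoPE/NTK-related command-line options; see the project repository for its recommended long-context setup.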

The main contents of this project include:

  • ๐Ÿš€ Extended the Chinese vocabulary beyond Llama-2 and open-sourced the Chinese LLaMA-2 and Alpaca-2 LLMs.
  • ๐Ÿš€ Open-sourced the pre-training and instruction fine-tuning (SFT) scripts for further tuning on your own data.
  • ๐Ÿš€ Quickly deploy and experience the quantized LLMs on the CPU/GPU of a personal PC.
  • ๐Ÿš€ Support for LLaMA ecosystem tools such as ๐Ÿค—transformers, llama.cpp, text-generation-webui, LangChain, vLLM, etc.
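As a minimal inference sketch with ๐Ÿค—transformers: the Llama-2 `[INST]` chat template below matches what Alpaca-2 instruction models expect, while the system prompt and generation settings are illustrative assumptions, not project-mandated values.

```python
# Sketch: direct inference with transformers. The Llama-2 chat template below
# matches what Alpaca-2 instruction models expect; the system prompt and
# generation settings are assumptions for illustration.

DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant. ไฝ ๆ˜ฏไธ€ไธชไนไบŽๅŠฉไบบ็š„ๅŠฉๆ‰‹ใ€‚"

def build_prompt(instruction: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> str:
    """Wrap an instruction in the Llama-2 [INST] chat template."""
    return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{instruction} [/INST]"

def generate(instruction: str, model_id: str = "hfl/chinese-alpaca-2-13b") -> str:
    """Load the model and answer one instruction (downloads ~25 GB of weights)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens and keep only the newly generated answer.
    answer_ids = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(answer_ids, skip_special_tokens=True).strip()
```

For the fine-tuning scripts and quantized (llama.cpp) deployment, follow the project repository linked below rather than this sketch.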

Please refer to https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/ for details.
