Taka008 committed on
Commit 91cc6ea (verified)
1 Parent(s): 9eb06d0

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -23,16 +23,16 @@ inference: false
 ---
 # llm-jp-3.1-1.8b-instruct4
 
-LLM-jp-3.1 is the series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
+LLM-jp-3.1 is a series of large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
 
-The LLM-jp-3.1 series consists of models that have undergone mid-training ([instruction pre-training](https://aclanthology.org/2024.emnlp-main.148/)) based on the LLM-jp-3 series, resulting in a significant improvement in instruction-following capabilities compared to the original LLM-jp-3 models.
+Building upon the LLM-jp-3 series, the LLM-jp-3.1 models incorporate mid-training ([instruction pre-training](https://aclanthology.org/2024.emnlp-main.148/)), which significantly enhances their instruction-following capabilities compared to the original LLM-jp-3 models.
 
-This repository provides **llm-jp-3.1-1.8b-instruct4** model.
+This repository provides the **llm-jp-3.1-1.8b-instruct4** model.
 For an overview of the LLM-jp-3.1 models across different parameter sizes, please refer to:
 - [LLM-jp-3.1 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-31-pre-trained-models-68368787c32e462c40a45f7b)
 - [LLM-jp-3.1 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-31-fine-tuned-models-68368681b9b35de1c4ac8de4).
 
-For more details on training and evaluation results, please refer to [this blog post]() (in Japanese).
+For more details on the training procedures and evaluation results, please refer to [this blog post]() (in Japanese).
 
 Checkpoints format: Hugging Face Transformers
 
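Since the README states the checkpoints use the Hugging Face Transformers format, a minimal usage sketch can illustrate how an instruct checkpoint like this is typically loaded and queried. This is an assumption-laden sketch, not part of the commit: the Hub repo id `llm-jp/llm-jp-3.1-1.8b-instruct4` and the presence of a chat template in the tokenizer are inferred, not stated in the diff.

```python
# Minimal usage sketch for an instruct-tuned Transformers checkpoint.
# Assumptions (not stated in the diff): the model is published on the Hub
# under the repo id below, and its tokenizer ships a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "llm-jp/llm-jp-3.1-1.8b-instruct4"  # assumed Hub repo id


def chat(user_message: str, max_new_tokens: int = 128) -> str:
    """Run one chat turn through the model and return the decoded reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    # Format the single-turn conversation with the tokenizer's chat template.
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_message}],
        add_generation_prompt=True,
        return_tensors="pt",
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output_ids[0, input_ids.shape[1]:], skip_special_tokens=True
    )
```

Loading and generation are kept in one function for clarity; in practice the model and tokenizer would be loaded once and reused across calls.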