Text Generation · Transformers · PyTorch · English · llama · text-generation-inference
chaoscodes committed · Commit a7b785a · verified · 1 Parent(s): f2e242f

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -28,7 +28,7 @@ We adopted exactly the same architecture and tokenizer as Llama 2. This means Ti
 
 In this project, rather than only training a single TinyLlama model, we first train TinyLlama on a corpus of 1.5 trillion tokens to obtain foundational language capabilities. Subsequently, we take this model and turn it into three different models by continual pre-training with three distinct data sampling strategies. For a visual representation of this process, please refer to the figure below.
 
-![image-20240401225128124](/Users/zengguangtao/Library/Application Support/typora-user-images/image-20240401225128124.png)
+![image-20240401225128124](overview.png)
 
 ### Pretraining
 