---
license: llama3
language:
- en
- ja
metrics:
- comet
pipeline_tag: translation
tags:
- machine translation
- MT
- llama-3
---

# Overview
The model is based on rinna's [rinna/llama-3-youko-8b](https://huggingface.co/rinna/llama-3-youko-8b), fine-tuned with LoRA on a small number of English-Japanese parallel sentences. It achieves a COMET score (Unbabel/wmt22-comet-da) of 0.9011 on the FLORES-200 devtest set.
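
As a rough illustration of this setup, the sketch below configures a LoRA adapter with the `peft` library. The rank, alpha, dropout, and target modules are assumptions (the checkpoint name `lora-16` hints at rank 16), not the author's exact training settings.

~~~~python
# pip install peft transformers
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical LoRA configuration: only the rank is hinted at by the
# checkpoint name ("lora-16"); everything else is a common default.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("rinna/llama-3-youko-8b")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
~~~~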

* **Model architecture**

    A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for architecture details.

* **Training: Built with Meta Llama 3**

    The base model was initialized with [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) and continually trained on around **22B** tokens from a mixture of the following corpora:
    - [Japanese CC-100](https://huggingface.co/datasets/cc100)
    - [Japanese C4](https://huggingface.co/datasets/mc4)
    - [Japanese OSCAR](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0)
    - [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
    - [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
    - rinna's curated Japanese dataset

* **Contributors**
    - [Koh Mitsuda](https://huggingface.co/mitsu-koh)
    - [Kei Sawada](https://huggingface.co/keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html) for benchmark results of the base model.
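
The COMET figure quoted in the overview can in principle be reproduced with the `unbabel-comet` package; below is a minimal sketch, where the sentence pairs are illustrative placeholders rather than the actual FLORES-200 data.

~~~~python
# pip install unbabel-comet
from comet import download_model, load_from_checkpoint

# Load the same metric used above: Unbabel/wmt22-comet-da.
ckpt = download_model("Unbabel/wmt22-comet-da")
comet = load_from_checkpoint(ckpt)

# Each item pairs a source sentence, the system's translation, and a reference.
data = [
    {
        "src": "LLMs Are Here but Not Quite There Yet",
        "mt": "LLMは登場したが、まだ完全ではない",    # model output (placeholder)
        "ref": "LLMは登場したものの、まだ道半ばだ",  # reference (placeholder)
    },
]
result = comet.predict(data, batch_size=8, gpus=1)
print(result.system_score)  # corpus-level COMET score
~~~~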

---

# How to use the model

The model expects a simple instruction format: an instruction line ("### Translate the following English document into Japanese:"), the English source text, and the response template ("### Japanese:"), after which the model generates the translation.

~~~~python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "### Japanese:" — marks where the translation should begin.
response_template = "\n### 日本語:\n"
# "### Translate the following English document into Japanese:"
prefix = "### 次の英語の文書を日本語に翻訳してください:\n"


def create_input(text, tokenizer):
    # Wrap the English source text in the instruction template.
    text = f"{prefix}{text}{response_template}"
    input_ids = tokenizer.encode(text, return_tensors="pt")
    return input_ids


# Note: this is the author's local path to the merged LoRA checkpoint;
# replace it with the model ID of this repository.
model_id = "lyu/MT/output/llama3-sft-lora-16-NLLB-100k-run2/merge"
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2"
).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

en = "LLMs Are Here but Not Quite There Yet"
input_ids = create_input(en, tokenizer).to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    num_beams=5,
    do_sample=False,
    early_stopping=True,
)
# Decode only the newly generated tokens, i.e. the translation.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
~~~~
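
Note that `attn_implementation="flash_attention_2"` requires the `flash-attn` package and a compatible GPU; dropping the argument falls back to the default attention implementation. With `num_beams=5` and `do_sample=False`, decoding is deterministic beam search.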

---

# Tokenization
The model uses the original [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) tokenizer.
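
Since the tokenizer is unchanged, it can equally be loaded from the base checkpoint, for example:

~~~~python
from transformers import AutoTokenizer

# Identical to the tokenizer shipped with this model.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
print(tokenizer.tokenize("LLMs Are Here but Not Quite There Yet"))
~~~~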

---

# References
```bibtex
@article{llama3modelcard,
    title = {Llama 3 Model Card},
    author = {AI@Meta},
    year = {2024},
    url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
@software{gpt-neox-library,
    title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
    author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
    doi = {10.5281/zenodo.5879544},
    month = {8},
    year = {2021},
    version = {0.0.1},
    url = {https://www.github.com/eleutherai/gpt-neox}
}
@misc{rinna-llama-3-youko-8b,
    title = {rinna/llama-3-youko-8b},
    author = {Mitsuda, Koh and Sawada, Kei},
    url = {https://huggingface.co/rinna/llama-3-youko-8b}
}
@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    url = {https://arxiv.org/abs/2404.01657}
}
```

---

# License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)