Taishi-N324 committed
Commit 6870534 · verified · 1 Parent(s): 4beef28

Upload 6 files

Files changed (7)
  1. .gitattributes +1 -0
  2. GEMMA_TERMS_OF_USE.md +77 -0
  3. LICENSE +49 -0
  4. Notice +2 -0
  5. README.md +234 -0
  6. USE_POLICY.md +73 -0
  7. logo.png +3 -0
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ logo.png filter=lfs diff=lfs merge=lfs -text
GEMMA_TERMS_OF_USE.md ADDED
@@ -0,0 +1,77 @@
Gemma Terms of Use

Last modified: April 1, 2024

By using, reproducing, modifying, distributing, performing or displaying any portion or element of Gemma, Model Derivatives including via any Hosted Service, (each as defined below) (collectively, the "Gemma Services") or otherwise accepting the terms of this Agreement, you agree to be bound by this Agreement.

Section 1: DEFINITIONS
1.1 Definitions
(a) "Agreement" or "Gemma Terms of Use" means these terms and conditions that govern the use, reproduction, Distribution or modification of the Gemma Services and any terms and conditions incorporated by reference.

(b) "Distribution" or "Distribute" means any transmission, publication, or other sharing of Gemma or Model Derivatives to a third party, including by providing or making Gemma or its functionality available as a hosted service via API, web access, or any other electronic or remote means ("Hosted Service").

(c) "Gemma" means the set of machine learning language models, trained model weights and parameters identified at ai.google.dev/gemma, regardless of the source that you obtained it from.

(d) "Google" means Google LLC.

(e) "Model Derivatives" means all (i) modifications to Gemma, (ii) works based on Gemma, or (iii) any other machine learning model which is created by transfer of patterns of the weights, parameters, operations, or Output of Gemma, to that model in order to cause that model to perform similarly to Gemma, including distillation methods that use intermediate data representations or methods based on the generation of synthetic data Outputs by Gemma for training that model. For clarity, Outputs are not deemed Model Derivatives.

(f) "Output" means the information content output of Gemma or a Model Derivative that results from operating or otherwise using Gemma or the Model Derivative, including via a Hosted Service.

1.2
As used in this Agreement, "including" means "including without limitation".

Section 2: ELIGIBILITY AND USAGE
2.1 Eligibility
You represent and warrant that you have the legal capacity to enter into this Agreement (including being of sufficient age of consent). If you are accessing or using any of the Gemma Services for or on behalf of a legal entity, (a) you are entering into this Agreement on behalf of yourself and that legal entity, (b) you represent and warrant that you have the authority to act on behalf of and bind that entity to this Agreement and (c) references to "you" or "your" in the remainder of this Agreement refers to both you (as an individual) and that entity.

2.2 Use
You may use, reproduce, modify, Distribute, perform or display any of the Gemma Services only in accordance with the terms of this Agreement, and must not violate (or encourage or permit anyone else to violate) any term of this Agreement.

Section 3: DISTRIBUTION AND RESTRICTIONS
3.1 Distribution and Redistribution
You may reproduce or Distribute copies of Gemma or Model Derivatives if you meet all of the following conditions:

1. You must include the use restrictions referenced in Section 3.2 as an enforceable provision in any agreement (e.g., license agreement, terms of use, etc.) governing the use and/or distribution of Gemma or Model Derivatives and you must provide notice to subsequent users you Distribute to that Gemma or Model Derivatives are subject to the use restrictions in Section 3.2.
2. You must provide all third party recipients of Gemma or Model Derivatives a copy of this Agreement.
3. You must cause any modified files to carry prominent notices stating that you modified the files.
4. All Distributions (other than through a Hosted Service) must be accompanied by a "Notice" text file that contains the following notice: "Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms".
You may add your own intellectual property statement to your modifications and, except as set forth in this Section, may provide additional or different terms and conditions for use, reproduction, or Distribution of your modifications, or for any such Model Derivatives as a whole, provided your use, reproduction, modification, Distribution, performance, and display of Gemma otherwise complies with the terms and conditions of this Agreement. Any additional or different terms and conditions you impose must not conflict with the terms of this Agreement.

3.2 Use Restrictions
You must not use any of the Gemma Services:

1. for the restricted uses set forth in the Gemma Prohibited Use Policy at ai.google.dev/gemma/prohibited_use_policy ("Prohibited Use Policy"), which is hereby incorporated by reference into this Agreement; or
2. in violation of applicable laws and regulations.
To the maximum extent permitted by law, Google reserves the right to restrict (remotely or otherwise) usage of any of the Gemma Services that Google reasonably believes are in violation of this Agreement.

3.3 Generated Output
Google claims no rights in Outputs you generate using Gemma. You and your users are solely responsible for Outputs and their subsequent uses.

Section 4: ADDITIONAL PROVISIONS
4.1 Updates
Google may update Gemma from time to time.

4.2 Trademarks
Nothing in this Agreement grants you any rights to use Google's trademarks, trade names, logos or to otherwise suggest endorsement or misrepresent the relationship between you and Google. Google reserves any rights not expressly granted herein.

4.3 DISCLAIMER OF WARRANTY
UNLESS REQUIRED BY APPLICABLE LAW, THE GEMMA SERVICES, AND OUTPUTS, ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING, REPRODUCING, MODIFYING, PERFORMING, DISPLAYING OR DISTRIBUTING ANY OF THE GEMMA SERVICES OR OUTPUTS AND ASSUME ANY AND ALL RISKS ASSOCIATED WITH YOUR USE OR DISTRIBUTION OF ANY OF THE GEMMA SERVICES OR OUTPUTS AND YOUR EXERCISE OF RIGHTS AND PERMISSIONS UNDER THIS AGREEMENT.

4.4 LIMITATION OF LIABILITY
TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), PRODUCT LIABILITY, CONTRACT, OR OTHERWISE, UNLESS REQUIRED BY APPLICABLE LAW, SHALL GOOGLE OR ITS AFFILIATES BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY, CONSEQUENTIAL, OR PUNITIVE DAMAGES, OR LOST PROFITS OF ANY KIND ARISING FROM THIS AGREEMENT OR RELATED TO, ANY OF THE GEMMA SERVICES OR OUTPUTS EVEN IF GOOGLE OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

4.5 Term, Termination, and Survival
The term of this Agreement will commence upon your acceptance of this Agreement (including acceptance by your use, modification, or Distribution, reproduction, performance or display of any portion or element of the Gemma Services) and will continue in full force and effect until terminated in accordance with the terms of this Agreement. Google may terminate this Agreement if you are in breach of any term of this Agreement. Upon termination of this Agreement, you must delete and cease use and Distribution of all copies of Gemma and Model Derivatives in your possession or control. Sections 1, 2.1, 3.3, 4.2 to 4.9 shall survive the termination of this Agreement.

4.6 Governing Law and Jurisdiction
This Agreement will be governed by the laws of the State of California without regard to choice of law principles. The UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The state and federal courts of Santa Clara County, California shall have exclusive jurisdiction of any dispute arising out of this Agreement.

4.7 Severability
If any provision of this Agreement is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.

4.8 Entire Agreement
This Agreement states all the terms agreed between the parties and supersedes all other agreements between the parties as of the date of acceptance relating to its subject matter.

4.9 No Waiver
Google will not be treated as having waived any rights by not exercising (or delaying the exercise of) any rights under this Agreement.
LICENSE ADDED
@@ -0,0 +1,49 @@
**LLAMA 3.3 COMMUNITY LICENSE AGREEMENT**

Llama 3.3 Version Release Date: December 6, 2024

“**Agreement**” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.

“**Documentation**” means the specifications, manuals and documentation accompanying Llama 3.3 distributed by Meta at [https://www.llama.com/docs/overview](https://www.llama.com/docs/overview).

“**Licensee**” or “**you**” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.

“**Llama 3.3**” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at [https://www.llama.com/llama-downloads](https://www.llama.com/llama-downloads).

“**Llama Materials**” means, collectively, Meta’s proprietary Llama 3.3 and Documentation (and any portion thereof) made available under this Agreement.

“**Meta**” or “**we**” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).

By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.

1\. **License Rights and Redistribution**.

a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.

b. Redistribution and Use.

i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.

ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.

iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.3 is licensed under the Llama 3.3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”

iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at [https://www.llama.com/llama3\_3/use-policy](https://www.llama.com/llama3_3/use-policy)), which is hereby incorporated by reference into this Agreement.

2\. **Additional Commercial Terms**. If, on the Llama 3.3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

3\. **Disclaimer of Warranty**. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.

4\. **Limitation of Liability**. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5\. **Intellectual Property**.

a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at [https://about.meta.com/brand/resources/meta/company-brand/](https://about.meta.com/brand/resources/meta/company-brand/)). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.

b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.

c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.

6\. **Term and Termination**. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.

7\. **Governing Law and Jurisdiction**. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
Notice ADDED
@@ -0,0 +1,2 @@
Llama 3.3 is licensed under the Llama 3.3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms
README.md ADDED
@@ -0,0 +1,234 @@
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license:
- llama3.3
- gemma
model_type: llama
---

# Llama 3.3 Swallow - Built with Llama

Llama 3.3 Swallow is a large language model (70B) built by continual pre-training on the [Meta Llama 3.3](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) model.
Llama 3.3 Swallow enhances the Japanese language capabilities of the original Llama 3.3 while retaining its English language capabilities.
For continual pre-training, we used approximately 315 billion tokens sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding content, among other sources (see the Training Datasets section of the base model).
The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on synthetic data specially built for Japanese.
See the Swallow Model Index section to find other model variants.

## Release History
- **March 10, 2025**: Released [Llama-3.3-Swallow-70B-Instruct-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4) and [Llama-3.3-Swallow-70B-v0.4](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-v0.4).
- **December 30, 2024**: Released [Llama-3.1-Swallow-70B-Instruct-v0.3](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3).
- **December 23, 2024**: Released [Llama-3.1-Swallow-8B-Instruct-v0.3](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3).
- **November 11, 2024**: Released [Llama-3.1-Swallow-8B-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.2) and [Llama-3.1-Swallow-8B-Instruct-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2).
- **October 8, 2024**: Released [Llama-3.1-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1), [Llama-3.1-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1), [Llama-3.1-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1), and [Llama-3.1-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1).

## Swallow Model Index
|Model|Llama-3.1-Swallow v0.1|Llama-3.1-Swallow-Instruct v0.1|Llama-3.1-Swallow v0.2|Llama-3.1-Swallow-Instruct v0.2|Llama-3.1-Swallow-Instruct v0.3|Llama-3.3-Swallow v0.4|Llama-3.3-Swallow-Instruct v0.4|
|---|---|---|---|---|---|---|---|
|8B| [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.2) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3) | | |
|70B| [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1) | | | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-v0.4) | [🤗 HuggingFace](https://huggingface.co/tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4) |

![logo](./logo.png)

The website [https://swallow-llm.github.io/](https://swallow-llm.github.io/index.en.html) provides large language models developed by the Swallow team.

## Model Details

* **Model type**: Please refer to the [Llama 3.1 MODEL_CARD](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for details on the model architecture.
* **Language(s)**: Japanese, English
* **Library**: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
* **Tokenizer**: Please refer to the [Llama 3.1 blog](https://ai.meta.com/blog/meta-llama-3-1) for details on the tokenizer.
* **Contact**: swallow[at]nlp.c.titech.ac.jp

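As a minimal, hedged usage sketch: the front matter declares the `transformers` library and the `text-generation` pipeline tag, so the checkpoint should load as below. The repository ID is an assumption based on the Swallow Model Index above, and the decoding settings are illustrative only.

```python
# Minimal text-generation sketch. Assumptions: the repo ID matches the Model
# Index above, and bfloat16 with automatic device mapping is acceptable for
# a 70B checkpoint sharded across several GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/Llama-3.3-Swallow-70B-v0.4"  # assumed target repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to fit GPU memory
    device_map="auto",           # shard layers across available devices
)

prompt = "日本で一番高い山は"  # "The highest mountain in Japan is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
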
## Model Performance

### Japanese tasks

|Model|JCom.|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|JMMLU|JHumanEval|Ja Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|5-shot|0-shot| |
| |EM acc|Char-F1|Char-F1|Char-F1|ROUGE-2|EM acc|BLEU|BLEU|EM acc|pass@1| |
| Qwen2-72B | 0.960 | 0.620 | 0.561 | 0.926 | 0.238 | 0.768 | 0.275 | 0.241 | 0.782 | 0.561 | 0.593 |
| Qwen2.5-72B | **0.972** | 0.611 | 0.619 | **0.930** | 0.279 | **0.828** | 0.287 | 0.252 | **0.804** | **0.648** | 0.623 |
| Sarashina2-70B | 0.929 | **0.717** | 0.668 | 0.929 | 0.190 | 0.488 | 0.313 | 0.243 | 0.592 | 0.235 | 0.530 |
| Llama 3 70B | 0.946 | 0.606 | 0.589 | 0.922 | 0.228 | 0.664 | 0.286 | 0.252 | 0.705 | 0.491 | 0.569 |
| Llama 3.1 70B | 0.946 | 0.616 | 0.603 | 0.925 | 0.228 | 0.672 | 0.287 | 0.257 | 0.669 | 0.462 | 0.566 |
| Llama 3 Youko 70B | 0.946 | 0.602 | 0.610 | 0.923 | 0.242 | 0.684 | 0.292 | 0.250 | 0.704 | 0.463 | 0.571 |
| Llama 3 Swallow 70B | 0.968 | 0.675 | 0.684 | 0.923 | 0.239 | 0.708 | 0.307 | 0.255 | 0.706 | 0.477 | 0.594 |
| Llama 3.1 Swallow 70B | 0.955 | 0.645 | 0.678 | 0.923 | 0.272 | 0.684 | 0.320 | 0.259 | 0.709 | 0.487 | 0.593 |
| **Llama 3.3 Swallow 70B v0.4** | 0.967 | 0.671 | **0.732** | 0.924 | **0.283** | 0.776 | **0.327** | **0.260** | 0.742 | 0.604 | **0.629** |

### English tasks

|Model|OpenBookQA|TriviaQA|HellaSWAG|SQuAD2.0|XWINO|MMLU|GSM8K|MATH|BBH|HumanEval|En Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|4-shot|5-shot|4-shot|4-shot|3-shot|0-shot| |
| |Acc|EM acc|Acc|EM acc|Acc|Acc|EM acc|CoT EM Acc|CoT EM Acc|pass@1| |
| Qwen2-72B | 0.418 | 0.790 | 0.677 | 0.673 | 0.915 | 0.842 | **0.893** | 0.560 | 0.643 | 0.608 | 0.702 |
| Qwen2.5-72B | 0.416 | 0.760 | 0.685 | **0.693** | 0.901 | **0.861** | 0.870 | **0.626** | 0.727 | 0.554 | 0.709 |
| Sarashina2-70B | 0.388 | 0.537 | 0.628 | 0.675 | 0.917 | 0.630 | 0.011 | 0.206 | 0.639 | 0.281 | 0.491 |
| Llama 3 70B | 0.440 | 0.826 | **0.690** | 0.618 | 0.920 | 0.787 | 0.801 | 0.446 | **0.829** | 0.527 | 0.689 |
| Llama 3.1 70B | **0.450** | **0.829** | **0.690** | 0.605 | 0.920 | 0.786 | 0.798 | 0.434 | 0.655 | 0.546 | 0.671 |
| Llama 3 Youko 70B | 0.436 | **0.829** | **0.690** | 0.610 | 0.922 | 0.785 | 0.797 | 0.408 | 0.826 | 0.412 | 0.671 |
| Llama 3 Swallow 70B | 0.430 | 0.823 | 0.682 | 0.628 | 0.923 | 0.774 | 0.817 | 0.414 | 0.734 | 0.499 | 0.672 |
| Llama 3.1 Swallow 70B v0.1 | 0.428 | 0.826 | **0.690** | 0.612 | **0.927** | 0.772 | 0.809 | 0.380 | 0.806 | 0.540 | 0.679 |
| **Llama 3.3 Swallow 70B v0.4** | 0.424 | 0.817 | 0.683 | 0.641 | 0.920 | 0.802 | 0.863 | 0.496 | 0.754 | **0.709** | **0.711** |

## Evaluation Benchmarks

### Japanese evaluation benchmarks

We used llm-jp-eval (v1.3.0), the JP Language Model Evaluation Harness (commit #9b42d41), and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:

- Multiple-choice question answering (JCommonsenseQA [Kurihara et al., 2022])
- Open-ended question answering (JEMHopQA [Ishii et al., 2024])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara et al., 2022])
- Automatic summarization (XL-Sum [Hasan et al., 2021])
- Machine translation (WMT2020 ja-en [Barrault et al., 2020])
- Machine translation (WMT2020 en-ja [Barrault et al., 2020])
- Mathematical reasoning (MGSM [Shi et al., 2023])
- Academic exams (JMMLU [Yin et al., 2024])
- Code generation (JHumanEval [Sato et al., 2024])

### English evaluation benchmarks

We used the Language Model Evaluation Harness (v0.4.2) and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:

- Multiple-choice question answering (OpenBookQA [Mihaylov et al., 2018])
- Open-ended question answering (TriviaQA [Joshi et al., 2017])
- Machine reading comprehension (SQuAD2 [Rajpurkar et al., 2018])
- Commonsense reasoning (XWINO [Tikhonov and Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers et al., 2019])
- Mathematical reasoning (GSM8K [Cobbe et al., 2021])
- Mathematical reasoning (MATH [Hendrycks et al., 2022][Lightman et al., 2024])
- Reasoning (BBH (BIG-Bench-Hard) [Suzgun et al., 2023])
- Academic exams (MMLU [Hendrycks et al., 2021])
- Code generation (HumanEval [Chen et al., 2021])

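For orientation, here is a hedged sketch of scoring one English benchmark with the harness named above; v0.4.x of lm-evaluation-harness exposes a `simple_evaluate` Python API. The model ID, dtype, and task choice are assumptions for illustration, and the exact harness configuration the team used is not published here.

```python
# Hedged sketch: reproduce a single English benchmark number with the
# Language Model Evaluation Harness (lm-eval v0.4.x). Task name and
# few-shot count follow the English table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tokyotech-llm/Llama-3.3-Swallow-70B-v0.4,dtype=bfloat16",
    tasks=["gsm8k"],  # mathematical reasoning, 4-shot in the table
    num_fewshot=4,
)
print(results["results"]["gsm8k"])  # dict of metric name -> score
```
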
## Training Datasets

### Continual Pre-Training
The following datasets were used for continual pre-training.

- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
- [Dclm-baseline-1.0](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0)
- [English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [FineMath-4+](https://huggingface.co/datasets/HuggingFaceTB/finemath)
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Laboro ParaCorpus](https://github.com/laboroai/Laboro-ParaCorpus)
- [Swallow Corpus Version 2](https://arxiv.org/abs/2404.17733) (filtered using the [Swallow Education Classifier (Wiki-based)](https://huggingface.co/tokyotech-llm/edu-classifier))
- [Swallow Corpus Version 2](https://arxiv.org/abs/2404.17733) (filtered using the [Swallow Education Classifier](https://huggingface.co/tokyotech-llm/edu-classifier))
- [Swallow Corpus Version 2](https://arxiv.org/abs/2404.17733) (synthetic QA-format)
- Swallow Code Version 0.3 (filtered from [The Stack v2 train smol ids](https://huggingface.co/datasets/bigcode/the-stack-v2-train-smol-ids) and refactored with Llama-3.3-70B-Instruct)

### Swallow Corpus Version 2

We built the Swallow Corpus by extracting high-quality Japanese texts from Common Crawl. In Version 2, we expanded the scope of the Common Crawl collection and modified the pipeline sequence to enable more flexible quality filtering.
For Llama 3.1 Swallow v0.2, we further refined our quality filtering and data sampling strategies, resulting in an even higher-quality selection of Japanese texts for pre-training.
For Llama 3.3 Swallow 70B v0.4, we generated synthetic QA-format text by using Gemma 2 27B IT to paraphrase educational web documents from our corpus, as sketched below.

Further details of the methodology and analysis will be provided in a forthcoming paper.

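The card names Gemma 2 27B IT as the paraphraser but does not publish the prompt, so the following is only an illustrative guess at that step; the prompt wording, generation settings, and function name are invented for the example.

```python
# Hedged sketch of the synthetic QA-format step: ask Gemma 2 27B IT to
# rewrite an educational web document as question-answer pairs. The prompt
# and decoding settings are illustrative, not the team's actual pipeline.
from transformers import pipeline

paraphraser = pipeline(
    "text-generation",
    model="google/gemma-2-27b-it",  # the paraphraser named in the card
    device_map="auto",
)

def to_qa_format(document: str) -> str:
    """Return the document rewritten as Japanese QA pairs (hypothetical helper)."""
    messages = [{
        "role": "user",
        # "Rewrite the following document as question-answer pairs,
        #  preserving its content."
        "content": "次の文書を、内容を保ったまま質問と回答のペアに書き換えてください。\n\n" + document,
    }]
    result = paraphraser(messages, max_new_tokens=512, return_full_text=False)
    return result[0]["generated_text"]
```
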
### Swallow Code Version 0.3

We built Swallow Code Version 0.3 by filtering [The Stack v2 train smol ids](https://huggingface.co/datasets/bigcode/the-stack-v2-train-smol-ids) and then refactoring the result with Llama-3.3-70B-Instruct.
During filtering, we removed code with syntax errors or a pylint score below seven. We have already released the filtered version as [Swallow Code Version 0.1](https://huggingface.co/datasets/tokyotech-llm/swallow-code-v0.1).
During refactoring, we prompted Llama-3.3-70B-Instruct to follow the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html) and good coding conventions.

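As an illustration of the filtering rule just described (a syntax check plus a pylint score threshold of seven), here is a hedged sketch; the helper names are invented, and the team's actual pipeline flags are not published.

```python
# Hedged sketch of the described filter: keep a Python source file only if it
# parses without syntax errors and pylint rates it at >= 7/10.
import ast
import re
import subprocess

def parses_cleanly(source: str) -> bool:
    """Reject sources that raise SyntaxError."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

def pylint_score(path: str) -> float:
    """Extract the 'rated at X/10' score from pylint's text report."""
    report = subprocess.run(["pylint", path], capture_output=True, text=True).stdout
    match = re.search(r"rated at (-?[\d.]+)/10", report)
    return float(match.group(1)) if match else 0.0

def keep(path: str) -> bool:
    """Apply both checks to one file."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    return parses_cleanly(source) and pylint_score(path) >= 7.0
```
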
## Risks and Limitations

The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

## Acknowledgements

We thank Meta Research for releasing Llama 3.3 under a generous open license.

We would like to thank Amazon Web Services (AWS) for providing access to SageMaker HyperPod, which enabled the training of the Llama 3.3 Swallow project.

We received various support, including:

+ AIST project: "Research and Development of Foundation Models for Generative AI in the Physical Domain"
+ NEDO project: "Development of Artificial Intelligence Application Technology to Support Judgment in Design Risk Assessment Work Based on the Perspective of Skilled Persons" (JPNP18002) of "Development of Integration Technology as the Core of Next Generation Artificial Intelligence and Robotics"
+ MEXT project: "Formation of R&D center to ensure transparency and reliability of generative AI models"
+ AIST program: [Large Generative AI Development Support Program](https://abci.ai/en/link/lfm_support_program.html)

## License

[META LLAMA 3.3 COMMUNITY LICENSE](https://www.llama.com/llama3_3/license/) and [Gemma Terms of Use](https://ai.google.dev/gemma/terms)

## Authors

Here are the team members:
- From [Institute of Science Tokyo Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
  - [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
  - [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
  - [Youmi Ma](https://www.nlp.c.titech.ac.jp/member/youmi.en.html)
  - [Koki Maeda](https://sites.google.com/view/silviase)
  - [Kakeru Hattori](https://aya-se.vercel.app/)
  - [Masanari Ohi](https://sites.google.com/view/masanariohi)
  - [Hinari Shimada](https://hinarishimada.github.io/portfolio)
  - [Taihei Shiotani](https://github.com/inatoihs)
  - [Koshiro Saito](https://sites.google.com/view/koshiro-saito)
- From [Institute of Science Tokyo YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
  - [Rio Yokota](https://twitter.com/rioyokota)
  - [Kazuki Fujii](https://twitter.com/okoge_kaz)
  - [Taishi Nakamura](https://twitter.com/Setuna7777_2)
  - [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
  - [Ishida Shigeki](https://www.wantedly.com/id/reborn27)
  - [Yukito Tajima](https://www.linkedin.com/in/yukito-tajima-51bbb2299)
  - [Masaki Kawamura](https://x.com/Masakichi333210)
- From [Artificial Intelligence Research Center, AIST, Japan](https://www.airc.aist.go.jp/en/teams/), the following members:
  - [Hiroya Takamura](https://sites.google.com/view/hjtakamura)

## How to cite

If you find our work helpful, please feel free to cite these papers.

```
@inproceedings{Fujii:COLM2024,
   title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities},
   author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki},
   booktitle={Proceedings of the First Conference on Language Modeling},
   series={COLM},
   pages={(to appear)},
   year={2024},
   month=oct,
   address={University of Pennsylvania, USA},
}

@inproceedings{Okazaki:COLM2024,
   title={Building a Large Japanese Web Corpus for Large Language Models},
   author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Rio Yokota and Sakae Mizuki},
   booktitle={Proceedings of the First Conference on Language Modeling},
   series={COLM},
   pages={(to appear)},
   year={2024},
   month=oct,
   address={University of Pennsylvania, USA},
}
```

### References

```tex
@misc{dubey2024llama3herdmodels,
      title={The Llama 3 Herd of Models},
      author={Abhimanyu Dubey and Abhinav Jauhri and Abhinav Pandey and Abhishek Kadian and Ahmad Al-Dahle and Aiesha Letman and Akhil Mathur and Alan Schelten and Amy Yang and Angela Fan et al.},
      year={2024},
      eprint={2407.21783},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2407.21783},
}
```
USE_POLICY.md ADDED
@@ -0,0 +1,73 @@
**Llama 3.3** **Acceptable Use Policy**

Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.3. If you access or use Llama 3.3, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3\_3/use-policy](https://www.llama.com/llama3_3/use-policy).

**Prohibited Uses**

We want everyone to use Llama 3.3 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.3 to:

1. Violate the law or others’ rights, including to:
   1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
      1. Violence or terrorism
      2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
      3. Human trafficking, exploitation, and sexual violence
      4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
      5. Sexual solicitation
      6. Any other criminal activity
   2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
   3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
   4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
   5. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
   6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
   7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
   8. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.3 related to the following:
   1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
   2. Guns and illegal weapons (including weapon development)
   3. Illegal drugs and regulated/controlled substances
   4. Operation of critical infrastructure, transportation technologies, or heavy machinery
   5. Self-harm or harm to others, including suicide, cutting, and eating disorders
   6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.3 related to the following:
   1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
   2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
   3. Generating, promoting, or further distributing spam
   4. Impersonating another individual without consent, authorization, or legal right
   5. Representing that the use of Llama 3.3 or outputs are human-generated
   6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.3

With respect to any multimodal models included in Llama 3.3, the rights granted under Section 1(a) of the Llama 3.3 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.

Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:

* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: [developers.facebook.com/llama\_output\_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.3: [email protected]
logo.png ADDED

Git LFS Details

  • SHA256: 94b72c879bf2551a0d37342663ce704088c193b457bcc8c32336a2e5cd46e469
  • Pointer size: 132 Bytes
  • Size of remote file: 2.19 MB